This document briefly describes how the experimentation tool can be used to perform an approximate analysis of a term. This is still preliminary work and not the overall procedure we intend to follow eventually.

Suppose you have entered a term and are on the initial page of the experimentation tool displaying its skeleton. First, solve any number of constraints with unify-β.

At some point, stop doing this, and click the "NLin*" link (second
link on the fourth line). This runs the implementation of an
algorithm which supposedly adds the minimal number of occurrences of
the "!" constructor to make the constraint set solvable without doing
non-trivial expansions.

This algorithm includes a test (which is not an occurs check) that
guarantees termination, but we have some worries that it compromises
completeness, i.e., that it may fail on some solvable examples; I
haven't yet had time to try to build a counterexample (be sure to
report it if you find such an example!).

The "NLin" link does just one step of what "NLin*" does repeatedly
until done. If you have access to the Church CVS repository, then
you can find a draft including the rewrite rules for NLin under
`papers/principal-typings/system-E-approximate-analysis`.

To proceed with the analysis, press "Flatten", which will erase all expansion variables (and simultaneously rename all type variables); obviously this is the part of the procedure that we want to get rid of eventually.
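
To make the Flatten step concrete, here is a rough sketch under an assumed term representation (the tool's real data structures are not shown in this note, and the constructor names below are my own invention); I treat the "w" that appears in the printed typings as the omega type, but the sketch goes through equally if it is some other opaque constant:

```python
# Hypothetical type representation: ('var', name) is a type variable,
# ('omega',) is the omega type (printed "w"), ('arrow', dom, cod) is a
# function type, ('evar', name, t) is an expansion variable applied to
# t, and ('bang', t) is the "!" constructor.

def flatten(t, renaming=None, counter=None):
    """Erase all expansion variables and rename type variables in
    order of first occurrence (a1, a2, ...)."""
    if renaming is None:
        renaming, counter = {}, [0]
    tag = t[0]
    if tag == 'var':
        if t[1] not in renaming:
            counter[0] += 1
            renaming[t[1]] = 'a%d' % counter[0]
        return ('var', renaming[t[1]])
    if tag == 'omega':
        return t
    if tag == 'arrow':
        return ('arrow', flatten(t[1], renaming, counter),
                         flatten(t[2], renaming, counter))
    if tag == 'evar':
        # The point of Flatten: drop the expansion variable.
        return flatten(t[2], renaming, counter)
    if tag == 'bang':
        # "!" is kept; only expansion variables are erased.
        return ('bang', flatten(t[1], renaming, counter))
    raise ValueError('unknown type constructor: %r' % (tag,))

def show(t):
    """Print a flattened type, with -> associating to the right."""
    if t[0] == 'var':
        return t[1]
    if t[0] == 'omega':
        return 'w'
    if t[0] == 'bang':
        return '!' + show(t[1])
    dom = show(t[1])
    if t[1][0] == 'arrow':
        dom = '(' + dom + ')'
    return dom + ' -> ' + show(t[2])

# The principal typing of M from the example below,
# e0 e0 e0 a0 -> e0 (w -> e0 (w -> e0 a0)):
t = ('arrow',
     ('evar', 'e0', ('evar', 'e0', ('evar', 'e0', ('var', 'a0')))),
     ('evar', 'e0', ('arrow', ('omega',),
                     ('evar', 'e0', ('arrow', ('omega',),
                                     ('evar', 'e0', ('var', 'a0')))))))
print(show(flatten(t)))  # a1 -> w -> w -> a1
```

On the principal typing from the example at the end of this note, this reproduces the legible typing that the tool's Flatten produces.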

At this point, you should have many constraints of the form "a1 <= a2", where a1 and a2 are distinct type variables. Click on "unify" to run first-order unification (which includes an occurs check, so you may be left with unsolved constraints; when this happens, you need to go back to before the use of NLin*, and do more unify-β steps).
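
For illustration, here is a minimal first-order unification sketch with an occurs check, over the same assumed representation of types as tuples; this is not the tool's actual implementation, just a reminder of why some constraint sets are left unsolved (a solution would need recursive types):

```python
# Types: ('var', name) is a type variable, ('arrow', dom, cod) is a
# function type.  Substitutions are dicts from variable names to types.

def walk(t, subst):
    """Follow substitution links until t is not a bound variable."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear in t (under subst)?"""
    t = walk(t, subst)
    if t[0] == 'var':
        return t[1] == v
    if t[0] == 'arrow':
        return occurs(v, t[1], subst) or occurs(v, t[2], subst)
    return False

def unify(a, b, subst):
    """Return an extended substitution, or None on failure (in
    particular when the occurs check fires)."""
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if a[0] == 'var':
        if occurs(a[1], b, subst):
            return None
        subst = dict(subst)
        subst[a[1]] = b
        return subst
    if b[0] == 'var':
        return unify(b, a, subst)
    if a[0] == 'arrow' and b[0] == 'arrow':
        subst = unify(a[1], b[1], subst)
        if subst is None:
            return None
        return unify(a[2], b[2], subst)
    return None

# Solvable: the equation a = b -> b.
print(unify(('var', 'a'), ('arrow', ('var', 'b'), ('var', 'b')), {}))

# Unsolvable without recursive types: c -> c against p -> (q -> p);
# the occurs check fires, so None is returned.
print(unify(('arrow', ('var', 'c'), ('var', 'c')),
            ('arrow', ('var', 'p'),
                      ('arrow', ('var', 'q'), ('var', 'p'))), {}))
```

The second call is exactly the kind of constraint left over when too few unify-β steps were done before NLin* and Flatten.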

So, suppose you start with the following term:

M = (\ f x -> f (f x)) (\ z w -> z)
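
As a behavioral sanity check (this does not involve the tool; it only confirms what M computes), M beta-reduces to a function of three arguments that returns the first, which is consistent with the legible flattened typing a1 -> w -> w -> a1 shown at the end of this note:

```python
# The term M = (\ f x -> f (f x)) (\ z w -> z), transcribed directly
# into Python lambdas (an illustration only; the tool manipulates the
# term symbolically, not by evaluation).
M = (lambda f: lambda x: f(f(x)))(lambda z: lambda w: z)

# M takes three arguments and returns the first, ignoring the others:
print(M('first')('second')('third'))  # first
```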

If you do 0 unify-β steps and then complete the analysis with NLin*, Flatten, and Unify, you will be left with one constraint which cannot be solved without recursive types. This happens because M is not simply typable: typing f (f x) forces f's domain and result types to be identified, and unifying the result with the type of \ z w -> z fails the occurs check.

If you do 1 unify-β step and complete the analysis, you get this typing:

a9 -> !a7 -> !a5 -> a9

If you do 2 unify-β steps, you get this more precise typing:

a4 -> w -> !a3 -> a4

If you do 3 unify-β steps (and don't run NLin*, Flatten, and Unify), all constraints are solved, and you get a principal typing of M:

e0 e0 e0 a0 -> e0 (w -> e0 (w -> e0 a0))

You can use "Flatten" to get a more legible typing:

a1 -> w -> w -> a1

Future work includes automatically solving constraints until the remaining (solved and unsolved) constraints cannot be solved further without exceeding some finite rank. We are also looking at how unify-β and NLin steps can be interleaved.