My research focuses on the further development of two reasoning tools: types and rewriting. Throughout this work, I am guided for the most part by one theme: making reasoning about software more compositional.
To build more reliable, secure, efficient, and scalable software, we will always want better reasoning methods. Many current reasoning methods work best on entire programs or systems. In practice, however, large software systems are assembled from separately designed pieces that are updated at different times. When the pieces are analyzed separately and the results are combined without reinspecting the pieces, reasoning is compositional and can be done in a modular and incremental fashion. In particular, compositional analysis calculates estimates of software properties in smallest-piece to largest-piece order, with no dependencies between distinct pieces; this supports separate compilation and incremental (re)analysis. These motivations inspire my interest in making existing reasoning methods more compositional.
Types support reasoning about properties of statements in programming languages, natural languages, logics, and other formal systems. Types are widely used in informatics (e.g., programming languages, mobile computation, databases, and real-time systems) for organizing reliable implementations. In programming languages, types can ensure aspects of program properties like safety, security, reliability, and modularity. In compilers, types can justify optimizations by analyzing properties like termination, strictness/totality, memory effects, time behavior, and possible exceptions.
A type system needs polymorphism to be flexible enough for safe generic code reuse, abstract datatypes, and accurate program analyses. To obtain polymorphism, formal type systems have generally used ∀ ("for all") quantifiers and related methods (e.g., ∃ ("there exists") quantifiers). An alternative is to instead use the intersection type constructor for polymorphism. Intersection type systems have advantages over systems with ∀ quantifiers that include greater flexibility (accepting far more programs and analyzing them far more precisely) and also principal typings, the property that each program fragment has a best analysis. (This must not be confused with the much weaker property commonly called "principal types".) I have proven that systems with ∀ quantifiers but without intersection types generally do not have principal typings.
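To make the contrast concrete, here is a small, hedged illustration (in Python, with made-up helper names) of reading an intersection type semantically as "this term has both of these types at once". The identity function satisfies both int → int and str → str, and the self-application term λx. x x, which ∀-free simple typing rejects, is well-behaved once each use of x may get its own type:

```python
# Sketch: an intersection type A ∧ B read as "has type A AND type B".
# `has_type` tests one arrow-type membership on a sample argument;
# all names here are illustrative, not from any formal system.

def has_type(f, dom_check, cod_check, sample):
    """Check f : dom -> cod on one sample (a semantic spot-check only)."""
    return dom_check(sample) and cod_check(f(sample))

identity = lambda x: x
is_int = lambda v: isinstance(v, int)
is_str = lambda v: isinstance(v, str)

# identity : (int -> int) ∧ (str -> str) -- two specific behaviors at once.
assert has_type(identity, is_int, is_int, 3)
assert has_type(identity, is_str, is_str, "a")

# λx. x x : ((a -> b) ∧ a) -> b -- each use of x gets its own type,
# so applying it to the identity function is perfectly sensible.
self_apply = lambda x: x(x)
assert self_apply(identity) is identity
```

The point of the sketch is only that intersection types record finitely many specific uses directly, where a ∀-quantified type must cover all instances with one scheme.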
It is desirable for a type system to have automated type inference, relieving human programmers of the tedious and sometimes tricky burden of type-annotating their programs. The most widely used type inference algorithm is Milner's Algorithm W for the Hindley/Milner system (used in languages such as Haskell, OCaml, and Standard ML), which uses ∀ quantifiers to enable some simple cases of generic code reuse. Algorithm W is not compositional, and I have proven that the Hindley/Milner system does not have principal typings. For a type system without principal typings, compositional analysis within the system is generally impossible. This helps to inspire my interest in developing intersection type systems that are very expressive and support type inference.
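The engine inside Algorithm W is first-order unification of type constraints. The following minimal sketch (hedged: representation and names are my own, not from any particular implementation) shows the core, including the occurs check that makes Hindley/Milner reject self-application λx. x x, whose typing would need the circular equation T = T → U:

```python
# Minimal first-order unifier over types, the core of Algorithm W.
# A lowercase string is a type variable; a capitalized string is a
# base type; ("->", dom, cod) is an arrow type.  Illustrative only.

def is_var(ty):
    return isinstance(ty, str) and ty[0].islower()

def walk(ty, subst):
    """Follow substitution bindings until a non-bound type is reached."""
    while is_var(ty) and ty in subst:
        ty = subst[ty]
    return ty

def occurs(var, ty, subst):
    """Occurs check: does `var` appear inside `ty` (under `subst`)?"""
    ty = walk(ty, subst)
    if isinstance(ty, str):
        return ty == var
    return occurs(var, ty[1], subst) or occurs(var, ty[2], subst)

def unify(t1, t2, subst=None):
    """Return a most general unifying substitution, or None on failure."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        if occurs(t1, t2, subst):
            return None            # circular constraint: no finite type
        subst[t1] = t2
        return subst
    if is_var(t2):
        return unify(t2, t1, subst)
    if isinstance(t1, str) or isinstance(t2, str):
        return None                # clash: distinct base types, or base vs arrow
    s = unify(t1[1], t2[1], subst)
    return None if s is None else unify(t1[2], t2[2], s)

# λx. x x forces t = t -> u, rejected by the occurs check:
assert unify("t", ("->", "t", "u")) is None
# An ordinary constraint succeeds: a -> a ~ Int -> b binds both a and b.
s = unify(("->", "a", "a"), ("->", "Int", "b"))
assert walk("b", s) == "Int"
```

Note that unification alone is compositional; what breaks compositionality in Hindley/Milner is how ∀-generalization at let-bindings depends on the surrounding context.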
Rewriting theory applies the rules of computation, interaction, logic, mathematics, etc., in a stepwise manner. Rewriting supports reasoning about things like computation strategies and equivalences between programs or propositions. If a rewriting system (a set of terms and rewriting rules) can be shown to have certain technical properties, e.g., confluence and standardization, then it can be more easily used in compositional reasoning.
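As a concrete (and deliberately tiny) example of stepwise rewriting, here is a toy terminating and confluent system that simplifies arithmetic terms; all names and the term representation are my own illustration. On the sample term, two different reduction strategies reach the same normal form, which is what confluence guarantees in general:

```python
# A toy rewriting system on arithmetic terms: an int is a normal form;
# ("+"/"*", left, right) is a compound term.  Rules fold constants and
# drop unit elements (0 for +, 1 for *).

def step(t):
    """Try one rewrite step at the root; return (new_term, stepped?)."""
    if isinstance(t, tuple):
        op, a, b = t
        if isinstance(a, int) and isinstance(b, int):
            return (a + b if op == "+" else a * b), True
        if op == "+" and a == 0: return b, True
        if op == "+" and b == 0: return a, True
        if op == "*" and a == 1: return b, True
        if op == "*" and b == 1: return a, True
    return t, False

def normalize(t, order="innermost"):
    """Rewrite to normal form; `order` picks where steps are tried first."""
    if isinstance(t, tuple):
        if order == "outermost":
            t2, stepped = step(t)
            if stepped:
                return normalize(t2, order)
        op, a, b = t
        t = (op, normalize(a, order), normalize(b, order))
    t2, stepped = step(t)
    return normalize(t2, order) if stepped else t

term = ("+", ("*", 1, ("+", 2, 0)), ("*", 3, 4))
# Confluence in action: distinct strategies, same unique normal form.
assert normalize(term, "innermost") == normalize(term, "outermost") == 14
```

For this system every strategy terminates; confluence is the extra guarantee that the order of steps cannot change the final answer, which is exactly what makes local rewriting usable in compositional reasoning.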
The most significant rewriting system for reasoning about programs is Church's λ-calculus, which evaluates function application via the rewriting rule called β-reduction. The λ-calculus elegantly supports powerful reasoning about program meanings and the correctness of program transformations. Because the λ-calculus has confluence and standardization, β-reduction steps preserve meaning in any context and thus support compositional reasoning. However, β-reduction is usually inapplicable in real languages with features like state (assignable variables), input/output, exceptions, jumps, interrupts, and coroutines. Although imperative features can be encoded in the λ-calculus (e.g., with state-passing style), such encodings are difficult to reason about. Other vital real-world features like classes, objects, modules, and object-file libraries can also be encoded in the λ-calculus, but the encodings face the same reasoning difficulties.
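A hedged sketch of β-reduction under the normal-order (leftmost-outermost) strategy, the one that standardization guarantees will find a normal form whenever any strategy can. The term encoding and helper names are mine; the example shows normal order discarding a diverging argument that eager evaluation would loop on:

```python
# λ-terms: ("var", n), ("lam", n, body), ("app", f, a).
import itertools
fresh = (f"_v{i}" for i in itertools.count())   # fresh names for renaming

def free_vars(t):
    tag = t[0]
    if tag == "var": return {t[1]}
    if tag == "lam": return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    n, body = t[1], t[2]
    if n == x:
        return t
    if n in free_vars(s):              # rename the binder to avoid capture
        n2 = next(fresh)
        body, n = subst(body, n, ("var", n2)), n2
    return ("lam", n, subst(body, x, s))

def step(t):
    """Contract the leftmost-outermost β-redex; None if t is normal."""
    tag = t[0]
    if tag == "app":
        f, a = t[1], t[2]
        if f[0] == "lam":                           # β-redex at the head
            return subst(f[2], f[1], a)
        f2 = step(f)
        if f2 is not None:
            return ("app", f2, a)
        a2 = step(a)
        return None if a2 is None else ("app", f, a2)
    if tag == "lam":
        b2 = step(t[2])
        return None if b2 is None else ("lam", t[1], b2)
    return None

def reduce_normal(t, limit=1000):
    for _ in range(limit):
        t2 = step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("no normal form within limit")

# (K z) Ω where K = λx.λy.x and Ω = (λw.w w)(λw.w w): normal order
# discards the unused diverging argument instead of looping on it.
K = ("lam", "x", ("lam", "y", ("var", "x")))
omega = ("app", ("lam", "w", ("app", ("var", "w"), ("var", "w"))),
                ("lam", "w", ("app", ("var", "w"), ("var", "w"))))
t = ("app", ("app", K, ("var", "z")), omega)
assert reduce_normal(t) == ("var", "z")
```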
There has been some research on rewriting systems that support real-world programming features without losing the compositional equational reasoning power of the λ-calculus. To help organize useful program transformations (e.g., in compilers), it is desirable to integrate this research to make rewriting systems that are easy to reason about and can straightforwardly express all of the features of real-world systems. These motivations inspire my interest in developing rewriting theory that supports easily designing correct rewriting systems for reasoning about complicated features.
This section explains how the research I have done so far fits together, and can be viewed as a "plain English" version of my publication list. Much of my work has been in the context of the Church Project, an ongoing, international, multi-institution group investigating advanced types and semantics. Some of my recent work has also been in the context of the similar DART project ("Dynamic Assembly, Reconfiguration, and Type-checking").
My research began with type inference for the System F of Girard and Reynolds (i.e., types with ∀ quantifiers) with implicit types. I proved that typability for System F is undecidable [14,7,2], solving what Robin Milner (the 1991 Turing Award winner, quoted by Henk Barendregt) called "an embarrassing open problem". Thus, any System F type inference algorithm is incomplete or non-terminating. I also proved for related systems the undecidability of typability [3,7,8] and subtyping [7,4]. Significantly, I developed the only system-independent definition of principal typings and proved that they are missing from both System F and its widely used restriction, the Hindley/Milner system. Thus, type inference for these systems cannot be compositional while staying within the system. On the positive side, I co-developed a type inference algorithm for System F restricted to rank-2 types.
Having discovered many technical problems for type inference with ∀ quantifiers, my research turned toward intersection types. To improve research into intersection types world-wide, I established the ITRS (Intersection Types and Related Systems) workshop series and ran the first workshop in 2000; subsequent events have been held in 2002 and 2004.
For the experimental CIL compiler's intermediate program representation, I developed the λCIL calculus with intersection and union types supporting polyvariant flow information [29,9]. Using λCIL, I specified flow-directed representation transformations that use conflicting representations for compiler optimizations. This allowed optimizations not previously expressible and was implemented in CIL [27,26]. Based on lessons from CIL, I designed branching types, an alternative to intersection types for use in compilers and program manipulation tools [30,82].
I co-developed System I, an intersection type system with expansion variables, a new type feature for flexible type inference and easier implementation, and developed a unification-based type inference algorithm for a non-rank-restricted intersection type system [17,47,33]. Recently, I co-developed System E with intersection types, expansion variables, and a linear/non-linear distinction in types [53,55,54], with the goal of accurate resource usage tracking. I hope to combine System E with branching types in a simple compiler to test the combination. Other work has included finding the exact type inference complexity for all finite-rank restrictions of intersection type systems, and type error slicing [43,51] for accurately reporting type error locations without confusing the programmer.
To reason about the essence of modules, I designed the m-calculus of mixin modules and related it to many programming languages [19,13]. To better support imperative programs, I helped develop the MM calculus of call-by-value mixin modules, its semantics, and its type system that verifies well-founded linking of mutually recursive definitions. As technology underlying MM, I helped develop a generalized semantics and implementation methodology for call-by-value letrec expressions. I helped develop a scheme [34,58] for adding semi-formal semantic information to module types to allow checking semantically correct usage and to support automatic semantics-driven interface adaptation; the key underlying technology is the new notion of unification modulo observational equivalence.
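To give a flavor of the implementation problem, here is a hedged sketch (all names mine, and far simpler than the cited work) of the standard call-by-value letrec technique: allocate one cell per binding, evaluate the right-hand sides, then backpatch the cells, so mutually recursive definitions can see each other. A binding that forces its own cell during evaluation is ill-founded, which is the kind of error a type system for well-founded linking rules out statically:

```python
# Illustrative call-by-value letrec via allocate-then-backpatch.

class Cell:
    """A mutable box; reading before backpatching is a link-time error."""
    def __init__(self):
        self.value, self.ready = None, False
    def get(self):
        if not self.ready:
            raise RuntimeError("ill-founded recursive definition")
        return self.value

def letrec(*defs):
    """Each def maps the tuple of cells to one bound value."""
    cells = tuple(Cell() for _ in defs)
    values = [d(cells) for d in defs]       # evaluate under unresolved cells
    for c, v in zip(cells, values):         # backpatch the cells
        c.value, c.ready = v, True
    return values

# Well-founded mutual recursion: even/odd call each other through cells,
# but only *after* definition time, so linking succeeds.
even, odd = letrec(
    lambda cs: (lambda n: True if n == 0 else cs[1].get()(n - 1)),
    lambda cs: (lambda n: False if n == 0 else cs[0].get()(n - 1)),
)
assert even(10) and odd(7) and not even(3)

# Ill-founded: the right-hand side forces its own cell immediately.
try:
    letrec(lambda cs: cs[0].get() + 1)
    assert False, "should have been rejected"
except RuntimeError:
    pass
```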
I developed various methods for proving standardization for rewriting systems representing programming languages [20,21]. I designed an abstract framework for proving meaning preservation for rewriting systems with neither confluence nor standardization [37,36]. I hope to use this framework for an imperative language. I have also related explicit substitution and cut elimination in logical deduction [15,23], joined explicit substitution and generalized β-reduction, and proved strong normalization via weak normalization.
I co-developed PolyA [57,56,45,35], a flexible polymorphic type system with principal typings for ambient calculi (which are used to reason about mobile computations). Inspired by manipulation of recursive types in the CIL compiler, I co-developed better methods for working with fold (the catamorphism generator) and unfold (the anamorphism generator) on regular trees (which can be infinite but have a finite number of distinct subtrees). I co-supervised development of a scheme for using weak types to help structure mathematical texts [48,49].
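For readers unfamiliar with the fold/unfold terminology, here is a hedged finite sketch (names and representation are mine; the cited work handles regular trees, which may be infinite) of the two schemes on binary trees: unfold grows a tree from a seed, fold consumes it bottom-up:

```python
# Anamorphism (unfold) and catamorphism (fold) on binary trees.
# A tree is either a leaf value (an int here) or (value, left, right).

def unfold(seed, split):
    """Grow a tree: split(seed) -> leaf_value
    or (node_value, left_seed, right_seed)."""
    r = split(seed)
    if isinstance(r, tuple):
        v, ls, rs = r
        return (v, unfold(ls, split), unfold(rs, split))
    return r

def fold(tree, leaf, node):
    """Consume a tree bottom-up with one function per constructor."""
    if isinstance(tree, tuple):
        v, l, r = tree
        return node(v, fold(l, leaf, node), fold(r, leaf, node))
    return leaf(tree)

def split_interval(iv):
    """Unfold the half-open interval [lo, hi) into a balanced tree."""
    lo, hi = iv
    if hi - lo == 1:
        return lo                       # leaf
    mid = (lo + hi) // 2
    return None, (lo, mid), (mid, hi)   # internal node carries no value

tree = unfold((0, 8), split_interval)
total = fold(tree, leaf=lambda n: n, node=lambda _, l, r: l + r)
assert total == sum(range(8))
```

The appeal of these schemes is compositional: any property proved once about fold or unfold applies to every function written as an instance of them.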
The translation was initiated by Joe Wells on 2005-01-20.