Glasgow Functional Programming Seminars (1993-94)
Here are the titles, speakers and abstracts for all the Functional
Programming Seminars for 1993-94. You can also find general
information about the seminar
series and the current
schedule of talks.
For more information, contact John O'Donnell.
Mon 11 Oct 1993
Clem Baker-Finch, Cambridge University Computing Laboratory
A Simple Type of Higher-Order Backward Analysis
"Reduction types" extend simple types by adding annotations to provide
information about the reduction behaviour of the terms with a given
type. A type assignment system for the case of strictness analysis
demonstrates the power and simplicity of this approach.
Following consideration of the semantics of reduction types, a
straightforward connection with projection-based backward analysis
arises. The result is naturally higher-order, a long-standing problem
for backward analysis.
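As a small illustration of the property such analyses track (an example
added here, not the talk's notation): a function f is strict when
f undefined = undefined, and a reduction-type system would record facts
of this kind in the types themselves.

    -- Illustration only: the basic strictness facts a reduction-type
    -- system would annotate.
    plus :: Int -> Int -> Int
    plus x y = x + y            -- strict in both arguments

    konst :: a -> b -> a
    konst x _ = x               -- lazy in its second argument

    main :: IO ()
    main = print (konst 42 (undefined :: Int))  -- prints 42: the
                                                -- second argument is
                                                -- never demanded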
Mon 18 Oct 1993
Robin Popplestone, Glasgow + University of Massachusetts at Amherst
Interactive implementation of Haskell in Poplog
My approach to porting the Glasgow Haskell Compiler to the Poplog environment
is to implement an interface at the STG level to Poplog Virtual Machine code.
In other words, replace the sequence of transformations:
Haskell -> ..... -> STG Code -> ... C ... Machine code
by:
Haskell -> ..... -> STG Code -> POPLOG VM Code -> Machine code
The capabilities of the Poplog VM match the requirements of STG quite
accurately, so the translation is not difficult.
Practically, to compile a Haskell module, a GHC process is spawned from Poplog
with compile-flags which specify that STG code is to be output, the output
being redirected into a file. The STG form is parsed by a small parser written
in -parse_gen-, thus reconstituting a POP-11 equivalent of the STG internal
form. PVM code is generated from the STG. Ultimately it should be possible to
pass the GHC itself through this process, giving a native Poplog
implementation.
Top-level functions in a module are bound to PVM variables created with
-sysVARS-. The names of these variables are drawn from the flat name-space
intended for the linker. This means that a Haskell -program- can usually be
modified and run simply by recompiling a -module- and calling the -main-
function. Interaction is provided by supplying POP-11 bindings for those
Haskell functions whose arguments can be derived from standard POP-11
datatypes, such as dynamic (i.e. tail-lazy) lists.
I will discuss the current state of this port and how the efficiency of PVM
code should compare with that generated from C.
Robin.
Mon 25 Oct 1993
John O'Donnell
Teaching circuit design with Haskell
Traditional courses in digital circuit design and computer
architecture present trivial circuits (4-bit adder, etc) in great
detail but they become vague when it comes to larger circuits. One
reason for this is the traditional notation: schematic diagrams are
unusable for nontrivial circuits.
I'm currently using Haskell to specify circuits in the third year
architecture module, and will shortly use it in the fourth year.
Functional circuit specification scales up well to large circuits, so
the students get to see precise designs for quite complex circuits
(including a complete CPU design!). This approach allows a successful
mix between informal methods (especially simulation) and formal
methods.
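To give the flavour of the approach, here is a sketch of a ripple-carry
building block in this style (the Signal type and names are assumptions
made here, not the course's actual library):

    type Signal = Bool

    halfAdd :: Signal -> Signal -> (Signal, Signal)   -- (carry, sum)
    halfAdd a b = (a && b, a /= b)                    -- and2, xor2

    fullAdd :: Signal -> Signal -> Signal -> (Signal, Signal)
    fullAdd cin a b = (c1 || c2, s2)
      where (c1, s1) = halfAdd a b
            (c2, s2) = halfAdd s1 cin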
Mon 1 Nov 1993
David N. Turner, University of Edinburgh
Relating types in the lambda-calculus to types in the pi-calculus
There has recently been quite a lot of interest in developing type
systems for concurrent calculi, in particular for the pi-calculus. I
will present a simple type system for the pi-calculus, showing that
there is a precise correspondence between types in the pi-calculus and
types in the lambda-calculus. This is achieved using Milner's encoding
of the lazy lambda-calculus in the pi-calculus; we extend his result
by showing that the type structure of a lambda-term is preserved by
the encoding. In fact, we prove that the principal type of a
lambda-term is directly related to its encoding's principal type in
the pi-calculus.
Mon 8 Nov 1993
Jim Mattson
Primitives for Speculative Evaluation
Speculative evaluation refers to the ``early'' computation of results
which are somewhat likely to become useful, but which may actually
turn out to be a waste of time. Unfortunately, the term ``speculative
evaluation'' is overloaded. It may refer to either _implicit_
speculation, which is a feature of a particular implementation, or it
may refer to _explicit_ speculation, which is a characteristic of the
language itself. Implicit speculation is typically used in a parallel
implementation to take advantage of processing resources which would
otherwise be left idle. Explicit speculation, on the other hand, can
be used to concisely express non-deterministic algorithms which might
otherwise be overly complex (e.g. parallel searches).
In an FP talk last Spring, I discussed some of the implementation
details necessary to provide support for implicit speculative
evaluation for parallel graph reduction (a la GRIP). Fortunately,
once these mechanisms are in place, it is not very difficult to extend
the system to provide support for explicit speculative evaluation as
well. In this talk, I will ask the question, ``What sort of
primitives should a language (Haskell, of course) offer to support
explicit speculative evaluation?'' My answer will be biased towards
the ideologies of programmers and implementors, so I hope the
theoreticians will come prepared with rotten vegetables and ideas of
their own. :-)
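One hypothetical shape for such a primitive (the signature is an
assumption for illustration, not a GRIP or GHC API): denotationally
spec x y = y, while operationally an implementation may evaluate x
speculatively.

    spec :: a -> b -> b
    spec _x y = y   -- a sketch implementation could spark _x in parallel

    -- A use: speculate on each candidate while scanning the rest.
    firstMatch :: (a -> Bool) -> [a] -> Maybe a
    firstMatch p = foldr step Nothing
      where step x r = spec x (if p x then Just x else r)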
Mon 15 Nov 1993
Kevin Hammond
The Changing State of Haskell 1.3
This will be a discussion meeting at which I will describe the
proposed changes to Haskell to produce version 1.3, such as monadic
I/O, annotations, and strict data constructors, and I will solicit
opinions on these proposals.
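For concreteness, here is roughly what two of those proposals look like
in use, as they eventually appeared in Haskell (the example itself is
added here):

    -- Strict data constructors: '!' forces the fields at construction.
    data Point = Point !Int !Int

    -- Monadic I/O replaces the stream and continuation styles of 1.2.
    main :: IO ()
    main = do
      putStr "name? "
      s <- getLine
      putStrLn ("hello, " ++ s)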
Mon 22 Nov 1993
Robin Popplestone, Glasgow + University of Massachusetts at Amherst
Memoirs of a Computer Bum who More or Less Lucked Out
Robin will describe his early experiences in computing, from 1963 (The
Manchester Atlas -- Britain's last fling at hardware superiority) to 1970-75
(Exploiting POP-2 and Multipop) with many highlights in between!
1963 The Manchester Atlas - Britain's last fling at hardware superiority.
The Compiler Compiler - fine for simple code generation... I realise that its
data-structures in stacked PAR's are too evanescent for symbolic computing.
The Oxford Summer School, 1964. I encounter LISP and CPL, Strachey and Landin,
hd and tl, garbage collection.
1964-66 Reverse Polish as a Language for Paupers: Cowsel becomes POP-1 by
fiat; rebinding function variables supports interactive languages. Rod
Burstall's revpol as a pointer to Algol-like syntax.
1966-68 POP-2: A quasi-functional language for general purpose computing.
Interaction with the Aberdeen group.
Incremental compilation of function definitions.
Closures - introduce extra arguments and partially apply - a hackeroo
which anticipates lambda-lifting.
Infinite lists as a clean way to present input - an implementation of
Landin's ideas. Burstall's implementation of functional grammars,
and its extension to operations on relations.
Method directed garbage collection - mark methods and relocate methods.
The zoo rejected.
Encapsulation of state using the -updater- concept.
Arrays as updatable functions.
The compiler as a user-callable function.
1968-70 Multipop 68: an operating system based on quasi-functional
programming.
Closures encapsulate devices.
Process records.
One garbage collector for all processes.
General purpose continuations.
1970-75 Exploiting POP-2 and Multipop.
Track 77: Departmental news occasionally veers towards libel.
Bob and Jay: their program prover; variable sharing helps point the
way to Prolog implementation. They even find time for string-matching.
Freddy-the-robot: robotics has never been so easy since.
Rod and John: program transformation as an approach to correctness.
A HOPEful way forward to more rigorous functional programming.
Mon 29 Nov 1993
Muffy Thomas
An Experience with the Larch Prover:
A Proof of Incorrectness and What I Did With It.
In this talk I will describe my attempts to reason about a piece of
assembler pseudocode, using an equational specification (a functional
program to you!) of the problem and the equation based theorem prover
LP.
I will describe my specification, or abstraction of the code, my
correctness criteria, and how I tried to use LP to prove
correctness. The code, however, was *not* correct, and the proof
failed. But the failure was very interesting: it showed me how
to correct the code, and in the end I could prove that the modified
code was correct.
This particular pseudocode is of great interest because it is part of
the editing software from the Therac-25: the linear accelerator
responsible for several deaths through faulty software.
Mon 6 Dec 1993
Guy Argo
Efficient Laziness
Guy will summarise the results of his PhD thesis research on
optimisations to the TIM abstract machine.
Mon 13 Dec 1993
Impromptu discussion: Black holes, identifying thunks, etc.
Wed 15 Dec 1993
3pm, Conference Room
Alan Jeffrey, University of Sussex
A Fully Abstract Semantics for Concurrent Graph Reduction
In this talk I will discuss how Abramsky and Ong's theory of the lazy
lambda calculus can be applied to a model of concurrent graph
reduction. The lazy lambda calculus is a sequential model of
left-most outer-most reduction in the untyped lambda calculus, without
sharing. Abramsky and Ong have shown that this has a fully abstract
semantics in the domain $D \simeq (D \rightarrow D)_\bot$. In this
talk I will present a simple model of concurrent graph reduction, and
show how Abramsky and Ong's techniques can be adapted to show that $D$
is the fully abstract model for concurrent graph reduction.
This work is a continuation of work I talked about last year, and is
closely related to John Launchbury's semantics of sharing.
Mon 10 Jan 1994
Andy Gill and Simon Marlow
Happy Talk: An LALR parser generator in Haskell
Happy is a parser generator, styled on Yacc, but for Haskell
programmers. We will give a complete presentation of the wonders of
Happy, from two complementary angles. First we will give a sales
talk, explaining how to use Happy to generate parsers quickly and
easily. Some comparisons to the more typical combinator parsing will
be given. Secondly, we will lift the lid off Happy. Happy is written
in Haskell, and we will talk about its conception, its implementation,
and the techniques used.
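To give a taste of the input Happy accepts, here is a minimal grammar
(a sketch added here, not from the talk). The braced sections hold
ordinary Haskell; Happy turns the productions into an LALR parser:

    {
    module Calc (calc, Token(..)) where
    }

    %name calc
    %tokentype { Token }
    %error { parseError }

    %token
      int  { TokenInt $$ }
      '+'  { TokenPlus }

    %%

    Exp : Exp '+' int   { $1 + $3 }
        | int           { $1 }

    {
    data Token = TokenInt Int | TokenPlus

    parseError :: [Token] -> a
    parseError _ = error "parse error"
    }

Running happy over such a file produces an ordinary Haskell module, here
exporting calc :: [Token] -> Int.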
Mon 17 Jan 1994
Will Partain
Recent developments in the Glasgow Haskell Compiler
I will review the changes to the Glasgow Haskell compiler between
versions 0.16 and 0.19, i.e., in the last half of 1993. The emphasis
will be on what's actually there, rather than what Simon would tell
you if he were giving the talk.
Mon 24 Jan 1994 ** 4pm, Room F171 **
Murray Cole, University of Edinburgh
Parallel Programming with List Homomorphisms
We review the use of the Bird-Meertens theory of lists as a
foundational model for parallel programming and consider the
application of the "Homomorphism Lemma" in this context. In
particular, we investigate the informal methodology which arises when
the lemma is not immediately applicable to a problem, with examples
including the old favourite "maximum segment sum" and a family of
parenthesis matching problems. Our intention is to illustrate the
benefits which arise from taking a little theory with a pinch of
pragmatic salt, and to ask whether the intuition which results might
itself be amenable to formalisation.
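For reference, here is the old favourite in Haskell (a sketch added
here): the specification as a pipeline over all segments, and the
linear-time version one derives from it.

    import Data.List (inits, tails)

    -- Specification: the maximum over sums of all contiguous segments.
    mssSpec :: [Int] -> Int
    mssSpec = maximum . map sum . segments
      where segments = concatMap inits . tails

    -- Derived linear-time version (empty segments allowed, so >= 0).
    mss :: [Int] -> Int
    mss = maximum . scanl (\acc x -> max 0 (acc + x)) 0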
Mon 31 Jan 1994
David Murphy, University of Birmingham
D.V.Murphy@uk.ac.bham.cs
Located Processes and Located Processors
Many distributed systems consist of a fixed collection of sequential
processes communicating by interprocess synchronisation. We
introduce a process algebra for describing instances of this class
where the location of processes is important. Two operational
semantics are presented for the algebra: one which captures the
location of actions, and one which is the usual interleaving
semantics. Bisimulation is defined over located and conventional
transition systems.
A denotational semantics for the algebra is presented which allows
some information about the identity of locations to be ignored (so
that two places can be identified, for instance). This semantics is
shown to lie between the two bisimulations, and to coincide with each
if certain properties of the identification are satisfied.
Our two operational semantics are then extended to the weak case,
allowing internal $\tau$ actions to be disregarded, and sound and
complete axiomatisations are presented for each. Our technique is
first to reduce terms to trees labelled over a structured label set,
and then to directly axiomatise bisimulation over these trees. This
approach allows us to extend a variety of standard results to the
located setting.
The operation of placing a distributed process on a fixed network of
processors is considered. A correct placing of a collection of
processes on a network is a map from locations to processors such that
interprocess communication can be implemented by interprocessor
communication. A calculus of placing which allows all and only correct
placings to be deduced is presented. This forms the basis of an
analysis of the properties of placed parallel processes with something
of the flavour of abstract interpretation.
(What is located concurrency? The idea is to study programs with an
explicit notion of location; put this computation here, that one
there, and so on. Ideas of equivalence for located systems are quite
subtle, and a matter of current interest. Furthermore, and probably of
more interest to you guys, there is some work on placing located
programs (i.e. ones with an abstract notion of locality) onto real
fixed geometry networks of processors. This idea of placing is rather
similar to abstract interpretation---one wants to give a measure, not
of what the program does, but where it does it.)
Mon 7 Feb 1994 -- cancelled (external speaker on distributed systems)
Mon 14 Feb 1994 -- cancelled (University Holiday)
Mon 21 Feb 1994
Theo Norvell, Oxford University
Theo.Norvell@uk.ac.oxford.prg
A Logical Way to Write Functional Programs.
A functional programming language can be augmented with a
specification construct that allows the logical relationship between
the input and the result of a function to be stated. Such a construct
makes the language nondeterministic and renders compiling it infeasible.
However, it allows a convenient and concise specification of the
desired behaviour of a function.
All specification constructs can be removed from the program by a
process of refinement, that is, the replacement of program fragments
by equivalent or more deterministic fragments. The result is an
implementable functional program that meets the specification.
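A toy instance of the idea (the notation is invented here, not
Norvell's): start from a nondeterministic specification and refine it
to a deterministic, executable program.

    -- Specification (not executable): intSqrt x = SOME y SUCH THAT
    --   y*y <= x < (y+1)*(y+1)
    -- One refinement into an implementable, deterministic fragment
    -- (defined for x >= 0):
    intSqrt :: Int -> Int
    intSqrt x = last (takeWhile (\y -> y * y <= x) [0 ..])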
Mon 28 Feb 1994
Kirsten Solberg, Aarhus and Cambridge
Strictness and Totality Analysis
We define a novel inference system for strictness and totality
analysis. Our programming language is the simply-typed lazy
lambda-calculus with constants and fixed points. The information
about strictness amounts to identifying those terms that surely
denote bottom (i.e. do not evaluate to WHNF). The information
about totality amounts to identifying those terms that surely do not
denote bottom (i.e. do evaluate to WHNF). The analysis is
presented as an annotated type system allowing conjunctions only at
``top-level''. Examples of its use are given and the correctness
is proved with respect to a natural-style operational semantics.
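Illustrative examples (added here, with suggestive comments rather than
the talk's annotation syntax): a term each analysis can classify, and
one that neither can classify without knowing its argument.

    loop :: Int            -- strictness: surely bottom, never WHNF
    loop = loop

    one :: Int             -- totality: surely not bottom
    one = 1

    -- Classifiable by neither analysis in isolation:
    cond :: Bool -> Int
    cond b = if b then one else loop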
Mon 7 March 1994 -- cancelled
Mon 14 March 1994
Simon Marlow and Andy Gill
The Glasgow Haskell team and the programming contest
Thu 14 April 1994
Luca Cardelli, DEC Systems Research Center
A Theory of Primitive Objects
3pm, F171
All common object-oriented languages support some combination of
object subsumption and method override, and most handle these features
properly. However, type correctness is achieved via rather subtle
conditions, if at all. We hope to illuminate the origin of some of
these conditions, and illustrate the pitfalls that they avoid.
In semantics and type-theory, where one aims for simplicity and
generality, the combination of subsumption and override has proven
difficult to model. We provide basic object calculi that support those
features. Our approach is to axiomatize the typing and equational
properties of objects, rather than to define objects in terms of other
constructs. The soundness of our rules is guaranteed by a denotational
semantics, and by a syntactic translation inspired by the semantics.
We begin our paper with a challenge: finding an adequate type system
for an untyped object calculus. This calculus is patently
object-oriented: it has built-in objects, methods with self, and the
characteristic semantics of method invocation and override. The
calculus is also very simple, with just four syntactic forms, but
expressive enough to encode the lambda-calculus and to formulate
object-flavored examples in a direct way.
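For reference, the four syntactic forms of the untyped calculus, as in
the Abadi-Cardelli papers (the sigma binder ς names the self
parameter):

    a, b ::= x                                  variable
           | [l_i = ς(x_i) b_i]  (i in 1..n)    object formation
           | a.l                                method invocation
           | a.l <= ς(x) b                      method override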
After describing the untyped calculus, we define type systems,
equational theories, and denotational semantics. We account for
subsumption, field update, method override, and the so-called Self
type, in addition to standard bounded polymorphism and data
abstraction. Classes, inheritance, and method combination can be
layered on top of our basic framework. Our final formal system is
obtained as a conventional second-order extension of a rather
elementary first-order theory of objects.
This is joint work with Martin Abadi.
Fri 15 April 1994
Luca Cardelli, DEC Systems Research Center
Obliq: A Language with Distributed Scope
3pm, F171
Obliq is a lexically-scoped untyped language that supports distributed
object-oriented computation. An Obliq computation may involve multiple
threads of control within an address space, multiple address spaces on
a machine, heterogeneous machines over a local network, and multiple
networks over the Internet. Objects are local to a site, while
computation can roam over the network.
The most novel feature of Obliq is its combination of distributed
computation and lexical scoping. Lexical scoping here assumes a new
role: it ensures that computations have a precise meaning even when
they migrate over the network: a meaning that is determined by the
binding location and network site of identifiers, and not by execution
sites. Network-wide lexical scoping becomes an issue in the presence of
higher-order distributed computation, as in the situation of remote
sites accepting procedures for execution.
Distributed computation interacts well with the notion of objects.
For example, network services normally accept a variety of messages;
it is then natural to see each service as a network object (or, more
neutrally, as a network interface). Obliq supports objects in this
spirit, relying for its implementation on Modula-3's network objects.
The Obliq object primitives are designed to be simple and powerful; a
main trait of the language is the relation between local and
distributed semantics of these primitives. Obliq objects are
collections of named fields, with four basic operations:
selection/invocation, update/override, cloning, and delegation. Every
object is potentially and transparently a network object. An object
may become accessible over the network either by the mediation of a
name server, or by simply being used as the argument or result of a
remote method.
Obliq computations, in the form of procedures or methods, can be
freely transmitted over the network. Actual computations
(i.e. closures, not source text) are transmitted; lexically scoped
free identifiers retain their bindings to the originating
sites. Through these free identifiers, migrating computations can
maintain connections to objects and locations residing at various
sites. Disconnected agents can still be represented as procedures with
no free identifiers.
Mon 18 April 1994
Benjamin Pierce, LFCS, University of Edinburgh, bcp@dcs.ed.ac.uk
PICT: AN EXPERIMENT IN CONCURRENT LANGUAGE DESIGN
PICT is a programming language in the ML tradition, formed by adding a
layer of convenient syntactic sugar and a static type system to a tiny
core.
The core language, Milner's pi-calculus, has been used as a
theoretical foundation for a broad class of concurrent computations.
The goal in PICT is to identify high-level idioms that arise naturally
when these primitives are used to build working programs -- idioms
such as basic data structures, protocols for returning results,
higher-order programming, selective communication, and concurrent
objects.
The type system integrates a number of features found in recent work
on theoretical foundations for typed object-oriented languages:
higher-order polymorphism, simple recursive types, subtyping, and a
powerful partial type inference algorithm.
Though many issues remain to be investigated, we have reached a stable
design for the base language, an acceptably efficient uniprocessor
implementation, and a small but growing collection of libraries. An
object-oriented graphics toolkit is under construction.
Mon 25 April 1994
Hans Wolfgang Loidl
ADAM & EVE: The Automatic Parallelization of a Small Functional Language
by Using the Dataflow Model
The approach for the parallelization of the functional programming language
EVE in the ADAM & EVE project has been to use the dataflow model of
computation to create a hybrid abstract dataflow/controlflow machine (ADAM)
that can be efficiently implemented on stock hardware. The EVE programming
language is a small, purely functional language that lacks most of the
syntactic sugar of the usual functional languages, but it contains all
their important concepts, so that the missing features (such as pattern
matching, list comprehensions, etc.) can be expressed in terms of the
existing ones. Prototypes for the ADAM machine and the EVE compiler
have been implemented on a transputer network and on a PC, respectively.
In this talk the compilation process for translating purely functional EVE
programs into the assembler code of the hybrid dataflow/controlflow machine
ADAM will be discussed. This compilation process uses dataflow graphs as an
intermediate language in order to make the implicit parallelism of the
functional program explicit. Finally, analyses of the dynamic behaviour of
some functional programs that have been executed in parallel will be
presented.
Mon 2 May 1994 -- cancelled (University Holiday)
Mon 9 May 1994
Two short talks:
(1) Simon Booth, Stirling
Interactive screen editor in Miranda
This paper explores the development of an interactive screen editor,
in the functional programming language Miranda, from a specification
written in Z. The program makes use of a set of interactive functions
especially developed to aid the writing of interactive programs (in
Miranda). The editor developed is fully featured and could be used to
edit text, although the execution speed is rather poor.
(2) Simon Marlow
An extension to the Haskell type system supporting
full second-order polymorphic types.
I will present a type system that is a superset of the existing
Haskell type system, in that all existing Haskell programs are typable
without modification. With some minor additional syntax, it is
possible to write functions with general second-order polymorphic
types and have them checked by the compiler. I will present both the
typing rules and a type checking/reconstruction algorithm for the
system. This is work in progress.
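The kind of program such an extension admits (a sketch; the syntax
shown is the spelling GHC later adopted, not necessarily the talk's):

    {-# LANGUAGE RankNTypes #-}

    -- A rank-2 type: the argument must itself be polymorphic, which
    -- plain Hindley-Milner inference cannot express.
    apply2 :: (forall a. a -> a) -> (Int, Bool)
    apply2 f = (f 3, f True)

    main :: IO ()
    main = print (apply2 id)      -- prints (3,True)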
Mon 16 May 1994
Refining Reduction in lambda calculus and Type Theory
Fairouz Kamareddine
In this talk, I describe a lambda calculus notation which enables us
to detect more beta redexes in a term than the usual notation allows.
On this basis, we define an extended beta reduction which is still a
subrelation of conversion. The Church Rosser property holds for this
extended reduction. Moreover, I show that we can transform
generalised redexes into usual ones by a process called ``term
reshuffling''.
The results are even stronger. Generalised reduction can be defined
not only for the type-free lambda calculus, as above, but also for the
typed one. I therefore consider Church's type theory using the item
notation and with extended redexes. All the results of Church's type
theory (that is, Church Rosser, Subject reduction, Strong
Normalisation and more) hold for our generalised reduction. It is
moreover the case that term reshuffling preserves typing.
In the $\lambda$-calculus as we know it, some redexes in a term may not
be visible before other redexes have been contracted. For example, in
$t \equiv ((\lambda_{x_7}.(\lambda_{x_6}. \lambda_{x_5}.x_5x_4)x_3)x_2)x_1$,
only $(\lambda_{x_6}. \lambda_{x_5}.x_5x_4)x_3$, and
$(\lambda_{x_7}.(\lambda_{x_6}. \lambda_{x_5}.x_5x_4)x_3)x_2$ are visible.
Yet when reducing $t$ to a normal form, a third redex must be contracted;
namely $(\lambda_{x_5}.x_5x_4)x_1$. This third redex is not immediately
visible in $t$.
To solve this problem we switch from the classical notation to what we
call {\em item notation}, where the argument occurs before the function
and where parentheses are grouped in a novel way. In our item
notation, $t$ will be written as $(x_1 \delta)(x_2
\delta)(\lambda_{x_7})(x_3 \delta)(\lambda_{x_6}) (\lambda_{x_5})(x_4
\delta)x_5$ and we can provide $t$ in this item notation with a
bracketing structure $[ [ \ ] [ \ ]]$ where $\delta$ and $\lambda$
correspond to $[$ and $]$ respectively (ignoring $(x_4 \delta)x_5$).
We extend the notion of a redex from being any opening bracket $[$
next to a closing bracket $]$, to being any pair of matching $[$ and
$]$ which are separated by matching brackets. Hence we see
immediately that the redexes in $t$ originate from the $\delta
\lambda$-couples $(x_2 \delta)(\lambda_{x_7})$, $(x_3
\delta)(\lambda_{x_6})$ and $(x_1 \delta)(\lambda_{x_5})$. This
natural matching was not present in the classical notation of $t$.
With item notation, we shall refine reduction in two ways:
\begin{enumerate} \item We shall generalise $\beta$-reduction so that
any redex can be contracted and hence we can contract the redex based
on $(x_1 \delta)(\lambda_{x_5})$ before we contract any of the redexes
based on $(x_2 \delta)(\lambda_{x_7})$ and $(x_3
\delta)(\lambda_{x_6})$. That is, the $\beta$-rule changes from $(t_1
\delta)(\lambda_v)t_2 \rightarrow_{\beta} t_2[v:=t_1]$ to $(t_1
\delta)\overline{s}(\lambda_v)t_2 \leadsto_\beta
\overline{s}(t_2[v:=t_1])$ for $\overline{s}$ having a matching
bracketing structure. I.e.\ $(t_1 \delta)\overline{s}(\lambda_v)$ is
a redex in our generalised sense. So\\ \[
\begin{array}{ll}
(x_1 \delta)(x_2 \delta)(\lambda_{x_7})(x_3 \delta)(\lambda_{x_6})
(\lambda_{x_5})(x_4 \delta)x_5 & \leadsto_\beta\\
(x_2 \delta)(\lambda_{x_7})(x_3 \delta)(\lambda_{x_6})
(((x_4 \delta)x_5)[x_5 :=x_1]) & \equiv\\
(x_2 \delta)(\lambda_{x_7})(x_3 \delta)(\lambda_{x_6})(x_4 \delta)x_1 &
\end{array}
\]
We show moreover, that the Church Rosser property holds for the
generalised $\beta$-reduction.
\item
An alternative to the generalised notion of $\beta$-reduction can be
obtained by keeping the old $\beta$-reduction and by {\em reshuffling}
the term in hand. So we can reshuffle the term $(x_1 \delta)(x_2
\delta)(\lambda_{x_7})(x_3 \delta) (\lambda_{x_6})(\lambda_{x_5})(x_4
\delta)x_5$ to $(x_2 \delta)(\lambda_{x_7})(x_3
\delta)(\lambda_{x_6})(x_1 \delta) (\lambda_{x_5})(x_4 \delta)x_5$, in
order to transform the bracketing structure $[ [ \:] [ \:]]$ into $[ \
][ \ ][ \ ]$, where all the redexes correspond to adjacent $[$ and
$]$. Such a reshuffling is more difficult to describe in classical
notation. I.e.\ it is hard to say what exactly happened when
$((\lambda_{x_7}.(\lambda_{x_6}. \lambda_{x_5}.x_5x_4)x_3)x_2)x_1$, is
reshuffled to
$(\lambda_{x_7}.(\lambda_{x_6}. (\lambda_{x_5}.x_5x_4)x_1)x_3)x_2$.
This is another attractive feature of our item notation, which we
shall also describe in this paper, studying its characteristics.
\end{enumerate}
Mon 23 May 1994
John Launchbury
Functional depth first search algorithms
Depth-first search is often presented as a process, moving from one
node of a graph to another. An alternative view is to see it as a
specification for a tree-like value: the DFS forest, with algorithms
being defined over the forest. This leads to a compositional approach
to DFS algorithms quite unlike traditional treatments.
It has other benefits too. The style of program we obtain is quite
amenable to formal proof, which I will exemplify through a
calculational-style proof of a strongly-connected components
algorithm.
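A condensed sketch of that view (added here, using Data.Set where the
real development threads a mutable array through a state monad): the
DFS forest is generated lazily as a value, then pruned afterwards.

    import qualified Data.Set as Set

    type Vertex   = Int
    data Tree a   = Node a (Forest a) deriving Show
    type Forest a = [Tree a]

    -- Generate the (possibly infinite) reachability tree lazily...
    generate :: (Vertex -> [Vertex]) -> Vertex -> Tree Vertex
    generate g v = Node v (map (generate g) (g v))

    -- ...then prune repeated vertices; laziness keeps it finite.
    dfs :: (Vertex -> [Vertex]) -> [Vertex] -> Forest Vertex
    dfs g vs = snd (chop Set.empty (map (generate g) vs))
      where
        chop seen [] = (seen, [])
        chop seen (Node v ts : us)
          | v `Set.member` seen = chop seen us
          | otherwise =
              let (seen1, ts') = chop (Set.insert v seen) ts
                  (seen2, us') = chop seen1 us
              in  (seen2, Node v ts' : us')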
Mon 30 May 1994 -- cancelled (University Holiday)
Mon 6 June 1994 -- cancelled (conflict with staff meeting)
Mon 13 June 1994
The Barendregt cube with definitions
Roel Bloo
In the Barendregt cube, there is no way to introduce definitions
which abbreviate large expressions and which can be used several
times in a program or a proof. This facility is essential for
practical use, and indeed implementations of systems of the cube,
such as Coq, Lego, and HOL, do provide it.
Definitions are names abbreviating expressions, and they occur in
contexts where we reason about terms. For example, ``let id = (\x:A.x)
: A --> A in (\y:A-->A. id id)'' defines id to be \x:A.x in a complex
expression in which id occurs many times.
The intended meaning of a definition is that the definiendum x can be
substituted by the definiens a in the expression b. In a sense, an
expression ``let x : A be a in b'' is similar to ``(\x:A.b)a''. It is
not intended, however, to substitute all the occurrences of x in b by a.
Nor is it intended that the definition be a part of our term. Rather,
the definition will live in the environment (or context) in which we
evaluate or reason about the expression.
Advantages of the definition ``let x : A be a in b'' over ``(\x:A.b)a''
are:
1. It is convenient to have the freedom of substituting only some of
the occurrences of an expression in a given formula.
2. Defining x to be a in b can be used to type b. For example,
without definitions, it is not possible to type \y:x.\f:a-->a.fy even
when we somehow know that x is an abbreviation for a. This is
because f expects an argument of type a, y is of type x, and there
is no way to use the information that x is an abbreviation for a.
3. Typing algorithms for terms in the Cube with definitions become
much shorter than the algorithms in the absence of definitions.
In this talk, we will describe the extension of the eight type systems
of the Cube with definitions in an elegant way, and will show that all
the important properties of the Cube still hold. Of these properties,
we are interested in Church Rosser, Subject Reduction, Unicity of
Typing, and Strong Normalisation.
John
O'Donnell, jtod@dcs.glasgow.ac.uk