Glasgow Functional Programming Seminars (1992-93)

Here are the titles, speakers and abstracts for all the Functional Programming Seminars for 1992-93. You can also find general information about the seminar series and the current schedule of talks.

For more information, contact John O'Donnell.



A Simple Type of Higher-Order Backward Analysis
Clem Baker-Finch
Cambridge University Computing Laboratory
Monday, 11 October 1993

-------------------------------------------------------------------------------
12 Oct 1992
John Launchbury
A Natural Semantics of Lazy Evaluation

-------------------------------------------------------------------------------
19 Oct 1992
John O'Donnell
Three impossible things before dinner: ESF arrays

Three different ways to generalise ordinary arrays have played a significant
role in algorithms and data structures: extensible arrays, sparse arrays and
functional arrays. Ordinary array access takes constant time, which is one
reason arrays are so useful. Unfortunately, all three generalisations are
difficult to implement efficiently. I conjecture that for each generalisation,
there does not exist a Finite Random Access Machine algorithm that implements
each operation in constant time.

This talk presents two new results:

1. A new data structure, called an ESF array, that combines extensibility,
   sparseness and functionalness in a natural way.

2. A way to implement ESF arrays which guarantees that every operation takes
   constant time. (According to the conjecture, this might seem to be
   impossible!)

Furthermore, whenever data becomes inaccessible it is reclaimed in constant
time, and there are no space leaks.

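To make the combination concrete, here is a minimal Haskell sketch of the kind
of interface involved (the names and the Data.Map model are mine, not the
paper's): extensible, because there are no index bounds; sparse, because unset
indices read as a default; functional, because update returns a new array and
old versions remain usable. This naive model costs O(log n) per operation,
whereas the ESF structure described in the talk is claimed to support every
operation in constant time.

import qualified Data.Map as Map

-- A naive reference model only: an ESF-style array as a default element
-- plus a finite map of the explicitly set indices.
data ESF a = ESF a (Map.Map Integer a)

esfEmpty :: a -> ESF a
esfEmpty d = ESF d Map.empty

esfLookup :: Integer -> ESF a -> a
esfLookup i (ESF d m) = Map.findWithDefault d i m

esfUpdate :: Integer -> a -> ESF a -> ESF a
esfUpdate i x (ESF d m) = ESF d (Map.insert i x m)
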
-------------------------------------------------------------------------------
2 Nov 1992
Alan Jeffrey, University of Sussex
A chemical abstract machine for graph reduction

Berry and Boudol's Chemical Abstract Machine is a method of describing the
operational semantics of concurrent systems. In this talk, I will present
parallel graph reduction of the lazy lambda-calculus with rec and strictness
annotations as a Chemical Abstract Machine, and show that the resulting
testing semantics is adequate with respect to testing equivalence for the lazy
lambda-calculus. I will also present a pi-calculus implementation of the graph
reduction machine, and show that the resulting testing semantics is also
adequate.

-------------------------------------------------------------------------------
9 Nov 1992
Simon Jones
Compile time garbage collection

-------------------------------------------------------------------------------
16 Nov 1992
Phil Wadler
A taste of linear logic

Some of the best things in life are free; and some are not. Truth is free:
having proved a theorem, you may use this proof as many times as you wish.
Food has a cost: having baked a cake, you may eat it only once. In this sense,
traditional logic is about truth, while linear logic is about food.

This talk will describe connections between traditional logic and the type
systems used in functional languages such as ML and Haskell. It will then go
on to describe a type system based on linear logic, and its application to
improving the expressiveness and performance of functional languages.

The talk is aimed at people with only a minimal knowledge of functional
programming. Curious non-functional programmers are encouraged to attend!

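As a tiny, anachronistic illustration of the cake analogy (using the
LinearTypes extension of today's GHC, which long postdates this talk, rather
than the system described here), a linear type promises that an argument is
consumed exactly once:

{-# LANGUAGE LinearTypes #-}
module Cake where

-- 'swap' consumes its pair argument exactly once, so it can be given a
-- linear type.
swap :: (a, b) %1 -> (b, a)
swap (x, y) = (y, x)

-- A definition that used its argument twice would be rejected:
-- eatTwice :: cake %1 -> (cake, cake)
-- eatTwice c = (c, c)   -- error: c is used more than once
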
-------------------------------------------------------------------------------
23 Nov 1992
Uday Reddy, University of Illinois
Curry-Howard isomorphism

A Curry-Howard isomorphism is an isomorphism between a programming language
and a constructive logic. The types of the programming language correspond to
propositions in the logic, and programs correspond to proofs. This form of
correspondence was initially applied in computer science to formal methods:
specification and program construction (as evidenced by Nuprl and other
type-theoretic frameworks). Newer applications are now emerging in programming
languages as well as logic. Girard's linear logic, in particular, brings this
to focus. All of this requires a fresh look at the Curry-Howard isomorphism.
It turns out that deeper mysteries are locked inside this theoretical result.
An examination of them broadens our understanding of programming languages as
well as that of logic.

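A concrete instance of the correspondence, as a Haskell illustration of the
general idea rather than an example from the talk: read function types as
implications and pairs as conjunctions, and a well-typed program is a proof of
the proposition its type denotes.

-- "If A implies B, and B implies C, then A implies C": this program is a
-- proof of that proposition under the types-as-propositions reading.
compose :: (a -> b) -> (b -> c) -> (a -> c)
compose f g = \x -> g (f x)

-- "A and B implies B and A": conjunction is commutative.
swapConj :: (a, b) -> (b, a)
swapConj (x, y) = (y, x)
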
-------------------------------------------------------------------------------
30 Nov 1992
Uday Reddy, University of Illinois
Programming language applications of linear logic

Girard's linear logic has recently evoked much interest for its programming
language applications. The most notable are the application of intuitionistic
linear logic for functional programming with "dynamic" data, and the use of
full linear logic for concurrent process calculi. (These applications treat
linear logic as a "constructive" logic: proofs as programs and propositions as
types).

We continue this study by looking at the application of full linear logic to
functional programming and logic programming contexts. We show that linear
logic gives a programming language that falls somewhere between functional and
logic programming paradigms.

- It is more expressive than functional programming in that it adds features
  similar to "logic variables".

- It is a restricted form of logic programming where the logic variables are
  "directed".

In a sense, linear logic gives an "ideal integration" of functional and logic
programming paradigms where the notations as well as the properties of the
paradigms are preserved.

(This talk is intended for a department-wide audience. No previous familiarity
with linear logic is assumed).

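As a loose operational analogy only (my own, not the calculus discussed in the
talk): a "directed" logic variable behaves like a single-assignment cell with
one designated producer, which in today's Haskell one might model with an
empty MVar.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

-- A single-assignment variable: the producer fills it exactly once, and a
-- consumer blocks until the value arrives.
main :: IO ()
main = do
  v <- newEmptyMVar
  _ <- forkIO (putMVar v (6 * 7 :: Int))  -- the single producer
  x <- readMVar v                         -- a consumer waits for the value
  print x
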
-------------------------------------------------------------------------------
7 Dec 1992
Patrick Sansom
Generations of Functional Languages

Generational garbage collection ideas have been around since 1983, but have
not been used in lazy functional programming language implementations. This
talk describes the basic generational garbage collection ideas and examines
the reasons why they have not been incorporated into lazy FP systems. I go on
to discuss the generational garbage collection scheme I have implemented for
our Haskell system and present some encouraging results.

-------------------------------------------------------------------------------
14 Dec 1992
Ian Poole, MRC Human Genetics Unit, Edinburgh
Image Cytometry Applications in Haskell

The HGU is involved in a DTI funded project entitled "Safety Assurance in
Diagnostic Laboratory Imaging", with the unit's cervical monolayer
pre-screening system serving as exemplar. We intend to adopt Haskell as a
formal specification and implementation language in the project. In this talk
I'll describe some of the pragmatic approaches we have developed in order to
implement image analysis applications in Haskell with usable efficiency. We
have adopted the monadic model of I/O to control an automated microscope and
image capture hardware --- I'll describe our early experience.

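A minimal sketch of the monadic control style described, with hypothetical
stand-ins for the hardware bindings (none of these names come from the
project):

type Image = [[Int]]

-- Hypothetical stand-ins for the microscope and frame-grabber bindings.
moveStage :: (Int, Int) -> IO ()
moveStage pos = putStrLn ("stage moved to " ++ show pos)

captureImage :: IO Image
captureImage = return (replicate 4 [0, 1, 1, 0])

-- A placeholder analysis step.
cellCount :: Image -> Int
cellCount = sum . map sum

-- Hardware control and analysis sequenced in the I/O monad.
main :: IO ()
main = do
  moveStage (3, 4)
  img <- captureImage
  print (cellCount img)
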
-------------------------------------------------------------------------------
25 Jan 1993
John Launchbury
A synopsis of POPL93

-------------------------------------------------------------------------------
1 Feb 1993
Cordy Hall
Optimising list representations in a lazy language

-------------------------------------------------------------------------------
8 Feb 1993
Short talks:
1. David King, An equivalence of sorts
2. Andy Gill, Deforestation with build/foldr

-------------------------------------------------------------------------------
22 Feb 1993
Simon Marlow
Update avoidance analysis

-------------------------------------------------------------------------------
15 Mar 1993
John O'Donnell
Bidirectional fold and scan

Bidirectional fold generalises foldl and foldr to allow simultaneous
communication in both directions across a list. Bidirectional scan calculates
the list of partial results of a bidirectional fold, just as scanl and scanr
calculate the partial results of a foldl or foldr. Mapping scans combine a map
with a scan, and often simplify programs using scans. The bidirectional
functions are useful both for specifying digital circuits and for expressing
bidirectional looping constructs. This family of functions is significant
because it expresses important patterns of computation, it simplifies
equational reasoning, and it supports parallel programming. The bidirectional
mapping scan motivates Scansort, a new functional implementation of Quicksort
that faithfully models the efficient partitioning and data movement of
Quicksort.

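To fix intuitions, here is a deliberately restricted Haskell sketch (not the
definitions from the talk): when the information flowing left-to-right is
independent of the information flowing right-to-left, a bidirectional fold
simply pairs a foldl with a foldr, and a bidirectional scan pairs the
corresponding partial results at each position. The general versions let the
two directions interact within a single pass.

-- Restricted special case: the two directions do not interact.
bifold :: (a -> c -> a) -> (c -> b -> b) -> a -> b -> [c] -> (a, b)
bifold f g a b xs = (foldl f a xs, foldr g b xs)

-- Partial results: at each position, pair the fold of the prefix (arriving
-- from the left) with the fold of the suffix (arriving from the right).
biscan :: (a -> c -> a) -> (c -> b -> b) -> a -> b -> [c] -> [(a, b)]
biscan f g a b xs = zip (scanl f a xs) (scanr g b xs)
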
-------------------------------------------------------------------------------
10 May 1993
Jim Mattson
An Effective Speculative Evaluation Technique for Parallel Supercombinator Graph Reduction

Supercombinator graph reduction is a popular implementation technique for
non-strict functional programming languages, and the approach is well-suited
to parallel processing. Speculative evaluation is a strategy for increasing
the parallelism of some programs by assigning potentially useful computations
to idle processors before the computations become necessary to the execution
of the program. Complications arise when potentially useful speculative tasks
become either necessary or irrelevant, because the effects of each state
change must propagate to all of the task's descendants. These issues grow
increasingly difficult to manage in the presence of shared subgraphs.

This talk discusses a low-impact model of speculative evaluation for
loosely-coupled shared-memory MIMD systems, with an emphasis on minimizing the
impact of speculative reduction on the evaluation of conservative tasks. The
premise for this model is that speculative evaluation can be an effective
optimization technique as long as the processors and other resources that
would otherwise be idle bear most of the overhead of speculation. However,
reasonable efficiency and the ability to exploit sufficient speculative
parallelism are necessary to yield noticeable improvements in program
performance.

A prototype implementation of the model has been developed for a 122-processor
BBN TC2000 "butterfly," and the results of several benchmark programs will be
discussed. The impact of speculative evaluation on conservative reduction is
assessed by comparing the execution times for the same benchmark executed with
both conservative and speculative evaluation strategies. The system's
efficiency and ability to exploit sufficient speculative parallelism are
assessed by comparing the speculative execution times with conservative
execution times for a "prescient" version of each benchmark. The results from
the benchmarks indicate that speculative evaluation is an effective
optimization technique for parallel supercombinator graph reduction and that
the benefits of speculative evaluation can outweigh the overhead incurred.

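As a present-day analogy only (GHC's par combinator, not the runtime machinery
described in the talk), speculation means sparking work before it is known to
be needed; sparks that turn out to be unneeded are simply discarded.

import Control.Parallel (par)

-- Spark both branches before the condition is examined; an idle processor
-- may evaluate them speculatively, and whichever branch turns out to be
-- unneeded is discarded. (Requires the parallel package; compile with
-- -threaded and run with +RTS -N.)
speculativeIf :: Bool -> a -> a -> a
speculativeIf c t e = t `par` (e `par` (if c then t else e))

main :: IO ()
main = print (speculativeIf True (sum [1 .. 1000000 :: Int]) 0)
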
-------------------------------------------------------------------------------
17 May 1993
Patrick Sansom
Experiments in generational garbage collection

-------------------------------------------------------------------------------
24 May 1993
Phil Wadler
A logical basis for deforestation - or - A new Curry-Howard isomorphism

The Curry-Howard isomorphism establishes a close connection between a
programming language (lambda calculus) and a system of logic (natural
deduction). But intuitionistic logic can be formulated in several ways --
instead of natural deduction one might use sequent calculus. Is there a
programming language that corresponds to sequent calculus, as lambda calculus
corresponds to natural deduction? The answer is yes, and this provides a
solution to a problem which has annoyed functional programmers for several
years: how to generalise deforestation to a higher-order language.

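For readers new to the term, deforestation eliminates the intermediate data
structures built by compositions of functions. A first-order Haskell
illustration of the goal (not the sequent-calculus technique of the talk):

-- Before: 'map' builds an intermediate list that 'sum' immediately consumes.
sumSquares :: [Int] -> Int
sumSquares xs = sum (map (\x -> x * x) xs)

-- After deforestation: a single loop, with no intermediate list.
sumSquares' :: [Int] -> Int
sumSquares' = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x * x) xs
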
-------------------------------------------------------------------------------
Thursday, 3 June 1993
Carl A. Gunter, AT&T Bell Laboratories / University of Pennsylvania
A Proof-Theoretic Assessment of Runtime Type Errors

In this talk I will analyze the way in which a result concerning the absence
of runtime type errors can be expressed when the semantics of a language is
described using proof rules in what is sometimes called a *natural* semantics.
We argue that the usual way of expressing such results has conceptual
shortcomings when compared with similar results for other methods of
describing semantics. These shortcomings are addressed through a form of
operational semantics based on proof rules for what we call a *partial proof*
semantics. A partial proof semantics represents steps of evaluation using
proofs with logic variables and subgoals. Such a semantics allows a
proof-theoretic expression of the absence of runtime type errors that
addresses the problems with such results for natural semantics.

We demonstrate that there is a close correspondence between partial proof
semantics and a form of structural operational semantics that uses a grammar
to describe evaluation contexts and rules for the evaluation of redexes that
may appear in such contexts. Indeed, partial proof semantics can be seen as an
intermediary between such a description and one using natural semantics. Our
study is based on a case treatment of a language called RAVL (for Records And
Variants Language), which has a polymorphic type system that supports flexible
programming with records and variants.

This is joint work with Didier Remy of INRIA, Rocquencourt.

-------------------------------------------------------------------------------
John O'Donnell, jtod@dcs.glasgow.ac.uk