[theory meets practice]

Glasgow Functional Programming
Seminar Schedule

The next talk
Schedule for 1995-96
Titles and Abstracts
Related seminar series

Everyone is welcome at the Glasgow Functional Programming Seminars. Each talk (including questions) is limited to one hour, but may be shorter. Please contact John O'Donnell for more information, to volunteer a talk, or to suggest a speaker. You can read the titles, authors and abstracts for all the past seminars held by the Glasgow Functional Programming Group during 1994-95, 1993-94 and 1992-93.


Schedule for 1995-96

Past talks

Sep 20  James Harland        The Linear Logic language Lygon
Oct  6  Joachim Niehren      Functions, objects, and constraints in Oz
Oct  9  Barry Jay            Covariant types
Oct 16  Andy Gill            A cheaper form of deforestation
Oct 23  Phil Wadler          Compiling with continuations and monads
Oct 30  Simon Marlow         A type system for Erlang
Nov  6  Satnam Singh         The Glasgow Ruby compiler
Nov 13  Chris Reade          Functional GUI toolkit
Nov 20  David King           Functional programming and graph algorithms
Nov 27  Murray Cole          Parallel programming with algorithmic skeletons
Dec  4  Staff meeting
Dec 11  Simon Peyton Jones   Concurrent Haskell
Jan  8  Phil Trinder         Strategies for writing Parallel Haskell Programs
Jan 15  John O'Donnell       The FPLE'95 conference
Jan 22  Patrick Sansom       Time and Space Profiling for Functional Languages
Jan 29  Simon B. Jones       Experiences with Clean I/O
Feb  5  Three Glaswegians    The POPL'96 Conference
Feb 12  David N. Turner      Compiling Pi Calculus to C
Feb 19  Holiday
Feb 22  Joe Wells            Type inference in System F
Feb 26  Tony Davie           Tutorial: Axiom and its type system
Mar  4  no talk
Mar 11  Greg Michaelson      Parallelising fold without associativity
Apr 15
Apr 22  Hans Loidl           Granularity in parallel functional programming
Apr 29  Kevin Hammond

Future talks

May  6  Holiday
May 13
May 20
May 27  Holiday
Jun  3
Jun 10
Jun 17


Titles and Abstracts

Wednesday, September 20, 1995
4pm, Conference Room

An Overview of the Linear Logic Programming Language Lygon
James Harland
Royal Melbourne Institute of Technology

Recently there has been significant interest in the application of linear logic, a logic designed with bounded resources in mind, to computer science tasks, and in particular to the design of logic programming languages. Such languages provide a notion of resource-oriented programming, which leads to programs that are more concise and elegant than their equivalents in languages based on classical logic, such as Prolog. In this talk we give an overview of the linear logic programming language Lygon. In order to provide the programmer with maximum flexibility, Lygon has been designed with as few restrictions as possible whilst still remaining a logic programming language. As a result, Lygon appears to contain the broadest class of features of any linear logic programming language. We describe the design, implementation and applications of the language, including several example programs. We argue that this shows that the paradigm of resource-oriented programming is natural and effective for a variety of applications.

James Harland
Department of Computer Science
Royal Melbourne Institute of Technology
Melbourne, Australia
jah@cs.rmit.edu.au


Friday, October 6, 1995
4pm, Conference Room

Functions, objects and constraints in Oz
Joachim Niehren
Universität des Saarlandes

Oz is a concurrent constraint language developed at DFKI in Saarbrücken over the last four years. Oz integrates functions, stateful concurrent objects, and encapsulated search. The usage of constraints is twofold:

  1. They provide for communication over shared data structures, which makes laziness expressible in a concurrent setting.
  2. They allow search trees to be pruned when solving combinatorial problems.

In the talk I will introduce the Oz language and some of its foundations.

Joachim Niehren
Universität des Saarlandes
DFKI
Saarbrücken
Germany


Monday, October 9, 1995
4pm, Conference Room

Covariant Types
Barry Jay
University of Technology, Sydney

The covariant type system is rich enough to represent polymorphism on inductive types, such as lists and trees, and yet is simple enough to have a set-theoretic semantics. Its chief novelty is to replace function types by transformation types, which denote parametric functions. Their free type variables are all in positive positions, and so can be modelled by covariant functors. The semantics is expressed in terms of data functors and arbitrary strong natural transformations between them. Each such transformation is given by a uniform algorithm, creating a tight link with the parametric polymorphism of the type system.

Barry Jay
University of Technology
Sydney, Australia


Monday, October 16, 1995
4pm, Conference Room

A Cheaper Form of Deforestation
Andy Gill
University of Glasgow

In functional languages intermediate data structures are often used as "glue" to connect separate parts of a program together. Deforestation is the process of automatically removing intermediate data structures. In this talk we present and analyse a new approach to deforestation. This new approach is both practical and general.

We concentrate on the problem of list removal rather than the more general problem of arbitrary data structure removal. This more limited scope allows a complete evaluation of the pragmatic aspects of using our deforestation technology.

We have an implementation in the Glasgow Haskell compiler, and this implementation has allowed practical feedback. One important conclusion is that a new analysis is required to infer function arities and the linearity of lambda abstractions. This analysis renders the basic deforestation algorithm far more effective.

In our talk we give a detailed assessment of our implementation of deforestation. We measure the effectiveness of our deforestation on a suite of real application programs. We also observe the costs of our deforestation algorithm.
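The flavour of the list-removal technique can be sketched as follows. This is a hand-worked illustration of the short-cut ("foldr/build") fusion rule rather than the compiler's actual implementation; the names `build`, `upto` and `sumList` are chosen for the example.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A list producer abstracted over cons and nil:
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- The fusion rule  foldr k z (build g) = g k z
-- removes the intermediate list entirely.

-- A producer and a consumer:
upto :: Int -> Int -> [Int]
upto lo hi = build (\c n ->
               let go i | i > hi    = n
                        | otherwise = c i (go (i + 1))
               in go lo)

sumList :: [Int] -> Int
sumList = foldr (+) 0

-- After applying the rule by hand, the composition runs
-- without ever building a list:
sumUpto :: Int -> Int -> Int
sumUpto lo hi = (\c n ->
                  let go i | i > hi    = n
                           | otherwise = c i (go (i + 1))
                  in go lo) (+) 0

main :: IO ()
main = print (sumList (upto 1 10), sumUpto 1 10)
```

Both expressions compute the same sum; the fused version never allocates the intermediate list, which is exactly the saving the talk measures on real programs.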


Monday, October 23, 1995
4pm, Conference Room

The Essence of Compiling with Continuations and Monads
Phil Wadler
University of Glasgow

Amr Sabry (Chalmers) and Philip Wadler (Glasgow)

Every functional programmer knows that CPS stands for continuation-passing style. But why is CPS so popular? One reason is that translating a call-by-value program into CPS makes it easier to reason about: fewer equalities are provable between source terms than between their CPS equivalents.

Sabry and Felleisen augmented the call-by-value calculus with additional rules, and showed that the augmented calculus proves as many equalities as its CPS equivalent. Flanagan, Sabry, Duba, and Felleisen went on to relate these ideas to simplified compilers and abstract machines. Similar notions have been explored by Danvy and his colleagues. And eerily related notations appear in the Spineless Tagless G-machine.

Here we give a presentation that is simpler than the previous work, and establishes stronger results. It is based on two calculi of Moggi: the well-known monadic meta-language, lambda-m; and the less-known corresponding call-by-value calculus, lambda-c.

See and understand the deep structure of CPS! Marvel at the meaning of Plotkin's administrative reductions! Comprehend the close connection between call-by-value and call-by-need! Be astounded as adjoints are applied to compiling technology! A grand time will be had by all.


Monday, October 30, 1995
4pm, Conference Room

A Type System for Erlang
Simon Marlow
University of Glasgow

Erlang is a strict, first order, dynamically typed functional programming language with support for concurrency and distributed programming. The goal of this project is to design and implement a static type system for Erlang.

The style of existing Erlang programs necessitates that the type system include some form of subtyping, and that it be able to derive recursive type definitions without any programmer-supplied declarations.

In this talk I will describe the current state of our type system design, which is based on a recursive constraint solving system. Typing rules generate typing constraints of the form 'T <= U', and the solver determines whether the system as a whole is consistent.

I will also describe how various transformations can be applied to a set of typing constraints, in order to simplify types for presentation to the user.
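As a toy illustration of what constraints of the form 'T <= U' look like, here is a consistency check over a tiny invented lattice of ground types. This is only a sketch: the actual system described in the talk handles type variables and recursive constraints, which this fragment does not attempt.

```haskell
-- Invented ground-type lattice for illustration only:
-- TNone is below everything, TAny is above everything.
data Ty = TNone | TInt | TAtom | TAny
  deriving (Eq, Show)

-- The subtype relation on ground types.
sub :: Ty -> Ty -> Bool
sub TNone _    = True
sub _     TAny = True
sub a     b    = a == b

-- A constraint set is consistent when every 'T <= U' holds.
consistent :: [(Ty, Ty)] -> Bool
consistent = all (uncurry sub)

main :: IO ()
main = do
  print (consistent [(TInt, TAny), (TNone, TAtom)])
  print (consistent [(TAny, TInt)])
```

The real solver must decide consistency for constraints over unknowns, which is where the recursive constraint-solving machinery comes in.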


Monday, November 6, 1995
4pm, Conference Room

A Haskell Application: The Glasgow Ruby Compiler
Satnam Singh
University of Glasgow

The Glasgow Ruby compiler has been written in Haskell. We describe the architecture of the system and document our experience of developing a medium scale system using a functional language. We will try and highlight the pros and cons of using the Glasgow Haskell compiler for this project.


Monday, November 13, 1995
4pm, Conference Room

Yet Another Functional Graphical User Interface Toolkit
Chris Reade
Brunel University

This talk will be about a functional graphical user interface toolkit being developed in ML (to run on Windows 95). We discuss some of the practical constraints and goals for our functional GUI tools. Although this is mostly developmental work using recent ideas from the functional programming community, we will pick out some issues relevant to functional programming research.

(Work done with Panos Argiris at Abstract Hardware Ltd.)


Monday, November 20, 1995
4pm, Conference Room

Functional Programming and Graph Algorithms
David King
University of Glasgow

It is no coincidence that the title of this talk is also the title of my thesis. Ergo you're in for a lightning and hopefully enlightening tour through some of the principal aspects of my thesis work.

It is a universally acknowledged truth that a computing scientist with difficult problems to solve will be looking for a programming language that makes his or her life easier. Functional languages make a computing scientist's life easier for many problem domains. Graph algorithms, however, are a problem domain that is notoriously difficult for functional languages (especially pure ones). Then along came monads: the programming technique that reaches problem domains that other programming techniques have never reached.

My thesis claim, and the claim of this talk, is that graph algorithms can be implemented with no loss of asymptotic complexity in lazy functional languages; and that the abstraction powers of these languages allow the algorithms to be expressed in such a way that their structure is more apparent than is commonly the case.

To find out more, please come along.


Monday, November 27, 1995
4pm, Conference Room

Parallel Programming with Algorithmic Skeletons:
Two Current Developments
Murray Cole
University of Edinburgh

The `skeletal' approach to parallel programming abstracts commonly occurring patterns of control and interaction into second order programming constructs. These may be instantiated, combined and nested to produce structured parallel programs. I will discuss two aspects of this approach which are of current interest. The first concerns the design of a cost modelling framework which is universal with respect to both constructs and underlying architectures. The second outlines attempts (in the context of the Bird-Meertens formalism, which can be viewed as a skeletal framework) to devise a methodology for producing one implicitly parallel program from two implicitly sequential ones. Since both aspects of the talk describe `work in progress', audience participation is particularly welcomed.


Monday, December 4, 1995
4pm, Conference Room

Staff meeting - no FP Group meeting


Monday, December 11, 1995
4pm, Conference Room

Concurrent Haskell
Simon Peyton Jones
University of Glasgow

Until recently, functional programmers interested in parallelism have focussed entirely on *implicit* concurrency. However, some applications are most easily expressed in a programming language that supports *explicit* concurrency, notably interactive and distributed systems.

In this talk I'll describe an extension to Haskell that allows it to express explicitly concurrent applications; we call the resulting language Concurrent Haskell. The resulting system appears to be both expressive and efficient, and I'll give a number of examples of useful abstractions that can be built from Concurrent Haskell's primitives.

Concurrent Haskell is distributed with the Glasgow Haskell compiler, and is the substrate on which Sigbjorn's graphical user interface toolkit, Haggis, is built.

The interesting aspect is, of course, how to integrate the semantic purity of a functional language with the non-determinism that concurrency necessarily introduces. Do we have to throw the baby out with the bathwater? Come along and see.

I hope this talk will be of some interest to non-functional programmers too, e.g. Java hackers. You can get a sneak preview from the Concurrent Haskell paper (POPL 96) attached to my Web page.
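To give a hint of the primitives, here is a small producer/consumer pair built from forkIO and MVars. The module and function names below are those of GHC's later Control.Concurrent libraries, assumed here to match the interface the talk describes.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Two threads synchronised through a one-place buffer (an MVar).
-- putMVar blocks while the buffer is full; takeMVar blocks while
-- it is empty, so the producer and consumer proceed in lock-step.
runBuffer :: IO Int
runBuffer = do
  slot <- newEmptyMVar
  done <- newEmptyMVar
  _ <- forkIO (mapM_ (putMVar slot) [1 .. 5 :: Int])   -- producer
  _ <- forkIO $ do                                      -- consumer
         xs <- mapM (const (takeMVar slot)) [1 .. 5 :: Int]
         putMVar done (sum xs)
  takeMVar done     -- main thread blocks until the consumer finishes

main :: IO ()
main = runBuffer >>= print
```

Richer abstractions (buffered channels, quantity semaphores, and so on) can be layered on top of MVars in the same style.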


Monday, January 8, 1996
4pm, Conference Room

Strategies for writing Parallel Haskell Programs
Phil Trinder
University of Glasgow

THE PLUG: Available on your desk now!

The latest version of the Glasgow Haskell Compiler supports, inter alia, parallel evaluation. Glasgow Parallel Haskell is easily ported because it is based on PVM, a widely-available communications package. It is currently available on a Sun SparcServer shared-memory multi-processor and on networks of Suns and Alphas. This talk derives from experience gained writing parallel Haskell programs.

THE TALK:

In most current parallel non-strict functional languages, a function must describe both the value to be computed and the *dynamic behaviour*. The dynamic behaviour of a function has two aspects: Parallelism: what values could be computed in parallel, and Evaluation-degree: how much of each value should be constructed. Our experience is that specifying the dynamic behaviour of a function obscures its semantics. Strategies have been developed to address this problem, a strategy being an explicit description of a function's potential dynamic behaviour. The philosophy is that *it should be possible to understand the semantics of a function without considering its dynamic behaviour*.

In essence a strategy is a runtime function that traverses a data structure, specifying how much of it should be evaluated and possibly sparking threads to perform the construction in parallel. Because strategies are functions they can be defined on any type and can be combined to specify sophisticated dynamic behaviour. For example, a strategy can control thread granularity or specify evaluation over an infinite structure. A disadvantage is that a strategy requires an additional runtime pass over parts of the data structure. Example programs are given that use strategies in divide-and-conquer, pipeline and data-parallel applications. We are encouraging the use of strategies to parallelise the Lolita Natural Language Processor (approx. 30k lines of Haskell).
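The idea can be sketched in a few lines of Haskell. The combinator names below follow the strategies work, but 'par' is stubbed out as a sequential no-op so the sketch runs on any compiler; in Glasgow Parallel Haskell it would spark its first argument for parallel evaluation.

```haskell
-- Sequential stub: real GpH's 'par' sparks its first argument.
infixr 0 `par`
par :: a -> b -> b
par _ b = b

-- A strategy says how much of a value to evaluate.
type Strategy a = a -> ()

-- Reduce to weak head normal form.
rwhnf :: Strategy a
rwhnf x = x `seq` ()

-- Apply a strategy to a value, then return the value:
-- the semantics of the expression is unchanged by the strategy.
using :: a -> Strategy a -> a
using x s = s x `seq` x

-- Evaluate every element of a list, sparking each in parallel.
parList :: Strategy a -> Strategy [a]
parList _ []       = ()
parList s (x : xs) = s x `par` parList s xs

main :: IO ()
main = print (map (* 2) [1 .. 10 :: Int] `using` parList rwhnf)
```

Note how `using` separates the value (`map (* 2) ...`) from its dynamic behaviour (`parList rwhnf`), which is exactly the philosophy stated above.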


Monday, January 15, 1996
4pm, Conference Room

The FPLE'95 Conference
John O'Donnell
University of Glasgow

A report on the first Functional Programming Languages in Education conference, held 4-6 December 1995 in Nijmegen, Holland.


Monday, January 22, 1996
4pm, Conference Room

Time and Space Profiling for Functional Languages
Patrick Sansom
University of Glasgow

I present the first profiler for a compiled, non-strict, higher-order, purely functional language capable of measuring time as well as space usage. The profiler is implemented in the Glasgow Haskell Compiler, has low overheads, and can successfully profile large applications.

A unique feature of my approach is that I give a formal specification of the attribution of execution costs to cost centres. This specification enables me to discuss various design decisions in a precise framework, prove certain properties about the attribution of costs and examine the effects of different program transformations on the attribution of costs.


Monday, January 29, 1996
4pm, Conference Room

Experiences with Clean I/O
Simon B. Jones
University of Stirling

The Clean system is a powerful functional programming tool. It contains experiments in a number of different areas of functional language design. In particular, it has a novel proposal for the organization of input and output, and contains impressive libraries of facilities for programming graphical user interfaces.

Clean I/O is based on collections of operations that cause side effects on multiple explicit abstract values representing physical I/O entities, such as files and graphical interfaces. A system of unique types is used to ensure that these values are individually single-threaded through the program; the side-effecting I/O operations are therefore well controlled. This approach is distinct from monadic I/O schemes, which are based on a single, implicit environment and guarantee that this is single-threaded. The first part of the talk will be an introduction to the Clean I/O scheme.

The functionality provided by Clean and its I/O libraries allows monadic I/O to be implemented. This goes some way towards simplifying the `spaghetti code' that can result from direct use of the Clean I/O primitives. The programmer thus has a choice between basic, monadic (and hybrid) styles of programming. I will present an implementation of a basic I/O monad library in Clean.

In itself, the fact that the monadic approach can be implemented in Clean is unsurprising. However, implementation of the monad exposed some interesting and subtle technical points concerning the use of Clean. I will discuss these, and the solutions adopted.
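The world-threading idea, and how a monad hides it, can be sketched in Haskell (not Clean syntax). In Clean a uniqueness type would enforce single-threaded use of the world; here that is merely a programming discipline, and all the names in the sketch are invented for illustration.

```haskell
-- A toy "world": the lines written so far.
newtype World = World [String]

-- A Clean-style primitive: threads the world explicitly.
writeLine :: String -> World -> World
writeLine s (World out) = World (out ++ [s])

-- A monad that hides the world-passing.
newtype MyIO a = MyIO { runMyIO :: World -> (a, World) }

instance Functor MyIO where
  fmap f (MyIO g) = MyIO (\w -> let (a, w') = g w in (f a, w'))

instance Applicative MyIO where
  pure a = MyIO (\w -> (a, w))
  MyIO f <*> MyIO g = MyIO (\w -> let (h, w')  = f w
                                      (a, w'') = g w'
                                  in (h a, w''))

instance Monad MyIO where
  MyIO g >>= k = MyIO (\w -> let (a, w') = g w in runMyIO (k a) w')

-- The monadic wrapper over the explicit primitive.
putLine :: String -> MyIO ()
putLine s = MyIO (\w -> ((), writeLine s w))

main :: IO ()
main = let (_, World out) = runMyIO (putLine "hello" >> putLine "world")
                                    (World [])
       in mapM_ putStrLn out
```

The monadic program never mentions the world value, which is the "spaghetti" that direct use of the explicit primitives would otherwise thread through every call.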


Monday, February 5, 1996
4pm, Conference Room

The POPL'96 Conference
Phil Wadler, Simon Peyton Jones and David Turner
University of Glasgow


Monday, February 12, 1996
4pm, Conference Room

Compiling Pi Calculus to C
David N. Turner
University of Glasgow

The primary motivation of the Pict project was to design and implement a high-level concurrent language purely in terms of pi-calculus primitives. There have been many proposals for concurrent languages whose communication primitives are very similar to those of the pi-calculus. However, to our knowledge, none has proposed using such primitives as the sole mechanism of computation.

Pict's design only makes sense if the basic operations of the pi-calculus (process creation, channel creation and communication over channels) can be implemented reasonably efficiently. I will describe a simple, but efficient, compilation of pi-calculus to C (which now forms the basis of the Pict programming language implementation).


Monday, February 19, 1996
4pm, Conference Room

University Holiday


Thursday, February 22, 1996
4pm, F171

Type inference in System F and extensions and restrictions
Joe Wells
Boston University

Girard and Reynolds independently invented the second-order, polymorphically-typed lambda calculus, known as System F, to handle problems in logic and computer programming language design, respectively. Viewing a type system in the "Curry style", which associates types with untyped lambda terms, raises the questions of "typability" and "type checking". Typability asks for a term whether there exists some type it can have. Type checking asks whether a term can have a particular type. I have shown that typability and type checking are both undecidable for System F. The first step is to reduce the undecidable problem of "semi-unification" to type checking. Then, type checking is reduced to typability. The method of the latter reduction constructs contexts (lambda terms with one hole) which can simulate arbitrarily chosen sets of type assumptions.

Since typability and type checking are undecidable for System F, it is natural to ask whether these problems are decidable for restrictions and extensions of System F. One area for restrictions is in the type language. Leivant's "rank" hierarchy stratifies any type system by restricting where the type constructors other than the arrow may appear. The same methods used for System F prove that type checking and typability are undecidable for System F restricted to rank k types where k >= 3. These problems are both decidable for the rank 2 restriction of System F, which is only slightly more powerful than the core of ML.

Other research has examined extensions of System F. One extension is to add the "eta" rule, which says that if M eta-reduces to N and M can have type t, then N can have type t also. This system, F+eta, is equivalent to adding a notion of subtyping devised by Mitchell. Quite recently, Tiuryn and Urzyczyn proved the undecidability of Mitchell's subtyping relation. I prove the same result in a simpler way, using a reduction from semi-unification. There is a well-known, trivial reduction from Mitchell's subtyping to type checking for F+eta. Then, I reduce (an undecidable fragment of) type checking for F+eta to typability for F+eta. The top-level structure is similar to that of the proof for System F, but the fine details differ greatly.

The methods used in these proofs can also show differences in the relative typing power of restrictions and extensions of System F. For example, they give an easy proof that the following lambda term is strongly normalizable but not typable in System F: lambda v. let y = (lambda x.Kx(x(xv)v)) in let z = (lambda w.ww) in v(yy)(yz)

Similar results are that the rank k+1 restriction of System F types more terms than rank k, and that F+eta types more terms than System F. In general, any variation of System F that differs by restricting the type language cannot type as many terms.


Monday, February 26, 1996
4pm, Conference Room

A Tutorial Introduction to Axiom and its Type System
Tony Davie
University of St. Andrews

The talk will introduce Axiom through examples aimed at explaining interesting points about the language's type system. I will talk about Axiom's Domains and Categories and make some comparisons with Haskell. As well as simple introductory examples I hope to be able to show the definition of a moderately complicated abstract type: lists.


Monday, March 4, 1996
4pm, Conference Room

Open date


Monday, March 11, 1996
4pm, Conference Room

Parallelising fold in the absence of associativity
- or - 101 things to do with a functional edge tracker
Greg Michaelson
Heriot-Watt University


Monday, April 15, 1996
4pm, Conference Room

Open date


Monday, April 22, 1996
4pm, Conference Room

Granularity in parallel functional programming
Hans Loidl
University of Glasgow

To achieve a high efficiency in parallel functional programs it is important to generate threads that are not too small, as such threads would impose a high bookkeeping overhead. Previous studies of program granularity (i.e. of the thread sizes) indicated that rather simple runtime-system methods can improve the performance if some granularity information (relative granularities) is provided. This talk will mainly focus on generating this information for a strict functional language via a static analysis. Our main goal is a rather cheap analysis that can extract some useful information for the runtime-system. For example, in order to handle user defined recursion efficiently we plan to supply a library of recurrence equations together with their closed forms.


Monday, April 29, 1996
4pm, Conference Room


Kevin Hammond
University of St Andrews


Monday, May 6, 1996
4pm, Conference Room

Public Holiday (May Day)


Monday, May 13, 1996
4pm, Conference Room

Open date


Monday, May 20, 1996
4pm, Conference Room

Open date


Monday, May 27, 1996
4pm, Conference Room

Public Holiday (Queen's birthday)


Monday, June 3, 1996
4pm, Conference Room

Open date


Monday, June 10, 1996
4pm, Conference Room

Open date


Monday, June 17, 1996
4pm, Conference Room

Open date


Related seminar schedules

Here are pointers to the schedules for some other relevant seminar series. Your page too could be listed here!
John O'Donnell, jtod@dcs.glasgow.ac.uk, +44 (141) 330-5458