2nd International Summer School on Advances in Programming Languages

19th-22nd August, 2014
Heriot-Watt University, Edinburgh

Abstracts

  • Prof. Phil Wadler, University of Edinburgh

    Propositions as Sessions

    Session types encode communication protocols. Typically, session types are constructed from the following primitives:

    (a)  write to a channel            (a')  read from a channel
    (b)  offer a choice of actions     (b')  select from a choice of actions
    (c)  provide a service             (c')  request use of a service

    The two columns are dual: when one end of a channel performs one of the actions on the left, the other end must perform the corresponding action on the right, and vice versa. Recent work by Caires and Pfenning (2010) and Wadler (2012) reveals a correspondence between the session types of Honda (1993) and the linear logic of Girard (1987). Under one version of this correspondence, each pair of actions above corresponds to a pair of dual operators of linear logic. The talk will explain the basics of session types and their correspondence to linear logic, and the tutorial will offer students a chance to try out a new implementation of session types built on top of the Links programming language.
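
    For illustration, a minimal Haskell sketch (not the Links implementation used in the tutorial; the encoding and names are illustrative, and the service pair (c)/(c') is omitted) of session types as data, with duality swapping each action for its partner:

        -- Session types as plain data; Send/Recv and Offer/Select stand in
        -- for the primitive pairs (a)/(a') and (b)/(b') above.
        data Session
          = Send Session            -- (a)  write to the channel, then continue
          | Recv Session            -- (a') read from the channel, then continue
          | Offer  Session Session  -- (b)  offer a binary choice of actions
          | Select Session Session  -- (b') select from a binary choice of actions
          | End                     -- close the channel

        -- The dual of a session type: what the other endpoint must perform.
        dual :: Session -> Session
        dual (Send s)     = Recv (dual s)
        dual (Recv s)     = Send (dual s)
        dual (Offer  l r) = Select (dual l) (dual r)
        dual (Select l r) = Offer  (dual l) (dual r)
        dual End          = End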

    Paper: Propositions as Sessions


  • Dr. Jeremy Singer, University of Glasgow

    Mathematical Garbage: Formal Models of Memory Management

    Garbage collection is no longer an esoteric research interest. Mainstream programming languages like Java and C# rely on high-performance, memory-managed runtime systems. In this talk, I will motivate the need for rigorous models of memory management to enable more powerful analysis and optimisation techniques. I will draw on a diverse range of topics including thermodynamics, economics, machine learning and control theory.


  • Dr. Conor McBride, University of Strathclyde

    Dependent Type Systems

    Dependent type systems allow us to manage the validity of data relative to other data. Using the programming language Agda, I shall demonstrate techniques for working with invariants of size, ordering and balancing, establishing advanced safety properties with minimal proof effort. The examples will mainly come from the ICFP 2014 paper (below).
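
    As a taste of the idea, here is a size-invariant example sketched in Haskell rather than the Agda used in the lectures (our own toy example): vectors carry their length in the type, so the type checker enforces that appending adds the lengths.

        {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

        data Nat = Zero | Succ Nat

        -- Vectors indexed by their length.
        data Vec (n :: Nat) a where
          Nil  :: Vec 'Zero a
          Cons :: a -> Vec n a -> Vec ('Succ n) a

        -- Type-level addition of lengths.
        type family Add (m :: Nat) (n :: Nat) :: Nat where
          Add 'Zero     n = n
          Add ('Succ m) n = 'Succ (Add m n)

        -- The type guarantees the result length is the sum of the inputs.
        append :: Vec m a -> Vec n a -> Vec (Add m n) a
        append Nil         ys = ys
        append (Cons x xs) ys = Cons x (append xs ys)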

    Paper: How to Keep Your Neighbours in Order


  • Dr. Michele Weiland, Edinburgh Parallel Computing Centre (EPCC)

    Partitioned Global Address Space

    Partitioned Global Address Space (PGAS) programming models are increasingly establishing themselves as an alternative to the message passing and shared memory models that are prevalent in the world of parallel programming. We will introduce the core concept of PGAS and its different implementations, focussing in particular on Unified Parallel C.


  • Prof. Jeremy Gibbons, University of Oxford

    Folding Domain-Specific Languages: Deep and Shallow Embeddings

    A domain-specific language can be implemented by embedding within a general-purpose host language. This embedding may be deep or shallow, depending on whether terms in the language construct syntactic or semantic representations. The deep and shallow styles are closely related, and intimately connected to "folds" (inductive structural recursion).
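
    To fix intuitions before the lab, here is a small Haskell sketch of both styles for a toy expression language (our own example, not the one from the paper), with the shallow embedding recovered as a fold over the deep one:

        -- Deep embedding: terms build a syntax tree, interpreted separately.
        data Expr = Lit Int | Add Expr Expr

        evalDeep :: Expr -> Int
        evalDeep (Lit n)   = n
        evalDeep (Add x y) = evalDeep x + evalDeep y

        -- Shallow embedding: terms denote their semantics directly.
        type ExprS = Int

        lit :: Int -> ExprS
        lit = id

        add :: ExprS -> ExprS -> ExprS
        add = (+)

        -- The shallow interpretation is exactly a fold over the deep syntax.
        foldExpr :: (Int -> r) -> (r -> r -> r) -> Expr -> r
        foldExpr l _ (Lit n)   = l n
        foldExpr l a (Add x y) = a (foldExpr l a x) (foldExpr l a y)

        evalViaFold :: Expr -> Int
        evalViaFold = foldExpr lit add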

    Paper: Functional Programming for Domain-Specific Languages

    Lab material

    Functional Pearl and Further Reading: Folding Domain-Specific Languages: Deep and Shallow Embeddings


  • Dr. Hans Vandierendonck, Queen's University Belfast

    Parallel Programming with the Swan Task Dataflow Scheduler

    In the task dataflow approach to parallel programming, programmers do not declare or enforce parallelism. Rather, parallelism is inferred from annotations that declare the side-effects of tasks on their arguments. As such, parallelism is always "correct", provided that the annotations are correct. Arguably, the side-effects of tasks are easier to verify than parallelism itself. We will present the design of the Swan task dataflow language and the implementation of its runtime system.

    Paper: A Unified Scheduler for Recursive and Task Dataflow Parallelism


  • Prof. Fritz Henglein, DIKU, University of Copenhagen

    Regular Expressions as Types

    A regular expression (RE) is usually interpreted as a decision problem (language): classifying strings into accepted and rejected ones. This is, however, inadequate for programming applications where extracting and transforming data, not just an accept/reject answer, is required.

    We show how to interpret an RE as a type instead. This naturally captures the parse trees of a string, not just its acceptance. We develop type, automata and algorithmic theory to, for example, functionally specify regular expression parsing where Kleene star yields the list of all matches in the input; perform asymptotically optimal time- and space-efficient parsing without any backtracking; and synthesize code for converting parse trees under one RE into parse trees under another. The lecture emphasizes thinking about a regular expression as a type rather than a language, and illustrates the semantic, expressive and algorithmic mileage one can get out of that for stream-processing applications.
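
    As a flavour of the interpretation (our notation, not the project's library), each RE can be read as the Haskell type of its parse trees:

        -- Reading regular expressions as types of parse trees:
        --   0 (empty RE)  ~  Void           (no matching strings, no parse trees)
        --   1 (epsilon)   ~  ()             (one trivial parse of the empty string)
        --   a (literal)   ~  Char           (the matched character)
        --   e1 e2         ~  (T1, T2)       (a parse of e1 followed by one of e2)
        --   e1 | e2       ~  Either T1 T2   (a parse of one of the alternatives)
        --   e*            ~  [T]            (Kleene star: the list of all matches)

        -- For example, parse trees of the RE (a|b)* have this type:
        type ParseAB = [Either Char Char]

        -- One parse of the string "aba":
        exampleParse :: ParseAB
        exampleParse = [Left 'a', Right 'b', Left 'a']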

    Further reading: Kleene meets Church project website.


  • Prof. Rita Loogen, Philipps-Universität Marburg

    Skeleton-Based Parallel Programming in Eden

    Algorithmic skeletons are higher-order functions defining parallel computation schemes. Our parallel Haskell extension Eden provides a large skeleton library, ranging from simple parallel maps to sophisticated workpool and divide-and-conquer schemes. We will show that, in the best case, parallel programming in Eden reduces to choosing, instantiating and composing appropriate skeletons.
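
    A minimal usage sketch (the import is abbreviated and assumed; see the Eden web page below for the exact interface): parallelising a map then amounts to swapping in the skeleton.

        import Control.Parallel.Eden (parMap)   -- assumed import of the parMap skeleton

        slowFib :: Int -> Integer
        slowFib n | n < 2     = toInteger n
                  | otherwise = slowFib (n - 1) + slowFib (n - 2)

        -- parMap creates one Eden process per list element.
        main :: IO ()
        main = print (sum (parMap slowFib [30 .. 40]))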

    Paper: Eden - Parallel Functional Programming in Haskell

    Eden: Eden web page


  • Prof. Sven-Bodo Scholz, Heriot-Watt University

    The Powers of Array Programming

    Array programming enables programmers to manipulate their data in terms of n-dimensional address spaces. This not only places a powerful set of advanced data-manipulation operations at programmers' fingertips, it also enables compilers to produce highly efficient parallel code that runs on various multi-core architectures, including GPUs, without any further modification.

    We give a brief introduction to the functional array programming language SaC and provide a glimpse of how array programming empowers programmers to quickly reorganise potentially large data in sophisticated ways across different application domains. The lab offers an opportunity to experience the power of array programming first hand.

    Paper: SAC — A Functional Array Language for Efficient Multi-threaded Execution

    SAC: Single Assignment C web page, SAC 1.0 Tutorial


  • Dr. Hans-Wolfgang Loidl, Heriot-Watt University

    Glasgow parallel Haskell

    Parallel programming in a functional language provides a number of advantages: ease of parallelisation (in a side-effect-free language), high level of abstraction, clean separation of computation and coordination, a natural way of encoding skeletons, and deterministic parallelism. In this presentation I will give an overview of parallel programming in the Glasgow parallel Haskell extension of the non-strict, pure functional programming language Haskell. I will focus on evaluation strategies as the means of structuring and composing parallel code.
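
    For a first flavour, here is a small sketch using the Control.Parallel.Strategies module from the parallel package (the lectures cover the full API): the algorithm is an ordinary map, and the strategy attached with 'using' supplies the coordination.

        import Control.Parallel.Strategies (parList, rdeepseq, using)

        slowFib :: Int -> Integer
        slowFib n | n < 2     = toInteger n
                  | otherwise = slowFib (n - 1) + slowFib (n - 2)

        -- Evaluate the list elements in parallel, each to normal form.
        fibs :: [Integer]
        fibs = map slowFib [30 .. 40] `using` parList rdeepseq

        main :: IO ()
        main = print (sum fibs)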

    Paper: Seq no more: Better Strategies for Parallel Haskell

    GpH: Glasgow parallel Haskell web page

    Further reading: "Lazy Data-Oriented Evaluation Strategies", Prabhat Totoo, Hans-Wolfgang Loidl. In FHPC 2014.