Coordination languages build parallel programs from two components: the computation model and the coordination model [Gelernter and Carriero, 1992]. As with evaluation strategies, programs therefore have both an algorithmic and a behavioural aspect. The two models need not belong to the same paradigm: in fact the computation model is often imperative, while the coordination language may be more declarative in nature. Programs developed in this style have a two-tier structure, with sequential processes written in the computation language and composed in the coordination language.
The best known coordination languages are PCN [Foster and Taylor, 1994] and Linda [Gelernter and Carriero, 1992], both of which adopt a much more explicit approach than evaluation strategies. Since both languages support fully general programming structures and unrestricted communication, it is, of course, possible to introduce deadlock with either of these systems, something that cannot happen with evaluation strategies. PCN composes tasks by connecting pairs of communication ports, using three primitive composition operators: sequential composition, parallel composition and choice composition. More sophisticated parallel structures such as divide-and-conquer can be constructed from these primitives, and combined into libraries of reusable templates.
Linda is built on a logically shared-memory structure. Objects (or tuples) are held in a shared area: the Linda tuple space. Linda processes manipulate these objects, passing values to the sequential computation language. In the most common Linda binding, C-Linda, this is C. Sequential evaluation is therefore performed using normal C functions.
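Linda's coordination primitives (out to deposit a tuple, rd to read one, and in/inp to withdraw one) can be illustrated with a small sketch. The following is a toy Haskell model, not C-Linda: the tuple space is represented here as an MVar-protected list, tuples are simplified to string-keyed integers, and matching is by key rather than Linda's full structural matching.

```haskell
import Control.Concurrent.MVar
import Data.List (partition)

-- A toy tuple space: tuples are simplified to (key, value) pairs and
-- matching is by key; real Linda tuples are heterogeneous and matched
-- structurally against templates.
type Tuple      = (String, Int)
type TupleSpace = MVar [Tuple]

newTupleSpace :: IO TupleSpace
newTupleSpace = newMVar []

-- out: deposit a tuple into the space.
out :: TupleSpace -> Tuple -> IO ()
out ts t = modifyMVar_ ts (return . (t :))

-- rd: read a matching tuple without removing it.
rd :: TupleSpace -> String -> IO (Maybe Tuple)
rd ts k = withMVar ts $ \tuples ->
  return (case filter ((== k) . fst) tuples of
            (t:_) -> Just t
            []    -> Nothing)

-- inp: withdraw a matching tuple, removing it from the space.
-- (Linda's `in' blocks until a match exists; inp is the non-blocking form.)
inp :: TupleSpace -> String -> IO (Maybe Tuple)
inp ts k = modifyMVar ts $ \tuples ->
  case partition ((== k) . fst) tuples of
    (t:ts', rest) -> return (ts' ++ rest, Just t)
    ([],    rest) -> return (rest, Nothing)
```

The sequential computation language (C, in C-Linda) would sit on either side of these calls, consuming and producing the values held in the tuples.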
Darlington et al. integrate the coordination language approach with the skeleton approach, providing a system for composing skeletons, SCL [Darlington et al., 1995]. SCL is basically a data-parallel language, with distributed arrays used to capture not only the initial data distribution, but also subsequent dynamic redistributions.
SCL introduces three kinds of skeleton: configuration, elementary and computational skeletons. Configuration skeletons specify data-distribution characteristics; elementary skeletons capture the basic data-parallel operations as the familiar higher-order functions map, fold, scan, etc.; and computational skeletons add control-parallel structures such as farms, SPMD and iteration. It is possible to write higher-order operations that transform configurations as well as manipulate computational structures. An example taken from Darlington et al., but rewritten in Haskell style, is the partition function, which partitions a (sequential) array into a parallel array of p sequential subarrays.
    partition :: Partition_pattern -> Array Index a
              -> ParArray Index (Array Index a)
    partition (Row_block p) a = mkParArray [ ii := b ii | ii <- [1..p] ]
      where
        b ii   = array bounds
                   [ (i,j) := a ! (i+(ii-1)*l/p, j) | i <- [1..l/p], j <- [1..m] ]
        bounds = ((1,1), (l/p, m))
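The SCL fragment above is pseudocode (ParArray, mkParArray and := are SCL notions, and l and m stand for the array's dimensions). As a runnable illustration of the same Row_block idea, the following standard-Haskell sketch splits a matrix, represented as a list of rows, into p contiguous row blocks; the name rowBlock is ours, not SCL's.

```haskell
-- A list-based analogue of SCL's Row_block distribution: split a matrix,
-- represented as a list of rows, into p contiguous row blocks. The name
-- rowBlock is illustrative; SCL's ParArray would place each block on a
-- separate processor, which a plain list cannot express.
rowBlock :: Int -> [[a]] -> [[[a]]]
rowBlock p rows = go rows
  where
    blockSize = length rows `div` p   -- assumes p evenly divides the rows
    go [] = []
    go rs = let (block, rest) = splitAt blockSize rs
            in block : go rest
```

For example, rowBlock 2 applied to a four-row matrix yields two blocks of two rows each.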
A similar integration is provided by the P3L language [Danelutto et al., 1991], which provides a set of skeletons for common classes of algorithm.
Crowl and Leblanc [Crowl and Leblanc, 1994] have developed an approach, control abstraction, that has similarities to evaluation strategies. It is based on explicitly parallel imperative programs, with explicit synchronisation and communication as well as explicit task creation.
Like evaluation strategies, the control abstraction approach also separates parallel control from the algorithm. Each control abstraction comprises three parts: a prototype specifying the types and names of the parameters to the abstraction; a set of control dependencies that must be satisfied by all legal implementations of the control abstraction; and one or more implementations.
Each implementation is effectively a higher-order function, parameterised on one or more closures representing units of work that could be performed in parallel. These closures are invoked explicitly within the control abstraction. Implementations can use normal language primitives or other control abstractions.
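In a functional setting such an abstraction can be sketched directly as a higher-order function: the type gives the prototype, and several implementations may satisfy the same control dependency. The following illustrative sketch (our names, not Crowl and Leblanc's notation) gives a forAll prototype with a lazy and an eager implementation; in GpH the eager version would spark each unit of work with par, but seq is used here so the sketch runs with the Prelude alone.

```haskell
-- A control abstraction sketched as a Haskell higher-order function.
-- The type is the prototype: apply a unit of work to every element.
-- The control dependency (every unit completes before its result is
-- consumed) is what any implementation must honour.
type ForAll a b = (a -> b) -> [a] -> [b]

-- Implementation 1: plain lazy sequential application.
forAllSeq :: ForAll a b
forAllSeq = map

-- Implementation 2: the same prototype, forcing each unit of work as it
-- is produced. With GpH one would spark each unit with `par'; `seq' is
-- used here so the sketch needs only the Prelude.
forAllEager :: ForAll a b
forAllEager _ []     = []
forAllEager f (x:xs) = let y = f x in y `seq` (y : forAllEager f xs)
```

Because both implementations satisfy the same control dependency, either can be substituted for the other without changing the program's result.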
In our purely functional context, Crowl and Leblanc's control dependencies correspond precisely to the evaluation degree of a strategy. Their requirement that implementations conform to the stated control dependencies is thus equivalent in our setting to requiring that strictness is preserved in any source-to-source transformation involving an evaluation strategy. This is, of course, a standard requirement for any transformation in a non-strict functional language.
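The notion of evaluation degree can be made concrete with a minimal, Prelude-only sketch of strategies in the GpH style (the real strategies module also supplies parallel combinators, omitted here): r0 evaluates nothing, rwhnf evaluates to weak head normal form, and rnfList evaluates a list of integers completely.

```haskell
-- A minimal, Prelude-only sketch of evaluation strategies after GpH.
-- A strategy evaluates its argument to some degree and returns unit;
-- `using' applies a strategy to a value.
type Strategy a = a -> ()

using :: a -> Strategy a -> a
using x s = s x `seq` x

-- r0 performs no evaluation at all.
r0 :: Strategy a
r0 _ = ()

-- rwhnf evaluates its argument to weak head normal form only.
rwhnf :: Strategy a
rwhnf x = x `seq` ()

-- A deeper evaluation degree: force the spine and elements of a list.
rnfList :: Strategy [Int]
rnfList []     = ()
rnfList (x:xs) = x `seq` rnfList xs
```

A source-to-source transformation that preserves strictness preserves exactly these evaluation degrees, which is the sense in which they play the role of Crowl and Leblanc's control dependencies.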
Compared with the work described here, control abstractions take a more control-oriented approach, relying on a meta-language to capture the essential notions of closure and control dependency that are directly encoded in our GpH-based system. In this system, we also avoid the complications caused by explicit encoding of synchronisation and communication, though perhaps at some cost in efficiency.
Crowl and Leblanc have applied the technique in a prototype parallelising compiler. They report good performance results compared with hand-coded parallel C, though certain optimisations must be applied by hand. This encourages us to believe that evaluation strategies could also be applied to imperative parallel programs.
Finally, there is a clear relationship between control abstraction and skeleton-based approaches. In fact, control abstractions could be seen as an efficient implementation technique for algorithmic skeletons.