Glasgow Parallel Haskell

Language Constructs

GpH is a modest, conservative extension of Haskell realising thread-based semi-explicit parallelism. It provides a single primitive to create a thread; thereafter threads are managed automatically by a sophisticated runtime system. Evaluation strategies combine simple thread primitives with higher-order functions to provide high-level coordination abstractions.


GpH provides parallel (par) and sequential (pseq) composition as coordination primitives. Denotationally, both compositions are projections onto their second argument. Operationally, pseq causes the first argument to be evaluated (to weak head normal form) before the second, while par indicates that the first argument may be evaluated in parallel.

  par :: a -> b -> b    -- parallel composition
  pseq :: a -> b -> b   -- sequential composition

On the most basic level, these two primitives are all that's needed to parallelise code. Here is the simplest possible example, Fibonacci numbers:

  nfib :: Int -> Int
  nfib n | n <= 1    = 1
         | otherwise = x `par` (y `pseq` x + y)
                       where x = nfib (n-1)
                             y = nfib (n-2)

The annotation overhead is moderate compared to the sequential version: just one par and one pseq are added; the remaining code stays the same.
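A common refinement of this example (a sketch, not part of the text above) is to add a granularity threshold so that small subproblems are evaluated sequentially rather than sparked; par and pseq are exported by Control.Parallel in the parallel package, and the threshold value here is chosen arbitrarily.

```haskell
import Control.Parallel (par, pseq)

-- nfib with a cut-off: below the threshold we fall back to the
-- sequential recursion, avoiding sparks whose work is too small to
-- outweigh the scheduling overhead.
nfib :: Int -> Int
nfib n | n <= 1    = 1
       | n < 20    = nfib (n-1) + nfib (n-2)    -- too small to spark
       | otherwise = x `par` (y `pseq` x + y)
  where x = nfib (n-1)
        y = nfib (n-2)
```

Compile with -threaded and run with +RTS -N to use multiple cores; the value computed is unchanged.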

Eval Monad

Beyond par and pseq, there is a more sophisticated device for controlling evaluation order: the Eval monad. It is no more than a strict identity monad; the strictness of >>= is what forces evaluation.

  data Eval a = Done { runEval :: a }

  instance Functor Eval where           -- Functor and Applicative
    fmap f (Done x) = Done (f x)        -- instances are required by
  instance Applicative Eval where       -- modern GHC
    pure = Done
    Done f <*> Done x = Done (f x)
  instance Monad Eval where
    Done x >>= k = k x

The Eval monad provides a way to lift the par and pseq primitives to monadic identity functions rpar and rseq respectively.

  rseq, rpar :: a -> Eval a
  rseq x = x `pseq` return x
  rpar x = x `par` return x

Rewriting the Fibonacci example using the Eval monad exposes the parallelism more clearly.

  nfib :: Int -> Int
  nfib n | n <= 1    = 1
         | otherwise = runEval $ do
                         x <- rpar (nfib (n-1))
                         y <- rseq (nfib (n-2))
                         return (x + y)
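For completeness, here is a self-contained version of this example; runEval, rpar and rseq are taken from Control.Parallel.Strategies in the parallel package, which exports the same interface as the definitions above.

```haskell
import Control.Parallel.Strategies (runEval, rpar, rseq)

nfib :: Int -> Int
nfib n | n <= 1    = 1
       | otherwise = runEval $ do
           x <- rpar (nfib (n-1))   -- spark the larger subproblem
           y <- rseq (nfib (n-2))   -- evaluate the smaller one now
           return (x + y)
```

As before, compile with -threaded and run with +RTS -N to use multiple cores.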


Evaluation Strategies

The idea that parallel coordination should be expressed by monadic identity functions like rpar and rseq can be generalised, giving rise to evaluation strategies: functions of type a -> Eval a executed purely for achieving coordination effects.

  type Strategy a = a -> Eval a

Strategies provide a clean separation between coordination and computation via the using construct, which applies a strategy to an expression; see the definition of parMap below for an example. The driving philosophy is that it should be possible to understand the computation specified by an expression without considering its coordination (i.e. by ignoring all strategy applications).

  using :: a -> Strategy a -> a
  e `using` strat = runEval (strat e)
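To illustrate the separation (a sketch using the exports of Control.Parallel.Strategies from the parallel package), the computation below is a plain map; which strategy coordinates its evaluation is left to the caller, and swapping strategies never changes the value.

```haskell
import Control.Parallel.Strategies (Strategy, using, rseq, rdeepseq)

-- The computation is `map (*2) [1..5]`; the strategy parameter only
-- controls how, and how far, the result is evaluated.
doubled :: Strategy [Int] -> [Int]
doubled strat = map (*2) [1..5] `using` strat
```

doubled rseq and doubled rdeepseq denote the same list; rdeepseq (full evaluation to normal form) is also exported by the library.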

Combining Strategies

Because strategies are simply functions, they can be passed as parameters or combined using standard language constructs. Strategy composition is expressed by dot; the operators dot and using behave like function composition and function application, respectively.

  dot :: Strategy a -> Strategy a -> Strategy a
  strat2 `dot` strat1 = strat2 . runEval . strat1

A simple higher-order strategy is evalList, which applies its strategy argument to all elements of a list, in sequence.

  evalList :: Strategy a -> Strategy [a]
  evalList strat []     = return []
  evalList strat (x:xs) = do x' <- strat x
                             xs' <- evalList strat xs
                             return (x':xs')
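As a small sketch (using the library's evalList from Control.Parallel.Strategies rather than the definition above), applying evalList rseq forces every element of a list to weak head normal form, one after another:

```haskell
import Control.Parallel.Strategies (evalList, rseq, using)

-- Sequentially force each element of the mapped list to WHNF.
forced :: [Int]
forced = map (*2) [1..5] `using` evalList rseq
```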

Finally, the higher-order strategy parList, which applies its argument to all elements of a list in parallel, is derived from evalList by composing the strategy argument with rpar.

  parList :: Strategy a -> Strategy [a]
  parList strat = evalList (rpar `dot` strat)
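Used through using, parList sparks the evaluation of every list element (a sketch; parList and rdeepseq as exported by Control.Parallel.Strategies):

```haskell
import Control.Parallel.Strategies (parList, rdeepseq, using)

-- Each element is sparked and fully evaluated; rdeepseq requires an
-- NFData instance, which Int has.
squares :: [Int]
squares = map (^2) [1..8 :: Int] `using` parList rdeepseq
```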


Skeletons

Skeletons are parametrized pieces of code that coordinate the parallel computation of their parameters, i.e. skeletons are simply higher-order functions implementing patterns of parallelism, thereby separating algorithm from strategy. Some skeletons can be conveniently implemented by combining the algorithm with an appropriate strategy for the underlying data structure. For instance, a parallel map skeleton parMap can be written using parList, resulting in code that cleanly separates the algorithm map f xs from the strategy parList strat.

  parMap :: Strategy b -> (a -> b) -> [a] -> [b]
  parMap strat f xs = map f xs `using` parList strat
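Control.Parallel.Strategies exports a parMap with this same shape; a small usage sketch:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Apply the function to every element in parallel, fully evaluating
-- each result.
results :: [Int]
results = parMap rdeepseq (\x -> x * x + 1) [1..6]
```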