In another common paradigm, a process consumes some data structures produced by another process. In a compiler, for example, an optimising phase might consume the parse-tree produced by the parser. The data structure can be thought of as a buffer that the producer fills and the consumer empties.
For simplicity, we will assume that the buffer is represented by a list, and consider just a bounded or n-place buffer. There are many other possible ways to express producer/consumer parallelism; for example, to improve granularity, the producer could compute the next n-element "chunk" of the list rather than just a single value.
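The chunking idea can be sketched as follows. The `chunk` helper below is hypothetical, not part of the strategy library described here; it assumes n > 0, and a producer could then be sparked once per chunk rather than once per element.

```haskell
-- Hypothetical helper: split a list into chunks of (at most) n
-- elements, so that a producer can be sparked once per chunk
-- rather than once per element. Assumes n > 0.
chunk :: Int -> [a] -> [[a]]
chunk _ [] = []
chunk n xs = ys : chunk n zs
  where (ys, zs) = splitAt n xs

main :: IO ()
main = print (chunk 3 [1 .. 7 :: Int])  -- [[1,2,3],[4,5,6],[7]]
```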
The dynamic behaviour of an n-place list buffer is as follows. Initially, the first n list elements are eagerly constructed, and then, whenever the head of the buffer-list is demanded, the nth element is sparked. In effect the producer speculatively assumes that the next n elements in the list will be used in the computation. This assumption introduces parallelism because, if there is a free processor, a thread can produce the nth element, while the consumer is consuming the first.
Applying the following parBuffer n s function to any list converts it into an n-place buffer that applies strategy s to each list element. Initially start sparks the first n elements and returns a (shared) list from the nth element onwards. The return function is the identity on its first argument, and its dynamic behaviour is to spark the nth element (the head of the start list) whenever the head of the list is demanded.
parBuffer :: Int -> Strategy a -> [a] -> [a]
parBuffer n s xs =
  return xs (start n xs)
  where
    return (x:xs) (y:ys) = (x:return xs ys) `sparking` s y
    return xs     []     = xs
    start n []     = []
    start 0 ys     = ys
    start n (y:ys) = start (n-1) ys `sparking` s y
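The definition above can be tried out as a self-contained sketch. The combinators below are minimal stand-ins, not the full strategy library: `par` comes from GHC.Conc (in base), `rwhnf` here is a simple weak-head-normal-form strategy, and `sparking` follows the paper's reading of e `sparking` s as "spark s, return e". One extra clause is added to return for totality; it is unreachable for the suffix that start produces.

```haskell
import GHC.Conc (par)

-- Minimal stand-ins for the strategy combinators assumed by parBuffer.
type Strategy a = a -> ()

rwhnf :: Strategy a              -- reduce to weak head normal form
rwhnf x = x `seq` ()

sparking :: a -> () -> a         -- e `sparking` s sparks s and returns e
e `sparking` s = s `par` e

parBuffer :: Int -> Strategy a -> [a] -> [a]
parBuffer n s xs =
  ret xs (start n xs)
  where
    -- ret walks the result list; each demand on its head sparks
    -- the element n places ahead (the head of the start list).
    ret (x:xs') (y:ys) = (x : ret xs' ys) `sparking` s y
    ret xs'     []     = xs'
    ret []      _      = []      -- unreachable here; included for totality
    -- start sparks the first n elements and returns the list
    -- from the nth element onwards.
    start _ []     = []
    start 0 ys     = ys
    start m (y:ys) = start (m - 1) ys `sparking` s y

main :: IO ()
main = print (sum (parBuffer 4 rwhnf (map (^ 2) [1 .. 10 :: Int])))
-- 385
```

Note that parBuffer returns its input list unchanged; the sparks only change when elements are evaluated, so the result is deterministic whether or not any spark is taken up by a free processor.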