Workshop on Real World Domain Specific Languages

We're pleased to be hosting a one-day workshop on real-world Domain Specific Languages at Heriot-Watt University, Edinburgh, in room PG202 of the Postgraduate Centre on Thursday 1st May, 2014. The workshop is sponsored by the EPSRC Rathlin project and SICSA. Attendance is free, and refreshments and lunch will be provided. If you'd like to attend, please email Rob Stewart at R.Stewart@hw.ac.uk as soon as possible.

Schedule

  • 10:30 Refreshments.

  • 11:00 Greg Michaelson (Heriot-Watt), How domain specific are DSLs? (slides)
    DSLs are supposed to capture abstractions appropriate for well characterised classes of problems. But if a DSL is Turing complete (TC), or embedded in a TC language, or its D maps to other Ds, then is it really DS? Perhaps DSness is a pragmatic rather than a semantic property?

  • 11:20 Peter Boyle (University of Edinburgh), BAGEL: a retargettable domain specific compiler for QCD.
    BAGEL is a retargettable domain specific compiler for QCD, and accompanies the BFM library of sparse matrix solvers for the Dirac equation. Primitive operations target integer, floating point, and complex vector instructions. The Alpha, Sparc, Power and Knights Corner instruction sets are supported. The compiler has been used as a co-design vehicle for the BlueGene/Q computer, and non-trivial kernels achieve up to 70% of peak performance. Overall QCD application performance as high as 35% of peak has been obtained, and weak scaling has been demonstrated up to 7.2 Pflop/s and 1.6 million cores. The code was a Gordon Bell finalist at SC13.

  • 11:55 Jeremy Singer (Glasgow University), AspectJ for Runtime Behaviour Modification of Java Libraries. (slides)
    AspectJ is a tool for aspect-oriented programming in Java. Aspects allow programmers to inject behaviour into code based on syntactic pattern matching on constructs from the original source program. In this talk, I will review the features of AspectJ - particularly its flexibility (good) and its user-friendliness (bad). I will report on a new project at Glasgow about modifying the behaviour of the Java Collections framework to enable graceful degradation of application performance when system memory is exhausted.

  • 12:30 Lunch.

  • 13:00 Paul Cockshott (Glasgow University), the Intermediate Language for Code Generation (ILCG). (slides)
    ILCG is a machine description language in the same general field as ISP, developed by Bell and Newell. It is similar in that it is used to describe the instruction set of a machine and allows the user to declare physical resources such as registers, stacks and memory banks. It differs in that it concerns itself not with the binary encoding of the instructions but with the assembly-level encoding. It also differs in the use to which it is put: unlike ISP and other register transfer languages, its purpose is not to allow machine designers to run simulations of possible machines, but the automatic construction of compiler back ends. The talk will introduce the main features of the language and how to set about defining a machine in it. It will address issues such as how to optimise the use of addressing modes and instructions in the back end of a compiler, and how to automate the vectorisation of loops using ILCG.

  • 13:35 Alan Gray (University of Edinburgh), targetDP: An Abstraction of Lattice Based Parallelism with Portable Performance (slides)
    In order to achieve high performance on modern computational systems, it is vital to map algorithmic parallelism efficiently onto that inherent in the hardware. From an application developer's perspective, it is also crucially important that maintainability and portability be sustained across the range of leading-edge hardware platforms. In this talk I will present targetDP, a lightweight programming layer that abstracts the data parallelism inherent in applications that employ structured grids, such that the same source code can optimally target the thread level parallelism (TLP) and instruction level parallelism (ILP) of either traditional SIMD multi-core CPUs or GPU-accelerated platforms. Implemented through C preprocessor macros and library functions, targetDP can be added to existing applications incrementally, and can be combined with higher-level parallelism paradigms such as MPI. I will present CPU and GPU performance results for a benchmark taken from the lattice Boltzmann application that motivated this work. These demonstrate not only performance portability, but also the clear optimisation resulting from the intelligent exposure of ILP.

  • 14:10 Allin Cottrell (Wake Forest) and Riccardo "Jack" Lucchetti (Marche), Hansl: a DSL for econometrics. (slides)
    We give an account of hansl, the scripting language for the open-source econometrics program gretl (gretl.sf.net). Hansl has a good deal in common with matrix-oriented languages such as Matlab but also presents features specific to the econometric domain, in particular the object known as a dataset. We make the point that a "domain" can be characterized not only by the nature of the problems it comprises but also by a social aspect, namely the skill-set and expectations of the typical user of software in the domain. Some of hansl's features must be understood in this context. Since econometric computations can be quite demanding (e.g. Monte Carlo analysis of the properties of statistical estimators) one focus of current work in gretl development is on parallelization. We discuss our strategies for making parallelization (at various levels) as transparent as possible to the user of hansl.

  • 14:45 Break.

  • 15:15 Rob Stewart (Heriot-Watt) and Deepayan Bhowmik (Heriot-Watt), Rathlin Image Processing Language (RIPL). (slides)
    The analysis of human activity and behaviour from video analytics has witnessed major growth in applications including surveillance, vehicle autonomy, marketing, entertainment, intelligent domiciles and medical diagnosis. In many application domains, there is a need for more intelligent image acquisition and real-time processing at the camera source. FPGAs are a good fit for remote image processing, though current FPGA programming models such as HDLs, being suited to hardware designers, are unfamiliar to software developers. New programming tools are needed if FPGAs are to fulfil their potential in mainstream computing.

    We are designing and implementing IPPro, a novel FPGA-based processor for image processing, with a supporting programming environment for a domain specific language (DSL) called RIPL (Rathlin Image Processing Language). The design of RIPL is inspired by existing computer vision libraries and DSLs, and by image algebra constructs. The DSL comes in two flavours: one designed as a small imperative language familiar to many programmers, and the other to express image algebra notation directly. DSLs are often criticised for lacking tool support. Responding to this charge, provisional RIPL tool support will be demonstrated: an in-progress RIPL interpreter will be presented, and the design of the RIPL to IPPro compiler will be described.

  • 15:50 Patrick Maier (Glasgow University), The Design of the AJITPar Parallel Coordination Language. (slides)
    A key element of the multicore software crisis is a lack of abstraction: most parallel code mixes coordination and computation, and assumptions about the architecture are often hard-coded, tying the code to a specific architecture. For problems with regular parallelism, portable parallel performance may still be achieved by static compilation techniques that specialise the code for the given architecture. However, the more common case of irregular parallelism can't be tackled statically; instead dynamic specialisation and dynamic scheduling are required. The AJITPar (Adaptive Just-In-Time Parallelisation) project aims to achieve portable parallel performance by combining dynamic trace-based just-in-time compilation of a high-level parallel functional language with dynamic demand-driven scheduling of parallelism. This will involve estimating the granularity of parallel tasks by online profiling and static analysis, and (in later project stages) adapting granularity by online code transformations. The starting point of AJITPar is lambdachine, a recently developed sequential Just-In-Time (JIT) compiler for Haskell. To introduce parallelism, we design a low-level domain-specific language (DSL) for task-parallel computations. Specifically, this DSL should deal with task creation, communication between and synchronisation of tasks, and serialisation of data (including tasks).

    The design goals for this DSL are as follows:
    1. It should be expressive enough to enable building higher-level abstractions, like algorithmic skeletons.
    2. It should be flexible enough to express a range of benchmarks, from regular matrix bashing to highly irregular symbolic computations.
    3. It should support an equational theory of program transformations, to support online transformation in later stages of AJITPar.
    4. Finally, it should be cheap to implement on top of the single-threaded lambdachine runtime system.

  • 16:30 End.