12:00 - 18:00

Cairn Auditorium, PG G.01, Postgraduate Centre (detailed directions below)

Heriot-Watt University

For any dietary requirements, please email Rob Stewart (R.Stewart@hw.ac.uk) by Friday 11 May.

In this talk, I will report on several experiments with
large-scale enumerations in enumerative and algebraic
combinatorics.

I'll describe a methodology used to achieve large speedups in several enumeration problems. In many combinatorial structures (permutations, partitions, monomials, Young tableaux), the data can be encoded as a short sequence of small integers, which can often be handled efficiently by a creative use of processor vector instructions. Through the challenging example of numerical monoids, I will then report on how Cilk Plus allows for an extremely fast parallelisation of the enumeration: we have been able to enumerate sets with more than 10^15 elements on a single multicore machine.
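As a loose sketch of the encoding idea (illustrative Python, not the speaker's code): a permutation on up to 16 points fits in 16 bytes, so composing two permutations is a byte gather that SIMD hardware performs in a single shuffle instruction (e.g. PSHUFB on x86). Pure Python is shown here for clarity:

```python
# Illustrative sketch (not the speaker's code): combinatorial objects
# such as permutations fit in a handful of bytes, so a whole object can
# be processed at once by a vector shuffle instruction; here the same
# operation is spelled out in plain Python.
from array import array

def compose(p, q):
    """Compose permutations in compact byte encoding: (p . q)(i) = p[q[i]]."""
    return array('B', (p[i] for i in q))

identity = array('B', range(8))
rot = array('B', [(i + 1) % 8 for i in range(8)])  # i -> i+1 mod 8

assert compose(identity, rot) == rot               # identity is neutral
assert list(compose(rot, rot)) == [2, 3, 4, 5, 6, 7, 0, 1]
```

The same compact small-integer encoding applies to partitions and Young tableaux, which is what makes vectorisation pay off across these structures.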

Nowadays, the computing landscape is becoming increasingly
heterogeneous, and this trend shows no signs of turning
around. In particular, hardware is becoming more and more
specialized and exhibits different forms of parallelism. For
performance-critical codes it is indispensable to address
hardware-specific peculiarities. Because of the halting
problem, however, it is unrealistic to assume that a program
implemented in a general-purpose programming language can be
fully automatically compiled to such specialized hardware
while still delivering peak performance.

In this talk, we present AnyDSL, a framework for embedding a so-called domain-specific language (DSL) into a host language. On the one hand, a DSL offers the application developer a convenient interface; on the other hand, a DSL can perform domain-specific optimizations and effectively map DSL constructs to various architectures. Implementing a DSL usually requires writing or modifying a compiler. With AnyDSL, however, the DSL constructs are implemented directly in the host language, while a partial evaluator removes any abstractions that are required in the implementation of the DSL.
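A loose Python analogy of the partial-evaluation idea (AnyDSL's actual host language is Impala; the names here are invented): a DSL construct is an ordinary function, and specialising it on its statically known argument leaves a residual program with no abstraction overhead.

```python
# Invented example, not AnyDSL code: contrast a generic interpreter,
# which pays for its abstraction at every call, with a partial
# evaluator that removes the abstraction once the static input is known.

def power_interp(x, n):
    """Generic interpreter: loops over n at every call."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def power_specialise(n):
    """Partial evaluation: unroll the loop for a fixed n, yielding
    residual straight-line code with no loop left."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return eval("lambda x: " + body)

cube = power_specialise(3)          # residual program: lambda x: x * x * x
assert cube(2) == power_interp(2, 3) == 8
```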

Process calculi based on logic, such as πDILL and CP, provide a
foundation for deadlock-free concurrent programming. However, in
previous work, there has been a mismatch between the rules for
constructing proofs and the term constructors of the
π-calculus. We introduce Hypersequent Classical Processes
(HCP), which addresses this mismatch by using hypersequents
(collections of sequents) to register parallelism in the typing
judgements. We prove that HCP enjoys deadlock-freedom and a
series of properties that relate it back to CP.

I will talk about Quantitative Type Theory, a Type Theory that
records usage information for each variable, refining a system by
McBride. The additional usage information means that it is possible to
assign a computational meaning to each term of the theory that is
appropriately "resource sensitive". Practically, this means that we
can compile type theory to environments where resources are
scarce. Formally, it is a realisability semantics over a variant of
Linear Combinatory Algebras. The semantics is defined in terms of
Quantitative Categories with Families, a novel extension of Categories
with Families for modelling resource sensitive type theories.
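To give a flavour of the usage accounting (illustrative notation in the style of such quantitative systems, not necessarily the talk's exact rules): every context entry carries a usage annotation, and the variable rule demands exactly one use of the variable while scaling the rest of the context by zero:

\[
\frac{}{\;0\Gamma,\ x :_{1} A,\ 0\Gamma' \;\vdash\; x :_{1} A\;}\ (\textsc{Var})
\]

Here $0\Gamma$ multiplies every usage annotation in $\Gamma$ by $0$, recording that those variables are not consumed by this term.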

First-order resolution has been used for type inference for
many years, including in Hindley-Milner type inference,
type-classes, and constrained data types. Dependent types are
a new trend in functional languages. In this paper, we show
that proof-relevant first-order resolution can play an
important role in automating type inference and term synthesis
for dependently typed languages. We propose a calculus that
translates type inference and term synthesis problems in a
dependently typed language to a logic program and a goal in
the proof-relevant first-order Horn clause logic. The computed
answer substitution and proof term then provide a solution to
the given type inference and term synthesis problem.
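As a toy illustration of this pipeline (hypothetical Python with made-up clause names, not the paper's calculus): typing rules for Peano numerals become Horn clauses, and backward chaining with unification returns both an answer substitution and a proof term recording which clauses were used.

```python
# Toy proof-relevant resolution, not the paper's calculus: solving a
# typing goal yields an answer substitution and a proof term.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, k):
    """Freshen clause variables for each use."""
    if is_var(t):
        return t + "#" + str(k)
    if isinstance(t, tuple):
        return (t[0],) + tuple(rename(x, k) for x in t[1:])
    return t

# Typing rules for Peano numerals as Horn clauses: (name, head, body).
CLAUSES = [
    ("t-zero", ("has_type", "zero", "nat"), []),
    ("t-succ", ("has_type", ("succ", "?n"), "nat"),
               [("has_type", "?n", "nat")]),
]

def solve(goal, s, fresh=[0]):
    for name, head, body in CLAUSES:
        fresh[0] += 1
        k = fresh[0]
        s2 = unify(goal, rename(head, k), dict(s))
        if s2 is None:
            continue
        proofs = []
        for subgoal in (rename(g, k) for g in body):
            result = solve(subgoal, s2)
            if result is None:
                break
            s2, p = result
            proofs.append(p)
        else:
            return s2, (name, *proofs)
    return None

# Infer the type of succ (succ zero): the substitution answers ?T = nat
# and the proof term doubles as the synthesised typing derivation.
subst, proof = solve(("has_type", ("succ", ("succ", "zero")), "?T"), {})
assert walk("?T", subst) == "nat"
assert proof == ("t-succ", ("t-succ", ("t-zero",)))
```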

Aspect-oriented programming is an interesting paradigm which
allows a separation of cross-cutting concerns from a problem
domain. Standard examples of cross-cutting concerns are things
like logging or debugging code, which can be separated out
from core programming logic. We produce an aspect orientation
library — “ASP” — for Python 2, and demonstrate the mechanisms
which help it to work — and then demonstrate that variance can
be treated as a cross-cutting concern, where alterations to
processes can be applied using aspects. We demonstrate that,
using ASP’s mechanisms, process fuzzing libraries can be built
which apply random process mutation as a cross-cutting
concern. We show that this can be used for accurate modelling
of socio-technical behaviour, and then demonstrate some more
advanced process fuzzing aspects which can be used for the
implementation of things like genetic programming.
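A minimal sketch of the weaving mechanism in present-day Python (illustrative only, not ASP's actual API, which targets Python 2): advice is wrapped around a join point so that logging stays separate from the core program logic.

```python
# Generic aspect weaving via a decorator; not the talk's ASP library.
import functools

def advice(before=None, after=None):
    """Weave optional before/after advice around a function (join point)."""
    def weave(fn):
        @functools.wraps(fn)
        def woven(*args, **kwargs):
            if before:
                before(fn.__name__, args)
            result = fn(*args, **kwargs)
            if after:
                result = after(fn.__name__, result)
            return result
        return woven
    return weave

log = []

@advice(before=lambda name, args: log.append((name, args)))
def step(x):
    return x + 1              # the core logic knows nothing about logging

assert step(41) == 42
assert log == [("step", (41,))]   # the cross-cutting concern was woven in
```

An after-advice that randomly perturbs the result would turn the same mechanism into a process fuzzer, in the spirit of the talk.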

Russell's paradox is a famous inconsistency in naive set
theory: there is a set R such that R is a member of
itself if and only if it is not a member of itself. Three
solutions to this problem are ZF set theory, higher-order
logic, and Quine's NF.

I will motivate and describe Quine's NF, which is a simple system to define and which depends on a beautiful and mysterious "stratification condition". I will give an accessible account of this stratification condition, embed it into higher-order logic to help make sense of it, and then approach NF from the point of view of term-rewriting to note that it has some nice properties.
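To give a taste of the condition (standard NF material, not specific to the talk): a formula is stratified when each variable can be assigned an integer level such that $x = y$ forces $\mathrm{level}(x) = \mathrm{level}(y)$ and $x \in y$ forces $\mathrm{level}(y) = \mathrm{level}(x) + 1$. Russell's comprehension is then ruled out, since

\[
R = \{\, x \mid x \notin x \,\}
\quad\text{would need}\quad
\mathrm{level}(x) = \mathrm{level}(x) + 1,
\]

which no level assignment satisfies, so NF never forms the paradoxical set.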

The interested reader can read a bit more in a paper just published in LMCS:

http://www.gabbay.org.uk/papers.html#lanssc

https://arxiv.org/abs/1705.07767

The language of Stratified Sets is confluent and strongly normalising.

Alternatively, take the 25 Lothian bus and enjoy the walk over The Loch on campus.

This map shows the location of the Postgraduate Centre (SPLS location):

This is the Postgraduate Centre: