Speakers:
| | Actuarial Mathematics & Statistics | Computer Science | Mathematics |
| Staff | Matthias Fahrenwaldt | Verena Rieser | Des Johnston |
| PhD Student | Bemsibom Toh | Eli Sheppard | Bernard Bainson |
Timetable:
Time | Speaker | Title | Abstract
14:15 | Verena Rieser
How to make things talk? From Statistical Dialogue Systems to Social ChatBots
According to Microsoft CEO Satya Nadella, "Bots are the new apps" because they "fundamentally revolutionize how computing is experienced by everybody." At the Interaction Lab we build natural language interfaces that enable users to converse naturally with machines using text or voice. In this talk, I will compare task-based dialogue systems with social chatbots. I will briefly summarise our work on optimising task-based systems using Reinforcement Learning. I will then talk about the Amazon Alexa Challenge, in which our team is currently competing. The aim of the challenge is to build a social bot that can converse naturally over a variety of topics; note that this is considered an AI-complete problem. I will review relevant approaches and discuss their limitations.
14:45 | Bernard Bainson
Khovanov's link invariant
Khovanov homology is a knot invariant that first appeared in Khovanov's 1999 paper "A categorification of the Jones polynomial." In this talk I will introduce some basic concepts of knot theory and discuss the Jones polynomial via the Kauffman bracket, illustrating the construction with some basic examples. I will then construct Khovanov homology in a way analogous to the Kauffman bracket.
15:15 | Matthias Fahrenwaldt
Some nonlinear differential equations in mathematical finance
In this talk we present recent examples of nonlinear (partial) differential equations arising in finance and economics. The first example treats the pricing of financial derivatives in illiquid markets, where the derivative price can be characterised by a semilinear diffusion equation. The PDE, whose quadratic error term reflects the lack of liquidity in the market, has a weak solution, and one can study the asymptotics as the market becomes perfectly liquid. The second example addresses optimal consumption and investment: consumer-investors maximise a ("global") forward-looking, non-separable expected utility. This leads to a nonlinear Bellman equation and a corresponding verification theorem. If time permits, I will also present a third example covering the relatively new topic of "cyber insurance". We model the spread of a cyber threat (e.g., a computer virus) along a graph and derive mean-field approximations for the moments of the infection probabilities in the form of a system of nonlinear ODEs. This allows the pricing of insurance contracts.
15:45 | Snacks!
16:15 | Bemsibom Toh
The Theory of Large Deviations
The theory of large deviations deals with estimating the probabilities of rare events. It has wide applications across mathematics, for example in queueing theory, insurance mathematics and statistical mechanics. This talk will introduce the theory through motivating examples and present some of the basic tools used to estimate such probabilities, as well as a few applications.
16:45 | Des Johnston
First order phase transitions - PhD students aren't always wrong
In general there is only one kind of finite-size scaling at a first order phase transition (one with a latent heat), rather than the various universality classes found at second and higher order transitions. Because first order transitions are also harder to simulate, they have been rather overlooked in numerical work, despite being more prevalent in nature. We discuss some unexpected scaling results in an Ising spin model with a first order transition, obtained by a PhD student as a warm-up project for his main research topic. The danger of assuming that such results from a PhD student are necessarily wrong is highlighted.
17:15 | Eli Sheppard
Using natural language to recognise actions
This project uses deep learning techniques to ground visual information in natural language. The concepts being learnt include object names, actions, and actor-object relationships. The work builds on recent advances in image caption generation and unsegmented sequence classification.
Plus a selection of food and drink afterwards.