Beyond Conditional LM: Neural Network Language Generation for Dialogue Systems
Tsung-Hsien (Shawn) Wen
University of Cambridge
2:15pm-3:15pm, 24 May 2016
EM G.44
Abstract
Conditional Language Modelling (LM) has shown promising results in many NLP research areas, such as machine translation, image caption generation, and natural language generation. The basic idea is to condition a language model on an external, embedding-like representation of the input (a source sentence, an image, or a meaning representation) and generate words one by one.
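
To make the idea concrete, here is a minimal sketch of such a decoder, written in PyTorch (the talk does not prescribe a framework): an ordinary LSTM whose input at every step is the previous word's embedding concatenated with a fixed conditioning vector. All class names, signatures, and dimensions below are illustrative assumptions, not taken from the speaker's work.

# Minimal conditional LM sketch (assumed PyTorch; names and sizes invented).
import torch
import torch.nn as nn

class ConditionalLM(nn.Module):
    def __init__(self, vocab_size, embed_dim, cond_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Each step's input = previous word embedding + conditioning vector.
        self.cell = nn.LSTMCell(embed_dim + cond_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, cond):
        # tokens: (batch, seq) word ids; cond: (batch, cond_dim) fixed vector.
        batch, seq = tokens.shape
        h = torch.zeros(batch, self.cell.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(seq):
            # Condition every generation step on the same external embedding.
            x = torch.cat([self.embed(tokens[:, t]), cond], dim=-1)
            h, c = self.cell(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (batch, seq, vocab) next-word scores

Swapping in a different conditioning vector (an image encoding, a source-sentence encoding, a dialogue act) is what specialises this one template to captioning, translation, or generation.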
In the first part of this talk, I will present the Semantically Conditioned LSTM (SC-LSTM), which generates sentences from a given dialogue act representation, with a particular focus on its domain scalability. In the second part of the talk we will turn our attention to our latest work, a neural network-based, text-in text-out, end-to-end trainable dialogue system. I will show that the entire dialogue problem can be phrased as conditional language generation, in which belief states, database search results, and the user query embedding are combined to form the conditioning vector that drives the generation process.
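
As a hedged illustration of this framing (again in assumed PyTorch; the dimensions and the three-way split of the conditioning vector are invented for the example), the decoder sketched above can be reused unchanged: only the conditioning vector changes, now built by concatenating the belief state, an encoding of the database search result, and the user query embedding.

# Dialogue as conditional generation (sketch; all sizes are assumptions).
import torch

belief_state = torch.rand(1, 32)   # distribution over slot-value pairs
db_result    = torch.rand(1, 8)    # encoded database search result
query_embed  = torch.rand(1, 64)   # encoding of the user's utterance

# The conditioning vector is just the concatenation of the three signals.
cond = torch.cat([belief_state, db_result, query_embed], dim=-1)  # (1, 104)

lm = ConditionalLM(vocab_size=1000, embed_dim=50,
                   cond_dim=cond.size(-1), hidden_dim=128)
prev_tokens = torch.tensor([[1, 5, 9]])   # e.g. <bos> plus generated prefix
logits = lm(prev_tokens, cond)            # per-step next-word distributions
print(logits.shape)                       # torch.Size([1, 3, 1000])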
Bio
Tsung-Hsien Wen (溫宗憲) is a PhD student in the Dialogue Systems Group at the University of Cambridge, under the supervision of Professor Steve Young. His research focuses on using deep learning and distributional semantics to bring domain scalability to statistical dialogue systems. The application areas he is particularly interested in are statistical natural language generation (NLG), spoken language understanding (SLU), and end-to-end dialogue modelling. His PhD is supported by Toshiba Research Europe Ltd, Cambridge Research Laboratory. He is currently a member of Darwin College.
Webpage:
http://mi.eng.cam.ac.uk/~thw28/
Email:
shawnwun@gmail.com