SemDial 2014 - DialWatt

DialWatt

THE 18TH WORKSHOP ON THE SEMANTICS AND PRAGMATICS OF DIALOGUE

The conference programme is now available.

The official conference proceedings are now available.

The SemDial series of workshops (http://www.illc.uva.nl/semdial) brings together researchers working on the semantics and pragmatics of dialogue in fields such as artificial intelligence, computational linguistics, formal semantics/pragmatics, philosophy, psychology, and neuroscience.


In 2014 the workshop will be hosted by the Interaction Lab (https://sites.google.com/site/hwinteractionlab/), Heriot-Watt University. DialWatt will be co-located with RO-MAN 2014 (http://rehabilitationrobotics.net/ro-man14/).

Programme

DialWatt - SemDial 2014

EDINBURGH, SEPTEMBER 1-3, 2014

Location



Accommodation

Note that there is no accommodation available close to the workshop location at Heriot-Watt, so we suggest that you stay in town and take a bus to the conference. The following are some hotel suggestions:

Best Western PLUS Bruntsfield.

We have reserved a block of rooms at the Best Western PLUS Bruntsfield Hotel on a first-come, first-served basis. The price is £101 per night single occupancy, including breakfast. Note that this special rate cannot be booked online; instead, bookings should be made by telephone (+44 (0)131 229 1393) or by email to reservations@thebruntsfield.co.uk. Please quote the block code WATT0109.

You can travel from the Best Western PLUS Bruntsfield to the conference venue at Heriot-Watt using the Lothian Buses bus #45.

Other suggested hotels in Edinburgh

Detailed below are the hotels that the Heriot-Watt School of MACS regularly uses for visitors to Edinburgh. This list is provided for information only. Any workshop attendee wishing to book one of these hotels should mention that they are attending a MACS/HWU workshop; they may then be offered a reduced rate or a better room at the standard rate, although this is entirely at the discretion of the hotel. The list includes an indication of the MACS/HWU rate. Please note that you will want to stay near a stop on Lothian Buses route 25, 34 or 45.

Hotels:

Other options

If you want to do your own search for hotels, we suggest that you stay near Haymarket rail station, from which you can easily access the conference venue at Heriot-Watt using the #25 Lothian bus in under half an hour. As a starting point, you can use this Google map of “Hotels near Haymarket”: http://goo.gl/3DZpTd

Hotels in the city centre (Princes Street) would also be an option, although the bus ride is a bit longer.

To find us by bus

Lothian Buses services 25, 34 and 45 run between the city centre and the James Watt Centre on campus. All of these routes terminate at the campus, so you can simply stay on the bus until the final stop. Exact fare must be tendered: at present (June 2013) the fare is £1.50 from the city centre on any route, or you can buy a day ticket for £3.50, valid for unlimited travel on any route (cheaper than paying single fares if you make three or more journeys in a day). Day tickets can be purchased online in advance of your visit.

The number 25 runs along Princes Street (by Waverley Station) and also serves Haymarket Station. The journey time to Riccarton is approximately 40 minutes from Waverley Station and 30 minutes from Haymarket. Between 8.00 and 18.30 the 25 runs roughly every 10 minutes.

The 34 also runs between Princes Street (by Waverley Station) and Riccarton Campus. The journey time to campus is approximately 45 minutes from Waverley Station.

The 45 runs from St Andrew Square (close to the Bus Station), via the Mound and Tollcross, to Riccarton campus, Mondays to Fridays only. The journey time to Riccarton is approximately 45 minutes from St Andrew Square and 30 minutes from Tollcross.

If you are coming to campus for an evening event, the X25 express bus is the fastest option. It leaves from Princes Street (by Waverley Station) between 16.10 and 18.15.

Night buses are also available: The N25 leaves from Waverley Steps on Princes Street and runs via Haymarket.

Bus timetables and up-to-date information can be obtained from Lothian Buses (click on 'Find your bus' then 'timetables', or use the 'Choose a timetable' drop-down in the top-right).


ANNOUNCEMENT and CALL FOR PAPERS


SemDial 2014 - DialWatt

THE 18TH WORKSHOP ON THE SEMANTICS AND PRAGMATICS OF DIALOGUE

1-3 September 2014, Edinburgh

http://www.macs.hw.ac.uk/InteractionLab/Semdial/

DialWatt will be the 18th edition of the SemDial workshop series, which aims to bring together researchers working on the semantics and pragmatics of dialogue in fields such as formal semantics and pragmatics, computational linguistics, artificial intelligence, philosophy, psychology, and neuroscience.

In 2014 the workshop will be hosted by the Interaction Lab (https://sites.google.com/site/hwinteractionlab/), Heriot-Watt University.

It will be co-located with RO-MAN (The 23rd IEEE Symposium on Robot and Human Interactive Communication, http://rehabilitationrobotics.net/ro-man14/) and with the world-famous Edinburgh festivals (http://www.edinburghfestivals.co.uk/planning-for-edinburgh-festivals-2014).

INVITED SPEAKERS:

Holly Branigan (University of Edinburgh), Michael Schober (New School for Social Research), Jon Oberlander (University of Edinburgh) and Matthew Purver (Queen Mary, University of London).

IMPORTANT DATES:

ABSTRACT SUBMISSIONS:

We invite abstracts on ongoing projects or system demonstrations on all topics related to the semantics and pragmatics of dialogue, including, but not limited to:

SUBMISSIONS:

Authors should submit an abstract of 2 pages of content (1 additional page is allowed for references). Please note that abstract submission is not anonymous: the authors' names and affiliations should be included in the submission. Formatting instructions and the URL of the submission site are available on the DialWatt website. Abstract submissions will not be refereed, but will be evaluated for relevance only by the chairs. Accepted abstracts will be presented as posters at the workshop.

LONG PAPER SUBMISSIONS (closed)

We invite papers on all topics related to the semantics and pragmatics of dialogue, including, but not limited to:

SUBMISSIONS:

Authors should submit an *anonymous* paper of at most 8 pages of content (up to 2 additional pages are allowed for references). Formatting instructions and the URL of the submission site are available on the DialWatt website.

TECHNICAL PROGRAMME CHAIRS:

Verena Rieser and Philippe Muller.

LOCAL ORGANISATION:

Arash Eshghi (General Chair), Mary Ellen Foster (Local Organiser), Andy Taylor (Web Development).

PROGRAMME COMMITTEE:

Nicholas Asher, Timo Baumann, Luciana Benotti, Nate Blaylock, Holly Branigan, Valeria De Paiva, David DeVault, Arash Eshghi, Raquel Fernández, Victor Ferreira, Kallirroi Georgila, Jonathan Ginzburg, Eleni Gregoromichelaki, Markus Guhe, Pat Healey, Anna Hjalmarsson, Amy Isard, Simon Keizer, Ruth Kempson, Alexander Koller, Pierre Lison, Staffan Larsson, Alex Lascarides, Colin Matheson, Gregory Mills, Chris Potts, Laurent Prévot, Matthew Purver, Hannes Rieser, David Schlangen, Gabriel Skantze, Amanda Stent, Matthew Stone, David Traum, Nigel Ward [further committee members tba]

SEMDIAL BOARD CHAIRS:

Raquel Fernández (University of Amsterdam) and David Schlangen (Bielefeld University). http://www.illc.uva.nl/semdial/

Submissions

We accept both full papers and abstract submissions. Please make sure you use these LaTeX style files or Microsoft Word style files.

Please upload the final version via the EasyChair Submissions Site

Registration

To register, please use the following link:

https://www.eventbrite.co.uk/e/semdial-2014-dialwatt-tickets-11428712587

Contact

For questions regarding submissions, please contact the technical PC chairs:

Verena Rieser & Philippe Muller

For questions concerning registration and participation, please contact the local organisers:

Arash Eshghi or Mary Ellen Foster

Social Programme

Ceilidh

Holly Branigan (Professor of Psychology, University of Edinburgh)

Say as I say: Alignment as a multi-componential phenomenon

Converging evidence from an ever-increasing number of experimental and observational studies suggests that people converge many aspects of their language (and other behaviour) when they interact. What is less clear is why such alignment occurs, and the function that it plays in communication. Discussions of individual instances of alignment have tended to appeal exclusively to one of three explanatory frameworks, focusing on social relationships between interacting agents, strategic maximisation of mutual understanding, or automatic linguistic priming behaviours. Each framework can satisfactorily explain some observed instances of alignment, but appears inadequate to explain others.

I will argue that alignment behaviours are best characterised as multi-componential, such that all three kinds of mechanism may potentially and simultaneously contribute to the occurrence of alignment, with the precise contribution of each depending upon the context and the aspect of language under observation. However, evidence from studies of typically developing children and speakers with Autistic Spectrum Disorder suggests that a tendency to align language may be in some sense ‘wired in’ at a very basic level, and that both the ability to suppress this reflex and the ability to strategically exploit alignment for social or communicative ends may be acquired later and superimposed on this basic and reflexive tendency.

Michael Schober (Professor of Psychology, New School for Social Research)

Dialogue, response quality and mode choice in iPhone surveys

As people increasingly communicate via mobile multimodal devices like iPhones, they are becoming accustomed to choosing and switching between different modes of interaction: speaking and texting, posting broadcast messages to multiple recipients on social media sites, etc. These changes in everyday communication practices create new territory for researchers interested in understanding the dynamics of dialogue.

This talk will describe studies of more than 1200 survey respondents answering questions from major US social surveys, either via voice or via SMS text (native iPhone apps), and with either human or automated interviewers. Because the studies contrast whether the interviewing agent is a person or automated and whether the medium of communication is voice or text, we can isolate the effects of the agent and the medium.

The studies measure completion rates, respondent satisfaction and response quality when respondents could and could not choose a preferred mode of responding. Response quality was measured by examining “survey satisficing” (taking shortcuts when responding, e.g. providing estimated or rounded rather than precise numerical answers, and “straightlining”, i.e. providing the same responses to multiple questions in an undifferentiated way), reports of socially desirable and sensitive behaviors, and requests for clarification.

Turn-taking structure in text vs. voice is, of course, vastly different, with notably longer delays between turns in the asynchronous text modes, and greater reported multi-tasking while texting; and there were some notable differences in texting and talking with human vs. automated interviewers/interviewing systems. But the overall findings are extremely clear: notably greater disclosure of sensitive/embarrassing information in text vs. voice, independent of whether the interviewer is human or automated; and less estimation/rounding in text vs. voice, again independent of whether the interviewer is human or automated.

The opportunity to choose a mode of interviewing led to improved satisfaction and improved response quality, with more respondents choosing text than voice. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. Survey interviews are a very particular kind of dialogue with particular constraints, but they are a useful laboratory for deeper understanding of the dynamics and pragmatics of dialogue.

Jon Oberlander (Professor of Epistemics in the University of Edinburgh)

Talking to animals and talking to things

I will argue that to build the diverse dialogue systems that will help us interact with and through the Internet of Things, we need to draw inspiration from the dizzying variety of modes of human-animal interaction. The Internet of Things (IoT) has been defined as “the set of technologies, systems and methodologies that underpins the emerging new wave of internet-enabled applications based on physical objects and the environment seamlessly integrating into the information network”. Although there is a technical view that the IoT will not require any explicit interaction from humans, it is plausible to assume that we will in fact need to develop appropriate mechanisms to translate, visualise, access and control IoT data. We thus need to develop new means for humans to have ‘words with things’. Some building blocks are already in place.

Back in 2006, Bleecker proposed the ‘blogject’, an object that tracks and traces where it is and where it’s been, has an embedded history of its encounters and experiences, and possesses some form of agency, with an assertive voice within the social web. In the last four years, this vision has been brought closer to reality through significant work on the “social web of things”. But something is missing. The IoT will surely contain a huge variety of things, some with real intelligence and flexibility, and others with only minimal agency; some we will want to talk to directly; others will be too dull to hold a conversation with. Ever since Shneiderman’s advice to the HCI community, we have struggled with the idea that if a system can sustain a multi-step dialogue, it must have human-level intelligence.

So, in developing new ways to interact with the pervasive IoT, we must look beyond human-human interaction for models to guide our designs. Human-pet interaction is an obvious starting point, as in the work of Ljungblad and Holmquist, and recent projects on robot companions have already developed this line of thinking. However, pets represent just one point on the spectrum of human-animal interaction. Animals vary from wild, to feral, to farmed or caged, to working, through to domestic. Their roles include: companions (e.g. pets), providing aid and assistance (e.g. guide dogs), entertainment (e.g. performing dolphins), security (e.g. guard dogs), hunting (trained predators pursuing untrained prey), food (e.g. livestock), and scientific research participants (e.g. fruitflies). If we take into account the types and roles of the animals with which humans already interact, we can take advantage of existing understanding of the breadth of human-animal interaction, and evolve a rich ecosystem of human-thing dialogue systems.

Matthew Purver (Senior Lecturer, Cognitive Science Research Group, Queen Mary, University of London)

Ask Not What Semantics Can Do For Dialogue - Ask What Dialogue Can Do For Semantics

Semantic frameworks and analyses are traditionally judged by sentential properties: e.g. truth conditions, compositionality, entailment. A semantics for dialogue must be consistent not only with these intrinsic properties of sentences, but with extrinsic properties: their distribution, appropriateness or update effects in context.

The bad news, of course, is that this means our analyses and frameworks have to do more, and fulfilling these requirements has been the aim of a great deal of productive and influential research. But the good news is that it also means that dialogue can act as a "meaning observatory", providing us with observable data on what things mean and how people process that meaning -- data which we can use both to inform our analyses and to learn computational models.

This talk will look at a few ways in which we can use aspects of dialogue (phenomena such as self- and other-repair, situation descriptions, and the presence and distribution of appropriate and informative responses) to help us choose, learn or improve models of meaning representation and processing. The talk describes joint work with a number of colleagues, particularly Julian Hough, Arash Eshghi and Jonathan Ginzburg.

SemDial'14 Accepted Long Papers

Aimilios Vourliotakis, Ioannis Efstathiou and Verena Rieser. Detecting Deception in Non-Cooperative Dialogue: A Smarter Adversary Cannot be Fooled That Easily
Angela Nazarian, Elnaz Nouri and David Traum. Initiative Patterns in Dialogue Genres
Barbara Lewandowska-Tomaszczyk. Language-bound Dialogic elements in Computer-Mediated and Face-to-Face Communication
Callum Main, Zhuoran Wang and Verena Rieser. Towards Deep Learning for Dialogue State Tracking Using Restricted Boltzmann Machines and Pretraining
Casey Kennington, Spyros Kousidis and David Schlangen. Multimodal Dialogue Systems with InproTKs and Venice
Chiara Mazzocconi, Sam Green and Ye Tian. Laughter in mother-child interaction: from 12 to 36 months
Florian Hahn, Insa Lawler and Hannes Rieser. First observations on a corpus of multi-modal trialogues
Gesa Schole, Thora Tenbrink, Kenny Coventry and Elena Andonova. Tailoring Object Orientation Descriptions to the Dialogue Context
Gibson Ikoro, Raul Mondragon and Graham White. Disentangling utterances and recovering coherent multi party distinct conversations
Helen Hastie, Marie-Aude Aufaure, Panos Alexopoulos, Hugues Bouchard, Heriberto Cuayahuitl, Nina Dethlefs, Milica Gasic, James Henderson, Oliver Lemon, Xingkun Liu, Peter Mika, Tim Potter, Verena Rieser, Pirros Tsiakoulis, Yves Vanrompay, Boris Villazon-Terrazas, Majid Yazdani, Steve Young and Yanchao Yu. Two Alternative Frameworks for Deploying Spoken Dialogue Systems to Mobile Platforms for Evaluation In the Wild
Ioannis Efstathiou and Oliver Lemon. Learning to manage risks in non-cooperative dialogues
Iwan de Kok, Julian Hough, Cornelia Frank, David Schlangen and Stefan Kopp. Dialogue Structure of Coaching Sessions
Jaroslaw Lelonkiewicz and Chiara Gambi. Common Ground and Joint Utterance Production: Evidence from the Word Chain Task
Joao Cabral, Nick Campbell, Sree Ganesh, Mina Kheirkhah, Emer Gilmartin, Fasih Haider, Eamonn Kenny, Andrew Murphy, Neasa Ní Chiaráin, Thomas Pellegrini and Odei Rey. DEMO - MILLA: A Multimodal Interactive Language Learning Agent
Jonathan Ginzburg, David Schlangen, Ye Tian and Julian Hough. The Disfluency, Exclamation, and Laughter in Dialogue (DUEL) Project
Julian Schlöder and Raquel Fernández. Clarification Requests at the Level of Uptake
Matthias Kerzel, Ozge Alacam, Christopher Habel and Cengiz Acarturk. Producing Verbal Descriptions for Haptic Line-Graph Explorations
Mei Yii Lim, Mary Ellen Foster, Srinivasan Janarthanam, Amol Deshmukh, Helen Hastie and Ruth Aylett. Studying the Effects of Affective Feedback in Embodied Tutors
Nadine Glas and Catherine Pelachaud. Hearer Engagement as a Variable in the Perceived Weight of a Face-Threatening Act
Niels Schuette, John Kelleher and Brian MacNamee. Perception Based Misunderstandings in Human-Computer Dialogues
Nina Dethlefs, Heriberto Cuayahuitl, Helen Hastie, Verena Rieser and Oliver Lemon. Getting to Know Users: Accounting for the Variability in User Ratings
Noortje Venhuizen and Harm Brouwer. PDRT-SANDBOX: An implementation of Projective Discourse Representation Theory
Peter Wallis. User Satisfaction without Task Completion
Robert Grimm and Raquel Fernández. Assessing the Impact of Local Adaptation in Child-Adult Dialogue: A Recurrence-Quantificational Approach
Staffan Larsson, Simon Dobnik and Sebastian Berlin. Effects of Speech Cursor on Visual Distraction in In-vehicle Interaction: Preliminary Results
Ting Han, Spyros Kousidis and David Schlangen. Towards Automatic Understanding of Virtual Pointing in Interaction
Verena Rieser and Amanda Cercas Curry. Towards Generating Route Instructions Under Uncertainty: A Corpus Study
Vivien Mast, Daniel Couto Vale, Zoe Falomir and Mohammed Fazleh Elahi. Referential Grounding for Situated Human-Robot Communication
Verena Rieser, Srinivasan Janarthanam, Andy Taylor, Yanchao Yu and Oliver Lemon. SpeechCity: A Conversational City Guide based on Open Data
Wenshuo Tang, Zhuoran Wang, Verena Rieser and Oliver Lemon. Sample Efficient Learning of Strategic Dialogue Policies
Yasuhiro Katagiri, Katsuya Takanashi, Masato Ishizaki, Mika Enomoto, Yasuharu Den and Shogo Okada. A Multi-issue Negotiation Model of Trust Formation through Concern Alignment in Conversations
Zengtao Jiao, Zhuoran Wang, Guanchun Wang, Hao Tian, Hua Wu and Haifeng Wang. Large-scale Analysis of the Flight Booking Spoken Dialog System in a Commercial Travel Information Mobile App