User interaction is central to the e-circus project. Through interaction with the system, the children become more involved, and higher involvement enhances their empathy with the system and its characters. Furthermore, the interaction should promote collaboration between the children.
Possible forms of interaction are language (written or spoken), gestures, or interaction devices. In the project, two interaction forms will be applied: language and mobile interaction styles. The language system analyses text input, generates text output, and renders the agents' communication as synthesized speech. In the second stage of the project, novel interaction techniques with mobile phones will also be explored.
Language & Speech
The language system has three main tasks: analysing text input, generating a text representation of the agent's output, and synthesizing the agent's output.
In order to understand what the children type, the semantics of the input have to be analysed from the text. Within e-circus this means mapping a user utterance onto a so-called language act type; a complete analysis of the utterance is therefore not necessary. However, simple keyword matching is insufficient as well, since preliminary user tests revealed that children do sometimes use nested sentence structures in which references or negations are non-trivial to resolve. This is especially true for German, a language with relatively free word order.
For the analysis of text input we use the speech interpretation system SPIN (Engel, 2005), which provides an engine that processes rules for parsing input utterances. This is advantageous because we have to deal with two languages, English and German, and the language-specific work should of course be minimised. With SPIN, only the parsing rules have to be defined for each language; the processing itself is language-independent.
The task of output generation works the other way round: starting from a language act type, and possibly context variables that provide additional information, a text representation of an output utterance is created. This, too, can be done by writing a set of rules for the SPIN system; the processing engine needs only slight modifications compared to the analysis.
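Generation can be sketched as the inverse direction: a rule selects a surface template for a given language act type and fills it with the context variables. The template table and act names below are again hypothetical, not SPIN's actual generation rules.

```python
# Hypothetical English templates, keyed by language act type; the
# placeholders are filled from the context variables.
TEMPLATES_EN = {
    "GREETING": "Hi {name}!",
    "DISAPPROVAL": "Stop it, {name}. That is not nice.",
    "ANSWER": "Because {reason}.",
}

def generate(act: str, context: dict, templates=TEMPLATES_EN) -> str:
    """Render the output utterance for a language act type."""
    return templates[act].format(**context)
```

For example, generate("GREETING", {"name": "Ola"}) yields "Hi Ola!"; swapping in a German template table covers the second language without touching the engine.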
The language system is responsible only for parsing and generation. The control of the dialogue, i.e. which kind of language act is to be uttered, and the reaction in terms of the agents' behaviour or a change in the course of the story, are the responsibility of the agent mind.
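That division of labour can be sketched as a thin policy sitting between parsing and generation: the agent mind alone decides which act to utter next. The policy table below is invented for illustration; the real agent mind is far more elaborate than a lookup.

```python
def agent_mind(incoming_act: str) -> str:
    """Hypothetical dialogue policy: given the language act type of the
    child's utterance, decide which act the agent utters in response.
    The language system never makes this decision itself."""
    policy = {
        "GREETING": "GREETING",    # greet back
        "QUESTION": "ANSWER",      # answer the question
        "DISAPPROVAL": "APOLOGY",  # back down in the story
    }
    return policy.get(incoming_act, "CLARIFY")
```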
Mobile Interaction Styles
Children want to use technologies that support their curiosity, their love of repetition, and their need for control, as Druin (1999) points out. Current input devices such as the keyboard and mouse do not take children's special needs into account. Children want to interact in groups, move around, and interact with their real-world surroundings. One goal of our project is therefore to equip ORIENT with a non-verbal, multi-user interaction style that is quick and hands-on to use, yet also appeals to children's curiosity.
Thus, one of our primary goals is to use physical objects in the children's real-world surroundings for interaction with the virtual world, closing the gap between the two worlds. Mobile phones seem well suited to fulfil many of these requirements and can therefore act as an interface between the two worlds. They are mobile, so children can move around, and multi-user interaction is quite simple with phones. Another benefit is that mobile phone technology is mature and offers many special features, such as a camera, an RFID reader, and several network interfaces, which are needed for real-world interactions. All these benefits make the mobile phone an interesting device in human-computer interaction. In our view, mobile phones can serve as the non-verbal interaction device for the real-world interactions required in ORIENT.
Currently we are searching for the real-world mobile interaction styles that fit ORIENT best. One of these techniques is called touching. It enables real-world interaction with physical objects that are augmented with RFID tags: using a mobile phone with a built-in RFID reader, objects can be selected simply by touching them. This interaction offers a wide spectrum of scenarios. A user could pick up a real-world object to integrate it into the virtual world, or to signal some other event. In this way, formerly purely physical objects can become part of the virtual world through natural and easy-to-use interactions. Other mobile interaction techniques that make use of the users' movements might also be usable: mobile phones with built-in accelerometers can detect the user's movement and feed this information to an application, so a phone can serve as a pointing device instead of a mouse. With these mobile interaction styles we expect to increase social interaction, involvement with the virtual characters, and collaboration between the children.
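A minimal sketch of the touching technique, assuming a callback fired by the phone's RFID reader: tag IDs attached to physical objects are looked up in a registry that names their virtual-world counterparts. All tag IDs, object names, and the callback itself are invented for this example.

```python
# Hypothetical registry: RFID tag IDs attached to physical objects,
# mapped to the virtual-world objects they stand for.
TAG_TO_OBJECT = {
    "04:A2:3B:11": "water_jug",
    "04:7F:C1:09": "drum",
}

def on_touch(tag_id: str) -> str:
    """Called when the RFID reader detects a tag: select the touched
    object in the virtual world, or ignore unknown tags."""
    obj = TAG_TO_OBJECT.get(tag_id)
    if obj is None:
        return "ignore"
    return f"select:{obj}"
```

The same dispatch shape would serve the accelerometer style: a movement classifier produces an event name, and a registry maps it to an action in the virtual world.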
Engel, R. (2005): Robust and Efficient Semantic Parsing of Free Word Order Languages in Spoken Dialogue Systems. Interspeech 2005, Lisbon, Portugal.
Druin, A. (1999): The Design of Children's Technology. Morgan Kaufmann Publishers.