Agents and affective systems

Thinking and feeling

Cartesian mind-body dualism has had a powerful impact on AI, as on so many other fields of intellectual enquiry. The idea that intelligence is identical to high-level reasoning can be seen in the early focus on games like chess and on brain-teaser-like problems, as in the General Problem Solver. This in turn was based on the IQ view of intelligence of 1950s America. If reason is "good" then emotion is "bad", so that computers are potentially superior to humans if only they can be equipped with a general problem solver while being spared messy human emotions.

From about the 1970s, this position changed as research found more and more interaction between cognition and emotion. Affect interacts with memory, attentional focus, goal management and problem-solving, so that, as Minsky pointed out in "The Society of Mind" in 1985: "the question is not whether intelligent machines can have any emotion, but whether machines can be intelligent without any emotions". Cognitive appraisal theory looked at the link the other way round, treating cognition as an integral part of the emotion-creating process, which it framed as responding to the following two questions:

  1. Do I have a stake in this situation, and if so am I at risk of harm/loss, threat or challenge? (primary appraisal)
  2. If primary appraisal suggests harm/loss, threat or challenge, what can be done about this situation? (secondary appraisal)
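The two-stage appraisal above can be sketched in code. This is a minimal illustrative sketch, not any published implementation; the function names, event fields and coping labels are all invented for the example.

```python
# Sketch of Lazarus-style two-stage appraisal. All names are illustrative.

def primary_appraisal(event, goals):
    """Do I have a stake in this event? Returns None if irrelevant,
    else a rough relevance label."""
    for goal in goals:
        if event["affects"] == goal["concern"]:
            if event["impact"] < 0:
                # Actual loss vs. anticipated loss
                return "harm" if event["certain"] else "threat"
            return "challenge"
    return None  # no stake: no emotion generated

def secondary_appraisal(relevance, coping_resources):
    """Given a stake, what can be done about this situation?"""
    if relevance is None:
        return "ignore"
    return "problem-focused coping" if coping_resources else "emotion-focused coping"

goals = [{"concern": "health"}]
event = {"affects": "health", "impact": -1, "certain": False}
rel = primary_appraisal(event, goals)
print(rel, "->", secondary_appraisal(rel, coping_resources=True))
# an uncertain negative event touching a goal appraises as a threat
```

The key structural point the sketch makes is that emotion only arises once an event is judged relevant to some goal: events with no stake fall through to `None`.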

Psychologists who have worked on cognitive appraisal, such as Lazarus and Frijda, have had a substantial influence on work on Intelligent Virtual Agents (IVAs), though the model of Ortony, Clore and Collins - the OCC model - has been the most widely implemented.

Cartesian dualism was addressed more recently by Antonio Damasio's 1994 book "Descartes' Error: Emotion, Reason, and the Human Brain", which not only discussed the interaction between affect and cognition but also stressed the importance of embodiment through Damasio's theory of somatic markers.

The tension between the cognitive appraisal account and the view from neurophysiology has not yet been resolved (Aylett 06), though there is a growing field of neurocognition which tries to relate cognitive functions to the structure of the brain. Within work on agents (both graphical and robotic), some workers have developed architectures in which emotion is part of a homeostatic control mechanism. Often incorporating a model of the endocrine system (Canamero 97), this approach suggests that emotion should be viewed as the set of brain-body changes resulting from the movement of the current active point of a brain-body process outside an organism-specific 'comfort zone'. It therefore does not require a single meter-like component in an agent architecture to represent an emotion, as is typical of implementations of appraisal theories, but instead offers a distributed representation whose internal process states and external expressive behaviour can be interpreted as an emotion. Doerner's PSI theory is an interesting attempt to link such a drive-based system to higher-level constructs.
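The homeostatic view can be sketched as follows: emotion is read off the pattern of internal variables that have drifted outside their comfort zones, rather than stored in any single "emotion meter". This is only an illustrative sketch in the spirit of Canamero (1997); the variable names and zone values are invented.

```python
# Homeostatic sketch: a distributed representation of emotion.
# Variable names and comfort-zone bounds are invented for illustration.

COMFORT_ZONES = {
    "energy":    (0.4, 1.0),   # low energy -> hunger-like drive
    "integrity": (0.8, 1.0),   # damage -> fear/pain-like state
    "proximity": (0.2, 0.7),   # crowding or isolation
}

def out_of_zone(state):
    """Return the set of variables currently outside their comfort zone."""
    return {v for v, (lo, hi) in COMFORT_ZONES.items()
            if not lo <= state[v] <= hi}

def read_emotion(state):
    """Interpret the distributed pattern of deviations as an emotion label."""
    deviant = out_of_zone(state)
    if not deviant:
        return "contentment"
    if "integrity" in deviant:
        return "fear"
    if "energy" in deviant:
        return "hunger"
    return "discomfort"

print(read_emotion({"energy": 0.9, "integrity": 0.5, "proximity": 0.5}))
# damaged integrity reads as fear
```

Note that `read_emotion` is an external interpretation of the state pattern: nothing inside the agent stores "fear" as such, which is the contrast with meter-like appraisal implementations drawn above.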

What role does affect play in agents?

Of course it is scientifically interesting to build affective architectures as an aspect of cognitive modelling. However the existence of affective systems in living creatures under the pressure of evolution suggests that it plays a number of functional roles (Aylett 04). One is as part of an action-selection mechanism that helps an agent to 'do the right thing' in a dynamic and social environment which also includes many other agents and human users. Emotion can be incorporated into action selection both indirectly and directly in much the same way as perception, and indeed can be thought of as functioning rather like an internal sensing process concerned with all the other running processes. It is also clear that emotion can act as a cheap short-term memory: the agent does not need to recall the details since the associated emotional state can act as a surrogate for it.
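The idea of emotion entering action selection like an internal sense can be sketched directly: candidate behaviours are scored against external percepts and the current affective state alike. This is a toy sketch under invented names and weights, not any particular published action-selection mechanism.

```python
# Sketch: affect modulates behaviour scores in the same way percepts do.
# Behaviour names, percept fields and weights are all illustrative.

def select_action(percepts, affect):
    scores = {
        "flee":    percepts["predator_near"] * (1.0 + affect["fear"]),
        "forage":  percepts["food_visible"] * (1.0 - affect["fear"]),
        "explore": 0.3 * (1.0 - affect["fear"]),
    }
    return max(scores, key=scores.get)

# High fear biases selection away from foraging even when food is visible.
print(select_action({"predator_near": 0.5, "food_visible": 0.8},
                    {"fear": 0.9}))   # flee

# Emotion as cheap short-term memory: the agent keeps only the affective
# tag of an episode, not the episode itself.
place_memory = {"dark_cave": "fear"}  # surrogate for the full encounter
```

The second fragment illustrates the memory point from the paragraph above: the tag `"fear"` stands in for the details of what happened at that place.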

Affect can also be linked very closely to planning: the work of Gratch has used the OCC definitions of hope and fear together with Lazarus' coping theory to produce an affective planner used in a number of synthetic agent-based systems at USC. The FearNot! agent architecture developed in the VICTEC project also took this approach (Aylett et al 06).
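The OCC treatment of hope and fear as prospect-based emotions lends itself to a simple sketch: intensity is driven by how desirable an outcome is and how likely the planner judges it to be. The intensity formula (likelihood times desirability) and the function names below are assumptions for illustration, not the published implementation of Gratch's planner.

```python
# Illustrative OCC-style prospect emotions attached to plan outcomes.

def hope(likelihood, desirability):
    """Hope: prospect of a desirable outcome (desirability > 0)."""
    return likelihood * max(desirability, 0.0)

def fear(likelihood, desirability):
    """Fear: prospect of an undesirable outcome (desirability < 0)."""
    return likelihood * max(-desirability, 0.0)

# A plan step judged 70% likely to achieve a strongly desired goal:
print(round(hope(0.7, 0.8), 2))   # 0.56
# A threat judged 30% likely to wreck that goal:
print(round(fear(0.3, -0.8), 2))  # 0.24
```

On this kind of scheme a planner can re-appraise hope and fear as likelihood estimates change during execution, which is what couples the affective state to the planning process.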

Embodiment as communication

Embodiment has a number of consequences. It means that an IVA typically interacts with its virtual world through sensors, of greater or lesser elaboration, and requires a perceptual component in its architecture to handle this. It raises sometimes complex control issues, as limbs, trunks and heads have to move in a competent and believable fashion. However, just as significant is the use of embodiment as a communication mechanism for the internal state of the agent, complementing explicit communication mechanisms such as natural language.

One important reason for doing this is to support the continuing human process of inferring the intentions of an IVA - its motives and goals. This can help to produce a feeling of coherent action which is required for the user to feel that in some sense they 'understand' what an IVA is doing. A vital component of this process for the user is recognising the emotional state of the IVA and relating it to their own affective state. If in turn the IVA is able to recognise the affective state of the user and perform an equivalent integrative process, then one could speak of an 'affective loop' between user and IVA.
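The affective loop can be caricatured as a cycle in which each side senses the other's affective state and folds it into its own. The update rule below (each state pulled a fixed fraction towards the other) is purely a placeholder to make the loop structure concrete; it stands in for whatever recognition and integrative processes a real system would use.

```python
# Toy sketch of an affective loop between user and IVA.
# The mixing weights are arbitrary placeholders.

def affective_loop_step(agent_state, user_state):
    # Agent recognises the user's state and shifts slightly towards it
    # (a crude stand-in for an integrative/empathic process).
    agent_state = 0.8 * agent_state + 0.2 * user_state
    # The user, reading the agent's expressive behaviour, shifts in turn.
    user_state = 0.8 * user_state + 0.2 * agent_state
    return agent_state, user_state

a, u = 0.0, 1.0   # e.g. a calm agent meeting an excited user
for _ in range(5):
    a, u = affective_loop_step(a, u)
print(round(a, 2), round(u, 2))  # the two states converge over the loop
```

Even this caricature shows the defining property of the loop: the two affective states are mutually coupled, so neither can be modelled in isolation.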

Superficially, expressive behaviour may seem to belong entirely to the domain of animation: after all, to move the features of a face, make a gesture, or adopt a particular posture is, for graphical characters, a graphical problem. While this is true for pre-rendered animated characters in film, once a character has to interact in real time the problem is no longer merely one of graphically displaying an affective state, but of generating the affective state in the first place. Thus an XML annotation used to invoke facial expressions has to be generated 'on the fly' if the IVA is to interact with a user. For this reason the problem of expressive behaviour cannot in the end be separated from the problem of an affective agent architecture.
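Generating expression mark-up on the fly might look like the sketch below: the current affective state drives the choice of tag rather than a hand-authored script. The tag vocabulary here is invented for illustration; real IVA systems use schemes such as the Behavior Markup Language (BML).

```python
# Sketch: emit expression mark-up from the agent's current affective state.
# The <utterance>/<face> tags are an invented vocabulary.

import xml.etree.ElementTree as ET

def expression_markup(emotion, intensity, text):
    utterance = ET.Element("utterance")
    face = ET.SubElement(utterance, "face",
                         expression=emotion, intensity=f"{intensity:.1f}")
    face.text = text
    return ET.tostring(utterance, encoding="unicode")

# Called at interaction time with whatever the architecture currently feels:
print(expression_markup("sad", 0.7, "I lost my book."))
```

The point of the sketch is the call site: `emotion` and `intensity` arrive from the affective architecture at interaction time, which is why the annotation cannot be authored in advance.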


Aylett, R.S. (2004) Agents and affect: why embodied agents need affective systems. Invited paper, 3rd Hellenic Conference on AI, Samos, May 2004. Springer Verlag, LNAI 3025, pp. 496-504.

Aylett, R.S., Dias, J. and Paiva, A. (2006) An affectively-driven planner for synthetic characters. Proceedings, ICAPS 2006. AAAI Press.

Aylett, R.S. (2006) Emotion as an integrative process between non-symbolic and symbolic systems in intelligent agents. AISB workshop GC5: Architecture of Brain and Mind: Integrating high level cognitive processes with brain mechanisms and functions in a working robot. AISB Symposia 2006.

Canamero, D. (1997) A Hormonal Model of Emotions for Behavior Control. VUB AI-Memo 97-6, Free University of Brussels, Belgium. Presented as a poster at the Fourth European Conference on Artificial Life (ECAL'97), Brighton, UK, July 1997.

Last Update: 24 Sept 2006