Invited talks

This page lists my invited talks, guest lectures, and tutorials; in many cases, slides are available for download as PDF files. I have tried to correct the obvious typos that appeared in the original versions of these slides.

  2020 | 2019 | 2018 | 2017 | 2016 | 2015 | 2014 | 2013 | 2011 | 2010 | 2009 | 2007 | 2006 | 2004

2020

2019

2018

2017

2016

2015

2014

2013

2011

  • Knowledge-Level Planning with Incomplete Information
    A talk presented at the ICSR 2011 tutorial on Joint Action for Social Robotics: How to Build a Robot that Works Together with Several Humans, Amsterdam, Netherlands
    Abstract   Slides    2011-11-23
    The ability to plan is essential for intelligent agents acting (jointly or otherwise) in dynamic and incompletely known worlds, such as those that arise in JAMES (Joint Action for Multimodal Embodied Social Systems), an EU FP7 project exploring the problem of social interaction in multi-agent environments. Achieving goals under such conditions often requires complex forward deliberation rather than simply reacting to a situation without considering the long-term consequences of actions. This tutorial talk gives an overview of automated planning techniques and describes in detail the PKS (Planning with Knowledge and Sensing) planner used in JAMES. We focus on the problem of planning with incomplete information and sensing actions, and demonstrate how joint actions for both physical tasks and interaction tasks can be planned using PKS.
  • Planning for Social Interaction in Embodied Robot Systems
    A talk presented at the Computational Linguistics Colloquium, Universität Potsdam, Germany
    Abstract   Slides    2011-11-21
    In recent years, developers of robot systems have begun to consider the social aspects of human-robot interaction: robots coexisting with humans must not only be able to carry out physical tasks in the world, but also be able to interact with humans in a socially appropriate manner. However, achieving this goal requires endowing a robot with the ability to recognise, understand, and generate a range of multimodal social signals (e.g., gesture, facial expression, language, etc.), in order to interpret and respond to humans in a realistic way.

    JAMES (Joint Action for Multimodal Embodied Social Systems) is a new European Commission-funded project (coordinated by the University of Edinburgh), exploring the problem of social interaction in human-robot environments. JAMES aims to develop a socially intelligent robot that combines task-based behaviour with the ability to understand and respond to a wide range of embodied, multimodal, communicative signals in a socially appropriate manner. In particular, JAMES takes a multidisciplinary approach to this problem, combining computer vision, natural language processing, machine learning, automated planning, and human-human studies on social interaction, to develop an embodied robot system that supports realistic, multi-party interactions in a bartending scenario.

    In this talk, I will present an overview of the JAMES project and highlight its main research themes, with emphasis on the role of automated planning and reasoning in the project. In particular, I will discuss how general purpose knowledge-level planning techniques are used to generate high-level plans that mix physical robot operations with certain types of social behaviour (e.g., speech act-based human-robot interaction), by viewing the plan generation task as an instance of the problem of planning with incomplete information and sensing actions.
  • JAMES: Joint Action for Multimodal Embodied Social Systems
    A talk presented to the Dialogue and Interaction Working Group (DIG), School of Informatics, University of Edinburgh, UK
    Abstract   Slides    2011-02-14
    In recent years, robot developers have begun to consider the social aspects of robot behaviour: a robot coexisting with humans must not only be able to successfully carry out physical tasks in the world, but must also be able to interact with humans in a socially appropriate manner. Achieving this behaviour requires endowing a robot with the ability to recognise, understand, and generate multimodal social signals (e.g., gesture, facial expression, language, etc.) in order to interpret and respond to humans in a realistic manner.

    JAMES (Joint Action for Multimodal Embodied Social Systems) is a new EU FP7 project (coordinated by the University of Edinburgh) exploring the problem of social interaction in multi-agent environments. JAMES aims to develop a socially intelligent robot that combines task-based behaviour with the ability to understand and respond to a wide range of embodied, multimodal, communicative signals in a socially appropriate manner. To do this, JAMES will combine the results of studies of human social communicative behaviour with the development of new technical components for a humanoid robot, with the goal of demonstrating the resulting system in a bartending scenario with realistic, open-ended, multi-party interactions.

    In this talk, I will present an overview of JAMES and highlight its main research themes, with particular emphasis on the role of automated planning and reasoning in the project.
  • Planning for Natural Language: Instruction Giving, Robot Dialogue, and Social Interaction
    A talk presented at the Department of Computer Science, Heriot-Watt University, Edinburgh, UK
    Abstract   Slides    2011-01-12
    Natural language generation and dialogue both have long traditions as application areas of automated planning systems. While current mainstream approaches have largely ignored the planning approach to natural language, several recent research efforts have sparked a renewed interest in this area.

    In this talk, I will present ongoing work aimed at applying planning techniques to domains inspired by problems in natural language. In particular, I will describe the GIVE (Generating Instructions in Virtual Environments) domain and a robot dialogue domain from the EU PACO-PLUS project. In GIVE, we model the underlying instruction-giving task as an instance of the classical planning problem, and investigate the efficiency of current off-the-shelf planners like FF and SGPLAN in this domain. In PACO-PLUS, we use the knowledge-level PKS planner and treat the problem of generating sequences of speech acts as the more general problem of planning with incomplete information and sensing actions.

    I will also briefly discuss JAMES (Joint Action for Multimodal Embodied Social Systems), a new EU-funded FP7 project focusing on social interaction in multiagent environments, and the role of planning in this project.

    This talk describes joint work with Alexander Koller (Universität des Saarlandes) on GIVE, Mark Steedman (University of Edinburgh) on PACO-PLUS, and members of the JAMES consortium.

2010

  • Planning and Reasoning for Robot Vision
    A tutorial course presented at Aalborg University Copenhagen, Ballerup, Denmark
    Abstract   Slides    2010-09-17
    In this tutorial I present an introduction to automated planning, including classical planning, belief space planning, and knowledge-level planning. The primary focus of the course is representation, in particular, how different types of planning problems affect the plan generation process. The tutorial is divided into two main parts. In the first part, I review standard approaches to classical STRIPS-style planning, and provide an overview of popular planning methods and planning software. I also present examples of simple planning domains in PDDL. In the second part of the course, I give an introduction to belief space and knowledge-level planning, with a focus on planning with incomplete information and sensing actions. Examples are primarily presented using the representation provided by the PKS planner. I also highlight a number of problems related to plan generation, including plan execution monitoring and action model learning. Throughout the tutorial, I present a series of planning examples from a variety of robot and software systems.
  • Planning for Natural Language: Adventures in Instruction Giving and Robot Dialogue
    A talk presented at the Department of Computer Science, Rutgers University, New Brunswick, New Jersey, USA
    Abstract   Slides    2010-09-08
    Natural language generation and dialogue both have long traditions as application areas of automated planning systems. While current mainstream approaches have largely ignored the planning approach to natural language, several recent research efforts have sparked a renewed interest in this area.

    In this talk, I will describe ongoing work on applying planning techniques to two novel domains inspired by problems in natural language: the GIVE (Generating Instructions in Virtual Environments) domain, and a robot dialogue domain from the EU PACO-PLUS project. In GIVE, we model the underlying task problem as an instance of the classical planning problem, and investigate the efficiency of current off-the-shelf planners like FF and SGPLAN in this domain. In PACO-PLUS, we use the knowledge-level PKS planner and treat the problem of generating sequences of speech acts as the more general problem of planning with incomplete information and sensing actions. Overall, our results are mixed: while some planners perform well on particular problem instances, others pose a challenge for the current generation of automated planners, making these problems suitable benchmarks for future research.

    This talk describes joint work with Alexander Koller (Universität des Saarlandes) on GIVE, and Mark Steedman (University of Edinburgh) on PACO-PLUS.
  • Planning for Natural Language: Experiments in Instruction Giving and Robot Dialogue
    A talk presented to the Dialogue and Interaction Working Group (DIG), School of Informatics, University of Edinburgh, UK
    Abstract   Slides    2010-06-10
    Natural language generation and dialogue both have long traditions as application areas of automated planning systems. While current mainstream approaches have largely ignored the planning approach to natural language, several recent publications have sparked a renewed interest in this area.

    In this talk, I will describe ongoing work on applying planning techniques to two novel domains inspired by problems in natural language: the GIVE (Generating Instructions in Virtual Environments) domain, and a robot dialogue domain from the PACO-PLUS project. In GIVE, we treat the underlying task problem as an instance of classical planning, and investigate the efficiency of current off-the-shelf planners like FF and SGPLAN in this domain. In PACO-PLUS, we use the knowledge-level PKS planner and view the problem of generating a sequence of speech acts as the more general problem of planning with incomplete information and sensing actions. Overall, our results are mixed: while some planners perform well on particular problem instances, others pose a challenge for the current generation of automated planners.

    This talk describes joint work with Alexander Koller (GIVE) and Mark Steedman (PACO-PLUS).
  • Representations for Classical and Knowledge-Level Planning
    A tutorial presented at the CogX Spring School, Ljubljana, Slovenia
    Abstract   Slides    2010-04-25
    In this talk I present an introduction to automated planning, including classical planning, belief space planning, and knowledge-level planning. The primary focus of the talk is representation, in particular, how different types of planning problems affect the plan generation process. This talk is divided into three parts. In the first part of the talk, I review standard approaches to classical STRIPS-style planning, and provide an introduction to belief space and knowledge-level planning. In the second part of the talk, I describe PKS (Planning with Knowledge and Sensing), a knowledge-level planner that is able to construct plans with incomplete information and sensing actions. In the last part of the talk, I highlight a number of applications of planning to natural language generation and dialogue, and mention some areas of research closely related to the plan generation problem.
  • Adapting Knowledge-Level Planning for Natural Language Dialogue
    A talk presented at the Department of Computer and Information Sciences, University of Strathclyde, Glasgow, UK
    Abstract   Slides    2010-03-19
    The problem of planning a sequence of natural language dialogue moves has many parallels to the general AI planning problem, and can be viewed as an instance of the problem of planning with incomplete information and sensing. While "classical" planning actions model changes to the state of the world, sensing actions change the knowledge state of the agent, often leaving the world state unchanged. Sensing actions also complicate the planning process, giving rise to potentially infinite state spaces and requiring planning systems with the ability to reason about an agent's knowledge state as distinct from the world state. Recent approaches from the knowledge representation and planning communities, however, have been effective at overcoming some of the representational and inferential drawbacks of sensing actions, and such actions have been applied in a variety of domains.

    In this talk, I describe ongoing work aimed at adapting automated planning techniques for natural language dialogue. I give an overview of PKS (Planning with Knowledge and Sensing), a contingent planner that forms the basis of our approach. Unlike traditional planners, PKS operates at the "knowledge level" to generate plans using sensing actions and incomplete information, by reasoning about the planner's knowledge state rather than the world state. PKS also supports features like functions, run-time variables, and program-like constructs, as part of its action representation. Building on ideas from the knowledge representation and planning communities, I describe a set of extensions to PKS for speech act-based dialogue, illustrating that the same underlying techniques for ordinary action planning can be applied to dialogue planning.

    This talk describes joint work with M. Steedman and is presented in the context of PACO-PLUS, an EU project investigating perception, action, and language in real-world robot environments.
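The classical STRIPS-style representation reviewed in the tutorials above can be made concrete with a small sketch. The toy domain, action names, and breadth-first search below are my own illustrative assumptions for this page; this is not PKS, FF, SGPLAN, or any other software described in the talks.

```python
from collections import deque

# Minimal STRIPS-style forward search (an illustrative sketch only).
# A state is a frozenset of ground atoms; an action has a
# precondition set, an add list, and a delete list.

class Action:
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(init, goal, actions):
    """Breadth-first search from init to any state satisfying goal."""
    goal = frozenset(goal)
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None

# Toy domain (invented for illustration): a robot moves between
# two rooms and carries a ball from room a to room b.
actions = [
    Action("move(a,b)", {"at(robot,a)"}, {"at(robot,b)"}, {"at(robot,a)"}),
    Action("move(b,a)", {"at(robot,b)"}, {"at(robot,a)"}, {"at(robot,b)"}),
    Action("pick(ball,a)", {"at(robot,a)", "at(ball,a)"}, {"holding(ball)"}, {"at(ball,a)"}),
    Action("drop(ball,b)", {"at(robot,b)", "holding(ball)"}, {"at(ball,b)"}, {"holding(ball)"}),
]
print(plan({"at(robot,a)", "at(ball,a)"}, {"at(ball,b)"}, actions))
# → ['pick(ball,a)', 'move(a,b)', 'drop(ball,b)']
```

The add/delete-list formulation here is the standard STRIPS idea: an action is applicable when its preconditions are a subset of the current state, and applying it removes the delete list and adds the add list.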

2009

  • Dialogue as Planning with Knowledge and Sensing
    A talk presented at the Workshop on Situated Understanding of Intention, University of Pennsylvania, Philadelphia, USA
    Abstract   Slides    2009-07-23
    The problem of planning a sequence of dialogue moves---and hence the problem of recognising the intention behind partially-observable sequences of dialogue actions---can be viewed as an instance of the more general AI problem of planning with incomplete information and sensing. In contrast to "classical" planning actions, which model changes to the state of the world, sensing actions change the knowledge state of the agent while leaving the world state unchanged. As a result, such actions complicate the traditional planning process: sensing actions give rise to large or potentially infinite state spaces, and require a planner that can reason about its knowledge state as distinct from the world state.

    This talk gives an overview of PKS (Planning with Knowledge and Sensing), a conditional planner that can generate plans with sensing actions and incomplete information (Petrick & Bacchus 2002, 2004). Unlike traditional planners, PKS operates at the "knowledge level", by reasoning about the knowledge state, rather than the world state. PKS also supports features like functions, run-time variables, and program-like constructs, as part of its action representation. Using recent results from the knowledge representation and planning communities, this talk presents a series of extensions to PKS, with the goal of adapting PKS to speech act-based dialogue planning. This talk also describes the role of PKS in PACO-PLUS, an investigation of interacting perception, action, and cognition in robot systems, illustrating how the same underlying techniques for ordinary action planning in robot domains transfer to dialogue planning.

    This talk describes joint work with M. Steedman.
  • (Dialogue) Planning with Knowledge and Sensing
    A talk presented at the Universität des Saarlandes, Saarbrücken, Germany
    Abstract   Slides    2009-03-05
    The problem of planning dialogue moves can be viewed as an instance of the more general AI problem of planning with incomplete information and sensing. While ordinary planning actions (such as those used by classical planners) typically change the state of the world, sensing actions change the knowledge state of the agent and often leave the world state unchanged. Sensing actions also complicate the planning process, since such actions require the ability to reason about an agent's knowledge state, and give rise to potentially infinite state spaces. Recent approaches from the knowledge representation and planning communities, however, have been effective at overcoming some of the representational and inferential drawbacks of sensing actions, and such actions have been applied in a variety of domains.

    In this talk, I will give an overview of PKS (Planning with Knowledge and Sensing), a contingent planner that is able to construct plans with sensing actions and incomplete information. PKS works at the "knowledge level" by reasoning about how the planner's knowledge state, rather than the world state, changes as a result of action. PKS also supports features like functions, run-time variables, and simple program-like constructs as part of its action representation language. I will also report on work currently in progress that aims to use PKS directly as a platform for speech act-based dialogue planning.

    This talk describes joint work with M. Steedman that builds on an approach first presented in (Steedman & Petrick 2007).
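The distinction these talks draw between world-changing actions and knowledge-producing (sensing) actions can be illustrated with a short sketch. The two databases below, known facts and "knows-whether" facts, are a loose simplification of the knowledge-level idea behind planners like PKS; they are not PKS's actual representation or syntax.

```python
# A simplified illustration of knowledge-level planning state:
# the planner tracks what the agent knows, not the world itself.

class KnowledgeState:
    def __init__(self, kf=(), kw=()):
        self.kf = set(kf)  # facts the agent knows to be true
        self.kw = set(kw)  # facts whose truth value the agent will know (either way)

def physical_act(ks, adds):
    """An ordinary action: updates what is known about the world."""
    return KnowledgeState(ks.kf | set(adds), ks.kw)

def sensing_act(ks, prop):
    """A sensing action: the world is unchanged, but after sensing the
    agent knows *whether* prop holds, so a contingent plan can branch
    on its outcome (still unknown at plan time)."""
    return KnowledgeState(ks.kf, ks.kw | {prop})

# Hypothetical bartending-style example (names invented for illustration).
ks0 = KnowledgeState(kf={"at(robot,bar)"})
ks1 = sensing_act(ks0, "wants_drink(customer)")

assert "wants_drink(customer)" in ks1.kw       # truth value will be known
assert "wants_drink(customer)" not in ks1.kf   # but it is not known to be true

ks2 = physical_act(ks1, {"served(customer)"})  # a world-changing step
```

The point of the sketch is the asymmetry: `sensing_act` never touches the known-facts database, which is exactly why sensing actions force a planner to reason about knowledge states rather than world states.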

2007

  • A Knowledge-Level Approach to Dialogue Planning
    A talk presented at the Institute for Communicating and Collaborative Systems (ICCS) Seminar Series, University of Edinburgh, UK
    Abstract   Slides    2007-10-12
    The problem of planning dialogue moves can be viewed as an instance of the more general AI problem of planning with incomplete information and sensing. Sensing or knowledge-producing actions complicate the planning process since such actions require reasoning about an agent's knowledge state, and give rise to potentially infinite state spaces. Recent approaches from the knowledge representation and planning communities, however, have been effective at representing and reasoning with sensing actions in a variety of scenarios.

    In this talk, I report on an approach to planning dialogue actions based on intuitions from the PKS (Planning with Knowledge and Sensing) planner (Petrick & Bacchus 2002, 2004), a "knowledge-level" conditional planner that is able to construct plans with sensing actions under conditions of incomplete information. I focus primarily on a set of extensions to the Linear Dynamic Event Calculus (LDEC) (Steedman 1997, 2002), inspired by PKS, and show how LDEC can be applied to the problem of planning mixed-initiative collaborative discourse. I also describe some preliminary work that aims to use PKS directly as a platform for dialogue planning.

    This talk describes joint work with M. Steedman, based on (Steedman & Petrick 2007).

2006

  • An Overview of the Linear Dynamic Event Calculus (LDEC) and STRIPS-style Planning
    A short presentation I gave in Göttingen, Germany
    Abstract   Slides    2006-11-23
    The Linear Dynamic Event Calculus (LDEC) is a logical formalism for modelling actions and change, based on ideas from the event calculus, situation calculus, STRIPS planning representation, as well as dynamic and linear logic. In this talk I will give a brief overview of LDEC and show how it can be used to encode simple planning domains. I will then provide a short introduction to STRIPS and describe how a set of restricted LDEC axioms can easily be compiled into STRIPS-style planning operators, usable by many modern planning systems.
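The compilation step described above, from restricted axioms to STRIPS-style operators, can be sketched as follows. The axiom encoding (a precondition list and a postcondition list with "not" marking negated literals) is invented here for illustration and is not LDEC's actual syntax.

```python
# A hypothetical sketch of compiling a restricted dynamic-logic-style
# axiom "pre => [action] post" into a STRIPS operator: positive
# postconditions become the add list, negated ones the delete list.

def compile_axiom(action, pre, post):
    """Split postconditions into STRIPS add and delete lists."""
    add = {p for p in post if not p.startswith("not ")}
    delete = {p[len("not "):] for p in post if p.startswith("not ")}
    return {"name": action, "pre": set(pre), "add": add, "del": delete}

# Illustrative axiom: moving from x to y requires being at x with
# x connected to y, and leaves the agent at y (and no longer at x).
op = compile_axiom(
    "move(x,y)",
    pre=["at(agent,x)", "connected(x,y)"],
    post=["at(agent,y)", "not at(agent,x)"],
)
```

The resulting operator has exactly the precondition/add/delete shape that many modern planning systems consume, which is the sense in which the restricted axioms compile "easily" into STRIPS.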

2004

  • Extending the Knowledge-Based Approach to Planning with Incomplete Information and Sensing
    A talk presented at the Cognitive Robotics Group Seminar, Department of Computer Science, University of Toronto, Canada
    2004-06-15
    In (Petrick & Bacchus 2002), a "knowledge-level" approach to planning under incomplete knowledge and sensing was presented. In comparison with alternative approaches based on representing sets of possible worlds, this higher-level representation is richer, but the inferences it supports are weaker. Nevertheless, because of its richer representation, it is able to solve problems that cannot be solved by alternative approaches.

    In this talk we examine a collection of new techniques for increasing both the representational and inferential power of the knowledge-level approach. These techniques have been incorporated into the PKS (Planning with Knowledge and Sensing) planning system. Taken together they allow us to solve a range of new types of planning problems under incomplete knowledge and sensing.

    This talk describes joint work with F. Bacchus, based on (Petrick & Bacchus 2004).
Valid HTML & CSS. Adapted from design by HTML5 UP. Updated by Ron Petrick 2020-04-25.