The workshop will start at 1pm on September 5 and close at 1pm on September 7.
The detailed program can be found here

September 6, 2012 – “Latent Variable Models for Time-Series and Image Understanding”, Prof Chris Williams (University of Edinburgh)
Abstract. In this digital age there is an ever-increasing abundance of raw data (such as images and data streams), and a great need for methods to extract meaningful structure from this data. It is natural to think in terms of latent (or hidden) variable models, where the patterns in the observed data are explained in terms of hidden factors (or “causes”). I will first discuss the Factorial Switching Linear Dynamical System (FSLDS) model for time-series data, and its application to monitoring the condition of a premature baby receiving intensive care. Here there is a time series of latent factors which are used to describe the state of health of the baby, as well as patterns of artifact in the data. I will also discuss the use of an “X-factor” to model patterns which are clinically important, but are not explained by the other factors. In the second part of the talk I will discuss latent variable models for images, and focus on the Factored Shapes and Appearances (FSA) model for learning parts-based representations of object classes. I’ll show that the FSA model extracts meaningful parts from training data, and that its parameters and representation can be used to perform a range of tasks, including object parsing, segmentation and fine-grained categorization. (Joint work with: John Quinn, Neil McIntosh, Ali Eslami.)
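The generative idea behind a switching linear dynamical system can be sketched in a few lines: a discrete Markov switch variable selects which linear-Gaussian dynamics govern the continuous latent state at each time step. The sketch below is a minimal, non-factorial simulation with made-up parameters; it is not the actual FSLDS model from the talk, whose regimes, dimensions and parameters are specific to the baby-monitoring application.

```python
import numpy as np

# Minimal switching linear dynamical system (SLDS) simulation.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)

T = 100                        # number of time steps
# Two discrete regimes, e.g. "normal" vs. "artifact":
A = {0: np.array([[0.99]]),    # regime 0: slow mean-reverting dynamics
     1: np.array([[0.80]])}    # regime 1: fast decay (e.g. sensor dropout)
Q = {0: 0.05, 1: 0.5}          # process-noise std dev per regime
R = 0.1                        # observation-noise std dev
P_stay = 0.95                  # probability of staying in the current regime

s = 0                          # discrete switch state
x = np.array([1.0])            # continuous latent state
switches, latents, obs = [], [], []

for t in range(T):
    # Markov switching between the two regimes
    if rng.random() > P_stay:
        s = 1 - s
    # Linear-Gaussian dynamics under the currently active regime
    x = A[s] @ x + rng.normal(0.0, Q[s], size=1)
    # Noisy observation of the latent state
    y = x[0] + rng.normal(0.0, R)
    switches.append(s)
    latents.append(x[0])
    obs.append(y)

print(len(obs))  # 100 simulated observations
```

The "factorial" part of the FSLDS replaces the single switch variable with several independent discrete factors whose combination determines the active dynamics; inference then recovers the posterior over those factors from the observations.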
Short bio. Chris Williams is Professor of Machine Learning in the School of Informatics, University of Edinburgh. He is interested in a wide range of theoretical and practical issues in machine learning, statistical pattern recognition, probabilistic graphical models and computer vision. He obtained his PhD (1994) from the University of Toronto, under the supervision of Prof Geoffrey Hinton. After a spell as a research fellow and then Lecturer at Aston University, he moved to the University of Edinburgh in 1998, and was promoted to Professor in 2005. The book “Gaussian Processes for Machine Learning” (co-authored with Carl Rasmussen) was published in 2006. He was NIPS program co-chair in 2009, and is on the editorial boards of JMLR, IJCV and Proc Roy Soc A.

September 7, 2012 – “Can we put Intelligence into Computational Intelligence?”, Prof Leslie Smith (University of Stirling)
Abstract. Computational Intelligence is (currently!) the generic name for a group of areas including neural networks, artificial immune systems, genetic algorithms, fuzzy systems, and a variety of other techniques. These are all very useful non-algorithmic problem-solving techniques. But how do they relate to actual intelligence, as exhibited by animals? One view is that we call “intelligence” all the things that we cannot do with computers – so chess once required intelligence, but doesn’t now, and pattern recognition (like letter recognition or face discovery in a moving image) once required intelligence, but doesn’t now: what we call intelligence changes to exclude what machines can do! Yet, while CI techniques have gradually become capable of tasks previously carried out solely by intelligent (living) beings, there remains the very strong and nagging suspicion that there is something missing from CI systems that is present in actual living intelligent beings.
Does this necessarily lead us into awareness or consciousness research? Does this research have to include 1st person science? Or are there some useful aspects of intelligence that can be tackled, beyond what is generally done with neural networks and the like, but without the need for research on awareness and consciousness?
I will argue that we can learn a lot from the apparent dynamics of neural systems, and further, that by considering the role of time and temporal context in neural systems we can develop systems which, while not sentient as such, can display more of what we might consider everyday intelligence.