BMVC 2006 
Edinburgh 4-7 September

Speakers information


BMVC 2006 conference guest speakers:

Sep 4 Maria Petrou (Imperial College, London)
  Human vision system and data processing
  Abstract: The tutorial will first review our current understanding of the workings of the human visual system, from the retina through to the visual cortex. Three levels of data processing will then be considered.
1) The sampling pattern of the retina is not a rectangular grid.
Research problem: dealing with irregularly sampled data. How do you compute the Fourier transform of such data? How do you extract texture features? How can you construct regular-grid displays from such data?
2) The role of the part of the visual cortex known as V1 is to create saliency maps of the viewed scene. What can we learn from that?
How can such saliency maps be produced and used for further processing?
How can we emulate such a process using a network of interconnected neurons?
3) How is information stored in the brain? What is a network of ideas, and what is its topology? How can we deduce it from collected data?
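
The first research problem above has a direct computational form: for samples taken at irregular positions, the usual FFT does not apply, but the Fourier transform can still be evaluated as an explicit sum over the sample points. A minimal sketch of this nonuniform DFT (not any method from the tutorial, just an illustration of the problem) might look like:

```python
import numpy as np

def nudft(samples, positions, freqs):
    # Nonuniform DFT: evaluate sum_k f_k * exp(-2*pi*i*nu*x_k)
    # for samples f_k taken at irregular positions x_k, for each
    # requested frequency nu. O(MN) rather than FFT's O(N log N).
    phase = np.exp(-2j * np.pi * np.outer(freqs, positions))
    return phase @ samples

# A cosine of frequency 3, sampled at 200 random (non-grid) positions
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
f = np.cos(2 * np.pi * 3 * x)

freqs = np.arange(0, 8)
spectrum = np.abs(nudft(f, x, freqs))
print(int(np.argmax(spectrum)))  # magnitude peaks at frequency 3
```

The sketch recovers the dominant frequency despite the irregular sampling; the harder questions the tutorial raises (texture features, resampling onto a regular grid) build on exactly this kind of machinery.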
Sep 5 David J. Kriegman (University of California, San Diego)
  Vision in the Small: Reconstructing the Structure of Protein Macromolecules from Cryo-Electron Micrographs
  Abstract: Single particle reconstruction using Cryo-Electron Microscopy (cryo-EM) is an emerging technique in structural biology for estimating the 3-D structure (density) of protein macromolecules. Unlike tomography, where a large number of images of a specimen can be acquired, the number of images of an individual particle is limited by radiation damage. Instead, the specimen consists of identical copies of the same protein macromolecule embedded in vitreous ice at random and unknown 3-D orientations. Because the images are extremely noisy, thousands to hundreds of thousands of projections are needed to achieve the desired resolution of 5Å. Together with the ways this imaging modality differs from conventional photography, single particle reconstruction poses a unique set of challenges to existing computer vision algorithms. Here, we introduce the challenge and opportunity of reconstruction from transmission electron micrographs, and briefly describe our contributions in the areas of particle detection, contrast transfer function (CTF) estimation, and initial 3-D model construction.
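
The imaging setup the abstract describes — identical particles at random unknown orientations, each yielding one very noisy 2-D projection — can be sketched as a toy forward model. This is only an illustration of the setup under simplifying assumptions (orthographic projection, a point-cloud "density", Gaussian noise), not the speaker's reconstruction pipeline:

```python
import numpy as np

def random_rotation(rng):
    # Approximately uniform random rotation: QR-decompose a Gaussian
    # matrix, sign-correct the diagonal, and force det = +1.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

def noisy_projection(points, rot, size=32, snr=0.1, rng=None):
    # Rotate the particle into an unknown pose, integrate along z
    # (orthographic projection onto the image plane), then add strong
    # Gaussian noise to mimic a low-dose cryo-EM exposure.
    rng = rng or np.random.default_rng()
    xy = (points @ rot.T)[:, :2]
    img, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                               bins=size, range=[[-1, 1], [-1, 1]])
    noise = rng.normal(scale=img.std() / np.sqrt(snr), size=img.shape)
    return img + noise

rng = np.random.default_rng(1)
particle = rng.normal(scale=0.3, size=(500, 3))  # toy particle as 3-D points
image = noisy_projection(particle, random_rotation(rng), rng=rng)
print(image.shape)
```

Averaging many such projections only helps once their unknown orientations have been estimated, which is why the vision problems the talk covers (particle detection, CTF estimation, initial model construction) are hard.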
Sep 6 Brian Curless (University of Washington)
  Capturing Visual Experiences
  Abstract: Why do we take pictures and videos? Often, the answer is that we hope to capture moments in time, so that we can later recall and savour them once again. Digital cameras and camcorders are making it ever easier to record these moments, but often, something is lost. Photographs freeze time and space, losing the sense of motion in a scene and losing the freedom of motion available to the original viewer, and video is usually of lower resolution and finite duration, and still gives up viewpoint freedom. Furthermore, the task of sorting through the reams of image and video data that an individual records is becoming simply burdensome.

In this talk, I will describe research aimed at helping the user to better capture and re-experience the moment. One approach is to build complex hardware to acquire an immersive representation of the scene, allowing virtual flythroughs and the like. The work I will describe is far less heavy-handed - it is based on handfuls of images and simple video captures and has more modest goals of representing subtle effects like rippling water and small parallax. In fact, I argue that these subtle effects are quite powerful and can better reflect the experience available to a person observing a scene than, say, arbitrary flythroughs.

The specific projects I will present span a range of inputs and outputs. From a single photograph of a natural setting, I will show how one can add pleasing motions such as swaying branches and rippling water [2]. By taking a handful of nearby photographs or attaching a multi-lens array to a camera, I will demonstrate how small parallax and synthetic aperture effects become available to the user [3]. Panoramas can be synthesized from a set of photos with the same optical centre; I will describe how this idea can be extended to panning videos to create panoramic video textures [1]. Next, I will show some recent progress in combining the spatial resolution and editing ease of photographs with the high temporal resolution of video. Finally, I will describe a new interface for video browsing that leverages the conventions of hand-drawn storyboards: a single image illustrates a video clip, and intuitive spatial dragging on that summary image explores the video's time axis [4].
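
The panorama synthesis mentioned above rests on a standard geometric fact: when photos share an optical centre, pixels in one image map to another purely through the camera rotation, via the homography H = K R K^(-1) (K being the shared intrinsics, R the relative rotation). A minimal sketch of that mapping, as general background rather than the papers' actual stitching method:

```python
import numpy as np

def rotation_homography(K, R):
    # For cameras sharing an optical centre, pixels of one view map to
    # the other through H = K @ R @ inv(K): no parallax, rotation only.
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    # Apply a homography to a pixel in homogeneous coordinates.
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Illustrative intrinsics: 800-pixel focal length, principal point (320, 240)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)  # identity rotation: the homography leaves pixels fixed
x, y = warp_point(rotation_homography(K, R), 100.0, 50.0)
print(x, y)
```

Because no 3-D structure enters the mapping, panoramas from a fixed optical centre can be stitched without depth estimation - and the same property is what lets panning video be reprojected into a panoramic video texture.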

[1] Aseem Agarwala, Ke Colin Zheng, Chris Pal, Maneesh Agrawala, Michael Cohen, Brian Curless, David H. Salesin, and Richard Szeliski. Panoramic video textures. ACM Transactions on Graphics (SIGGRAPH 2005), 24(3):821-827, 2005.
[2] Yung-Yu Chuang, Dan B Goldman, Ke Colin Zheng, Brian Curless, David H. Salesin, and Richard Szeliski. Animating pictures with stochastic motion textures. ACM Transactions on Graphics (SIGGRAPH 2005), 24(3):853-860, 2005.
[3] Todor Georgiev, Ke Colin Zheng, Brian Curless, David H. Salesin, Shree Nayar, and Chintan Intwala. Spatio-angular resolution tradeoff in integral photography. In Proceedings of the Eurographics Symposium on Rendering, 2006.
[4] Dan B Goldman, Brian Curless, David H. Salesin, and Steven M. Seitz. Schematic storyboarding for video visualization and editing. ACM Transactions on Graphics (SIGGRAPH 2006), 25(3):862-871, 2006.