Why do we take pictures and videos? Often, the answer is that we hope to
capture moments in time, so that we can later recall and savour them once again.
Digital cameras and camcorders are making it ever easier to record these
moments, but often, something is lost. Photographs freeze time and space, losing
both the sense of motion in a scene and the freedom of movement available to the
original viewer; video is usually of lower resolution and finite duration, and
it too gives up viewpoint freedom. Furthermore, the task of sorting through the
reams of image and video data that an individual records is becoming simply
overwhelming.

In this talk, I will describe research aimed
at helping the user to better capture and re-experience the moment. One approach
is to build complex hardware to acquire an immersive representation of the
scene, allowing virtual flythroughs and the like. The work I will describe is
far less heavy-handed: it is based on handfuls of images and simple video
captures and has more modest goals of representing subtle effects like rippling
water and small parallax. In fact, I argue that these subtle effects are quite
powerful and can better reflect the experience available to a person observing a
scene than, say, arbitrary flythroughs.

The specific projects I will present span a
range of inputs and outputs. From a single photograph of a natural setting, I
will show how one can add pleasing motions such as swaying branches and rippling
water [2]. By taking a handful of nearby photographs or attaching a multi-lens
array to a camera, I will demonstrate how small parallax and synthetic aperture
effects become available to the user [3]. Panoramas can be synthesized from a
set of photos with the same optical centre; I will describe how this idea can be
extended to work with panning videos to create panoramic video textures [1].
Next, I will show some recent progress in combining the spatial resolution and
ease of editing of photographs with the high temporal resolution of video.
Finally, I will describe a new interface to video browsing that leverages the
conventions of hand-drawn storyboards to enable single image illustration of
video clips with intuitive spatial dragging on the summary image to explore the
time axis of the video [4].
[1] Aseem Agarwala, Ke Colin Zheng, Chris Pal, Maneesh Agrawala, Michael Cohen,
Brian Curless, David H. Salesin, and Richard Szeliski. Panoramic video textures.
ACM Transactions on Graphics (SIGGRAPH 2005), 24(3):821–827, 2005.

[2] Yung-Yu Chuang, Dan B Goldman, Ke Colin Zheng, Brian Curless, David H.
Salesin, and Richard Szeliski. Animating pictures with stochastic motion
textures. ACM Transactions on Graphics (SIGGRAPH 2005), 24(3):853–860, 2005.

[3] Todor Georgiev, Ke Colin Zheng, Brian Curless, David H. Salesin, Shree
Nayar, and Chintan Intwala. Spatio-angular resolution tradeoff in integral
photography. In Proceedings of Eurographics Symposium on Rendering, 2006.

[4] Dan B Goldman, Brian Curless, David H. Salesin, and Steven M. Seitz.
Schematic storyboarding for video visualization and editing. ACM Transactions
on Graphics (SIGGRAPH 2006), 25(3):862–871, 2006.