There was a very interesting plenary talk at ICRA 2009 about "Computational Cameras" given by Prof. Shree Nayar of Columbia University. A video of the plenary is included below, as well as a discussion of some of its contents -- from assorted pixel techniques for high dynamic range to flexible depth of field photography -- all very cool stuff! These developments are particularly relevant to robotics, as cameras are probably the most ubiquitous sensors encountered. This video was made available in the ICRA 2009 podcasts. While there is a large push for open-access journals and conferences, freely-available recordings of conference talks are even scarcer. As I find these more entertaining than television, I really hope this becomes a common trend (perhaps the RSS committee members are watching...?).
For those who just want to watch the video, here it is:
In essence, a computational camera can be thought of as follows (from Columbia's Computer Vision Laboratory):
If a camera can sample the light field in a way that is radically different from the traditional camera, new and useful forms of visual information can be created. This brings us to the notion of a computational camera, which embodies the convergence of the camera and the computer. It uses new optics to map rays in the light field to pixels on the detector in some unconventional fashion. In all cases, because the captured image is optically coded, interpreting it in its raw form might be difficult. However, the computational module knows everything it needs to know about the optics. Hence, it can decode the captured image to produce new types of images that could benefit a vision system - either a human observing the images or a computer vision system that analyzes the images to interpret the scene. Computational cameras can be designed to explore a variety of imaging dimensions in ways that are difficult to do with the traditional camera. The imaging dimensions include spatial resolution, temporal resolution, spectral resolution, field of view, dynamic range and depth.
Several academic and industrial research teams around the world are developing a variety of computational cameras. In addition, there are some well-established imaging techniques that naturally fall within the definition of a computational camera. A few examples are integral imaging for capturing a scene's 4D light field, coded aperture imaging for enhancing an image's signal-to-noise ratio, and wavefront coded imaging for increasing the depth of field of an imaging system. Finally, several research teams are also developing computational image detectors that perform image sensing as well as early visual processing.
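The capture-then-decode pipeline described above can be sketched in a few lines. This is a deliberately simple stand-in, not any particular camera from the talk: the "optics" here are just a hypothetical known low-pass transfer function applied to a made-up 1-D scene. But it shows the key idea -- the raw capture is optically coded (here, badly blurred), yet because the computational module knows the optics exactly, it can invert them:

```python
import numpy as np

# Toy 1-D "scene": a few point sources of different brightness.
scene = np.zeros(64)
scene[[10, 30, 45]] = [1.0, 0.5, 0.8]

# Stand-in for the camera's known optical coding: a fixed transfer
# function applied to the light before it reaches the detector.
# (Real computational cameras use far more exotic ray-to-pixel mappings.)
freqs = np.fft.fftfreq(scene.size)
otf = 1.0 / (1.0 + (freqs / 0.05) ** 2)   # hypothetical, but fully known

# Capture: the detector records the optically coded image.
captured = np.real(np.fft.ifft(np.fft.fft(scene) * otf))

# Decode: the computational module knows `otf` exactly, so it can
# invert the coding (a lightly regularized, Wiener-style inverse).
decoded = np.real(np.fft.ifft(np.fft.fft(captured) * otf /
                              (otf ** 2 + 1e-9)))
```

The raw `captured` signal is heavily smeared and hard to interpret directly, while `decoded` matches the original scene almost exactly -- the information was never lost, just coded.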
Of the topics discussed by Prof. Nayar, I found the following two particularly clever -- but then again, I'm enamored with physical sensing...
We translate the detector along the optical axis during image integration. Consequently, while the detector is collecting photons for a photograph, a large range of scene depths come into and go out of focus. We demonstrate that by controlling how we translate the detector, we can manipulate the depth of field of the imaging system. In our prototype, the detector is mounted on a translation stage that is controlled by a micro-actuator.
After applying some fancy image processing (see paper) to the captured "Flexible Depth of Field" image, you get an image with very good focus in the foreground, the background, and everywhere in between (right image), compared to a normal camera's foreground-only focus (left image).
There is more information on the flexible depth of field project page, including more sample images (and movies).
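Why does sweeping the detector help? A rough intuition (my toy model, not the paper's -- the numbers and Gaussian blur are made up, and the real system ends with a deconvolution step): during the sweep, every scene depth passes through focus at some moment, so the time-averaged blur each depth experiences is roughly the same. A static detector, by contrast, gives a pin-sharp image at one depth and a wide blur at another:

```python
import numpy as np

def gauss(sigma, n=31):
    # Normalized 1-D Gaussian blur kernel; sigma -> 0 gives an impulse.
    x = np.arange(n) - n // 2
    k = np.exp(-x ** 2 / (2 * max(sigma, 1e-8) ** 2))
    return k / k.sum()

# Toy model: blur std. dev. grows linearly with the distance between
# the detector and a depth's focus position along the sweep.
sweep = np.linspace(0.0, 6.0, 200)   # detector positions during exposure

# Time-averaged ("integrated") PSF seen by a near, middle, and far
# depth whose focus positions sit at 0, 3, and 6 along the sweep.
psf_near = np.mean([gauss(abs(s - 0.0)) for s in sweep], axis=0)
psf_mid  = np.mean([gauss(abs(s - 3.0)) for s in sweep], axis=0)
psf_far  = np.mean([gauss(abs(s - 6.0)) for s in sweep], axis=0)

# For comparison, a static detector focused on the near depth blurs
# the far depth with a single wide kernel: gauss(6.0) vs. gauss(0.0).
```

In this toy model the integrated PSFs of the different depths end up far closer to one another than the static camera's impulse-vs-wide-Gaussian pair, which is what makes it plausible for a single deconvolution (the "fancy image processing" above) to sharpen every depth at once.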
The basic idea is pretty simple: trade off raw RGB resolution for higher dynamic range and multispectral response. This is done by placing a mask over the sensing elements that gives each pixel a different level of transparency and spectral response. Computation is then applied to reconstruct various image types (see paper).
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the trade-off between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters.
More can be found on the project page.
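The dynamic-range half of the trade-off is easy to sketch. The following is a crude stand-in for the GAP idea, with made-up numbers: a repeating 2x2 tile of neutral-density attenuations plays the role of the mosaic (the real GAP mosaic also varies spectral response per pixel, and the paper's reconstruction is far more sophisticated than simply dividing out the mask):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scene radiance spanning a wide dynamic range; a uniform
# sensor saturating at 1.0 would clip almost all of these values.
radiance = rng.uniform(0.0, 50.0, size=(8, 8))

# Stand-in for the GAP mosaic: a repeating 2x2 tile of attenuations.
tile = np.array([[1.0, 1 / 4],
                 [1 / 16, 1 / 64]])
mask = np.tile(tile, (4, 4))

# Capture: attenuate, then saturate at the pixel's full-well level (1.0).
captured = np.clip(radiance * mask, 0.0, 1.0)
saturated = captured >= 1.0

# Decode: dividing out the known attenuation recovers the true radiance
# at every unsaturated pixel. The strongly attenuated pixel in each 2x2
# block never saturates here, so every block keeps a usable measurement.
recovered = captured / mask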
Those are my two favorite projects; however, the radial image reconstruction project is also pretty cool (shown below).
I think any one of these computational camera projects would be extremely useful as a robot sensor, and combined they'd be even more powerful. You can find out more about all the projects discussed (and more) at Columbia University's Computer Vision Laboratory website.