Computational Cameras: Exploiting Megapixels and Computers to Redefine the Modern Camera


There was a very interesting plenary talk at ICRA 2009 about "Computational Cameras" given by Prof. Shree Nayar of Columbia University.  A video of the plenary is included below, along with a discussion of some of its contents -- from assorted-pixel techniques for high dynamic range to flexible depth of field photography -- all very cool stuff!  These developments are particularly relevant to robotics, as cameras are probably the most ubiquitous sensors robots carry.  This video was made available through the ICRA 2009 podcasts.  While there is a large push for open-access journals and conferences, freely-available recordings of conference talks are even rarer.  As I find these more entertaining than television, I really hope this becomes a common trend (perhaps the RSS committee members are watching...?).

For those who just want to watch the video, here it is: 

 

In essence, a computational camera can be thought of as follows (from Columbia's Computer Vision Laboratory):

Computational Camera

If a camera can sample the light field in a way that is radically different from the traditional camera, new and useful forms of visual information can be created. This brings us to the notion of a computational camera, which embodies the convergence of the camera and the computer. It uses new optics to map rays in the light field to pixels on the detector in some unconventional fashion. In all cases, because the captured image is optically coded, interpreting it in its raw form might be difficult. However, the computational module knows everything it needs to know about the optics. Hence, it can decode the captured image to produce new types of images that could benefit a vision system - either a human observing the images or a computer vision system that analyzes the images to interpret the scene. Computational cameras can be designed to explore a variety of imaging dimensions in ways that are difficult to do with the traditional camera. The imaging dimensions include spatial resolution, temporal resolution, spectral resolution, field of view, dynamic range and depth.
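The "optically coded image plus decoding" idea above can be illustrated with a toy linear model (the mixing matrix and ray values here are made up purely for illustration): the optics mix light rays into pixels in a known way, so the computational module can invert that coding to recover the original rays.

```python
import numpy as np

# Hypothetical optical coding: each of 4 pixels receives a known
# mixture of 4 light rays.  A real computational camera's coding is
# far richer, but the principle is the same.
A = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

rays = np.array([1.0, 2.0, 3.0, 4.0])  # scene light field (toy values)
coded = A @ rays                       # raw image: hard to interpret directly
decoded = np.linalg.solve(A, coded)    # the module knows A, so it can decode
```

Because the module "knows everything it needs to know about the optics" (the matrix `A` here), the raw image never needs to be human-readable.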

Several academic and industrial research teams around the world are developing a variety of computational cameras. In addition, there are some well-established imaging techniques that naturally fall within the definition of a computational camera. A few examples are integral imaging for capturing a scene's 4D light field, coded aperture imaging for enhancing an image's signal-to-noise ratio, and wavefront coded imaging for increasing the depth of field of an imaging system. Finally, several research teams are also developing computational image detectors that perform image sensing as well as early visual processing.

Of the topics discussed by Prof. Nayar, I found the following two particularly clever -- but then again, I'm enamored with physical sensing...

 

Flexible Depth of Field:

 

Flexible Depth of Field   Flexible Depth of Field Prototype

We translate the detector along the optical axis during image integration. Consequently, while the detector is collecting photons for a photograph, a large range of scene depths come into and go out of focus. We demonstrate that by controlling how we translate the detector, we can manipulate the depth of field of the imaging system.  In our prototype, the detector is mounted on a translation stage that is controlled by a micro-actuator.

After doing some fancy image processing (see paper) of the captured "Flexible Depth of Field" image, you get an image that is in sharp focus from foreground to background (right image), compared to a normal camera's image, which is focused only on the foreground (left image).
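As a rough sketch (not Nayar's actual pipeline), sweeping the detector during the exposure can be emulated by averaging a focus stack, and an all-in-focus image can then be recovered by deconvolving with the resulting, approximately depth-invariant blur kernel. The function names and the Wiener regularizer value are my own choices:

```python
import numpy as np

def focal_sweep_capture(focus_stack):
    """Emulate translating the detector during integration: each slice
    of the stack is the scene focused at one depth, and the exposure
    accumulates (averages) all of them."""
    return np.mean(focus_stack, axis=0)

def wiener_deconvolve(img, psf, k=0.01):
    """Recover a sharp image by Wiener deconvolution, assuming the
    sweep yields a (nearly) depth-invariant blur kernel `psf`."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

The regularizer `k` trades noise amplification against sharpness; the real system derives its kernel from the known sweep trajectory rather than assuming one.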

Flexible Depth of Field Results 

There is more information on the flexible depth of field project page, including more sample images (and movies).

 

Generalized Assorted Pixel Camera: 

 

The basic idea is pretty simple: trade off raw RGB resolution for higher dynamic range and multispectral response.  This is done by placing a mask over the sensing elements that provides different levels of transparency and spectral response at different pixels.  Computation is then applied to reconstruct various image types (see paper).
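Here is a minimal sketch of the exposure half of that trade-off, assuming a hypothetical 2x2 mosaic of neutral-density attenuations (the real GAP mosaic also varies spectral filters, which this toy ignores): each pixel sees the scene through a different attenuation, and an HDR estimate is recovered per tile from the unsaturated samples.

```python
import numpy as np

# Hypothetical 2x2 exposure mosaic (attenuation factors chosen for
# illustration only).
MOSAIC = np.array([[1.0, 0.5],
                   [0.25, 0.125]])

def capture(radiance, full_well=1.0):
    """Simulate the coded exposure: attenuate per-pixel, then clip at
    the sensor's full-well capacity."""
    mask = np.tile(MOSAIC, (radiance.shape[0] // 2, radiance.shape[1] // 2))
    return np.clip(radiance * mask, 0.0, full_well), mask

def reconstruct_hdr(coded, mask, full_well=1.0):
    """Recover radiance per 2x2 tile by undoing each pixel's attenuation
    and averaging only the unsaturated samples."""
    est = coded / mask
    valid = coded < full_well * 0.999  # discard clipped pixels
    h, w = coded.shape
    out = np.zeros((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            tile_e = est[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            tile_v = valid[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            out[i, j] = tile_e[tile_v].mean() if tile_v.any() else full_well / mask.min()
    return out
```

Note the resolution cost: a 4x4 coded capture reconstructs to a 2x2 radiance map, which is exactly the trade-off the GAP paper lets you control after the fact.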

Generalized Assorted Pixel Camera
 

We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the trade-off between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters.

More can be found on the project page.

 

Those are my two favorite projects; however, the radial image reconstruction project is also pretty cool (shown below).

Radial Imaging Systems

I think any one of these computational camera projects would be extremely useful as a robot sensor, and combined they'd be even more powerful.  You can find out more about all the projects discussed (and more) at Columbia University's Computer Vision Laboratory website.

 

Comments

This is awesome work! I am surprised no comments are here already. Really impressive. Will try and work with some of your ideas.

Cheers!

—SD

There's been some excitement this week in the world of computational cameras.  A new company called Lytro, which received $50M in venture funding back in June 2011, now has its camera available for (pre)order at just $400 (announcement on TC).

Lytro Camera   Lytro Camera

The Lytro camera uses a pretty standard front-end lens arrangement, but places an array of micro-lenses right before the imaging element (circled in red) to capture a "light field image" that records information about millions of individual light rays, rather than a single image focused at one particular depth. 

Lytro Camera Light Field Sensor

Using DSP algorithms and the "light field image", they're able to compute any number of "standard" images focused at any depth, so that you can programmatically adjust the focus of the image later:
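The standard plenoptic refocusing trick behind this is shift-and-add over sub-aperture views: each view is translated in proportion to its offset from the lens center, then all views are averaged. This sketch uses integer shifts and names of my own choosing; the real algorithm interpolates sub-pixel shifts.

```python
import numpy as np

def refocus(subaperture_views, coords, alpha):
    """Shift-and-add refocusing of a light field.

    subaperture_views: list of 2D images, one per lens sub-aperture.
    coords: (u, v) offset of each sub-aperture from the lens center.
    alpha: scale factor selecting the virtual focal plane (0 = no shift).
    """
    acc = np.zeros_like(subaperture_views[0], dtype=float)
    for view, (u, v) in zip(subaperture_views, coords):
        du = int(round(alpha * u))
        dv = int(round(alpha * v))
        acc += np.roll(np.roll(view, du, axis=0), dv, axis=1)
    return acc / len(subaperture_views)
```

Sweeping `alpha` after capture is what lets you "click to refocus" on a Lytro image: scene points at the chosen depth align across views and add constructively, while everything else blurs out.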

Lytro Camera Focus Adjust Lytro Camera Focus Adjust

On the Lytro website, they have a fun interactive application that lets you change the focus on a number of images; check it out.  If you want to learn more about the technical details behind this computational camera, this earlier paper, "Light Field Photography with a Hand-held Plenoptic Camera," might be helpful.

In any case, this will certainly help noob photographers -- like me!

 

—Travis Deyle
