What is Computational Photography?

As R. Fergus indicates in the description of the Computational Photography course he taught in the spring of 2010 at New York University (http://cs.nyu.edu/~fergus/teaching/comp_photo/index.html), Computational Photography is a new and exciting area at the intersection of Computer Graphics and Computer Vision that uses computers to go beyond the limitations of conventional photography and produce new and better images of the world around us. R. Raskar explained in his 2009 Computational Camera course at the MIT Media Lab (http://cameraculture.media.mit.edu/Fall2009ComputationalCamera) that the goal of a computational camera is to digitally capture the essence of visual information by exploiting the synergy of optics, lighting, sensors and processing. The aim of Computational Photography is therefore, as A. Efros explained in his 2008 Computational Photography course at Carnegie Mellon University (http://graphics.cs.cmu.edu/courses/15-463/2008_fall/), to overcome the limitations of the traditional camera by using computational techniques to produce a richer, more vivid, perhaps more perceptually meaningful representation of our visual world.

On the industry side, Kodak (http://www.kodak.com) explains that computational photography combines image processing, digital sensors, optics and lighting to go beyond the traditional image-capture paradigm and give the acquired images greater richness. The techniques used in computational photography range from changes in the capture system to post-processing reconstruction. For example, changes to the capture system may include spatial or temporal modulation of the aperture, modulation of the flash, and repositioning of the optical path. Many computational photography techniques are implemented by capturing multiple exposures, each with a different set of capture parameters, and combining them to produce the desired final image.
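To make this multiple-exposure strategy concrete, here is a minimal sketch (our own illustration, not drawn from the cited sources) that merges a bracket of differently exposed images of a static scene into a single radiance estimate. It assumes a linear sensor response and pre-aligned float images in [0, 1]; a real pipeline would also recover the camera response curve and align the frames:

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge differently exposed images of a static scene into a
        radiance estimate (simplified: linear sensor response assumed)."""
        numerator = np.zeros_like(images[0], dtype=np.float64)
        denominator = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            # Hat weighting: trust mid-tones, distrust clipped or
            # nearly black pixels.
            w = 1.0 - np.abs(2.0 * img - 1.0)
            numerator += w * img / t  # each frame estimates radiance as img / t
            denominator += w
        return numerator / np.maximum(denominator, 1e-8)

For example, merge_exposures([im_short, im_mid, im_long], [1/500, 1/60, 1/8]) would fuse a three-shot bracket into one high-dynamic-range estimate.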

Computational Photography is an area of intense research nowadays. B. Hayes, in his 2008 article on Computational Photography in American Scientist (http://www.americanscientist.org/issues/pub/computational-photography), notes that cameras no longer just capture photons; they compute pictures. He points out that we live in a light field: at each point in space, light rays arrive from all directions, and some computational photography techniques work by extracting more information from this light field, information that allows us to increase the depth of field, refocus different regions of the scene, eliminate motion blur, or obtain images of the same scene from different angles.
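To make the refocusing idea concrete, the following sketch (our own integer-pixel approximation; the (U, V, H, W) array layout is an assumption) synthesizes a photograph focused on a new plane by shifting each sub-aperture view of the light field in proportion to its offset from the aperture centre and averaging:

    import numpy as np

    def refocus(light_field, alpha):
        """Shift-and-add synthetic refocusing.

        light_field: array of shape (U, V, H, W), one sub-aperture
            view per aperture position (u, v).
        alpha: pixels of shift per unit of aperture offset;
            alpha = 0 reproduces the original focal plane.
        """
        U, V, H, W = light_field.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        out = np.zeros((H, W), dtype=np.float64)
        for u in range(U):
            for v in range(V):
                # Shift each view according to its aperture offset,
                # then accumulate; averaging simulates a full aperture.
                dy = int(round(alpha * (u - cu)))
                dx = int(round(alpha * (v - cv)))
                out += np.roll(np.roll(light_field[u, v], dy, axis=0), dx, axis=1)
        return out / (U * V)

Sweeping alpha moves the synthetic focal plane through the scene, while restricting the sum to fewer (u, v) views simulates a smaller aperture and hence a larger depth of field.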

Meanwhile, M. Levoy at Stanford University presented in August 2009 the Frankencamera (http://news.stanford.edu/news/2009/august31/levoy-opensource-camera-090109.html), an open-source camera (whose development is funded by Nokia, Adobe Systems, Kodak, and Hewlett-Packard) that, according to its authors, could change photography by giving developers the ability to modify the camera's characteristics and create new capture opportunities. As M. Levoy explained, developers would get faster results because they'd have total control:

"Some cameras have software development kits that let you hook up a camera with a USB cable and tell it to set the exposure to this, the shutter speed to that, and take a picture, but that’s not what we’re talking about. What we’re talking about is, tell it what to do on the next microsecond in a metering algorithm or an autofocusing algorithm, or fire the flash, focus a little differently and then fire the flash again — things you can’t program a commercial camera to do."

Why is Computational Photography emerging so strongly in teaching, research and development? In our opinion, increasing the resolution of current consumer cameras is no longer a priority; what is needed is to provide them with new features that add value (face and smile detection are just two examples of features recently added to cameras). Computational Photography seeks to add new functions to digital cameras. Some of these functions can be implemented in today's cameras, while others require modifying the cameras themselves. Research and development on some of these features, including the camera modifications needed to make them possible, are the objectives of this project and are described below.

Research and development areas in Computational Photography addressed in this project

The set of techniques (features) that could be included within Computational Photography is very broad and includes: high dynamic range imaging; processing of image pairs taken with and without flash; lighting synthesis; intelligently reducing or enlarging images; obtaining sharp images by combining images with different exposure times, apertures or sensitivities; coding the exposure to eliminate motion blur; increasing the depth of field using a coded aperture; obtaining the light field using programmable apertures or plenoptic cameras; etc. (see for instance [Bimber06], [Raskar09a], [RaskarTumblin10]). In this project we intend to work on the following Computational Photography problems (functionalities): a) obtaining sharp images by combining images taken with different exposure times and/or different sensitivities; b) increasing the depth of field using a coded aperture; c) obtaining the light field using a programmable coded aperture.
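As a hint of the computation behind problem (b), the following sketch (our own illustration, assuming a known, spatially invariant point spread function) inverts a coded-aperture blur with a Wiener filter. Broadband coded apertures are designed so that the blur's frequency response stays away from zero, which is precisely what keeps this inversion well conditioned:

    import numpy as np

    def wiener_deconvolve(blurred, psf, snr=100.0):
        """Recover a sharp image from a known, spatially invariant
        coded-aperture blur via Wiener deconvolution.

        blurred: observed image (H, W), float.
        psf: point spread function of the coded aperture,
            normalized to sum to 1.
        snr: assumed signal-to-noise ratio (regularization strength).
        """
        psf_pad = np.zeros_like(blurred)
        ph, pw = psf.shape
        psf_pad[:ph, :pw] = psf
        # Centre the PSF at the origin so the FFT phase is correct.
        psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
        Hf = np.fft.fft2(psf_pad)
        Bf = np.fft.fft2(blurred)
        # Wiener filter: conj(H) * B / (|H|^2 + 1/SNR).
        Xf = np.conj(Hf) * Bf / (np.abs(Hf) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(Xf))

Problem (a) reduces to a robust variant of the exposure-merging sketch given earlier, and problem (c) produces exactly the kind of light-field data that the refocusing sketch consumes.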

These three problems have been chosen on the basis of their potential for implementation in current cameras and the state of the art in optics and mechatronics: the first can be addressed by taking several consecutive images or by using bracketing; the second only requires adding a mask to current lenses; the third requires a more substantial modification of current lenses, but it is feasible and provides much richer information about the captured scene.