
Computational Photography


January 24, 2012, SPIE Electronic Imaging Conference, Burlingame, CA: William Freeman from the Massachusetts Institute of Technology talked about computational photography. Today's digital cameras have replaced only the imaging medium, substituting a semiconductor sensor for film; most of the other camera functions are still essentially analog.

In computational photography the user can modify any or all of the components: optics, sensor, and computer. In a fully digital camera these functions can be adjusted after the exposure, allowing the user to change the optical variables after the fact. Computer-based enhancements such as auto-focus and anti-shake exist mainly to overcome the limitations of the analog model.


[Figure: camera cutaway model]



For large depth of field, the imaging system can use wavefront coding. A phase plate at the aperture defocuses the image so that every scene depth produces essentially the same point spread function; the sensor capture and post-processing then deblur the resulting image.
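
As a rough illustration of the post-processing half of that idea (not the actual system described in the talk), the sketch below blurs a test image with an assumed depth-invariant PSF and recovers it with a standard Wiener deconvolution; the Gaussian PSF, the blocky test image, and the noise-to-signal ratio are all stand-in assumptions.

```python
# Minimal sketch of the wavefront-coding recovery step: if the optics make the
# point spread function (PSF) the same at every depth, one known deconvolution
# restores the whole image. The Gaussian PSF, the blocky test image, and the
# noise-to-signal ratio are illustrative assumptions, not the real optics.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-3):
    """Deconvolve `blurred` with a known PSF using a Wiener filter."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

rng = np.random.default_rng(0)
sharp = np.kron(rng.random((16, 16)), np.ones((8, 8)))    # blocky 128x128 test image
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))               # depth-invariant Gaussian PSF
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)

rms = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("RMS error, blurred :", round(rms(blurred, sharp), 4))
print("RMS error, restored:", round(rms(restored, sharp), 4))
```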

 

In a typical camera, the depth of field is a function of the focal length and the aperture. Coding the aperture with a non-circular pattern of openings allows the depth of field to be modified computationally; changing the size or shape of the openings changes the blur produced at each depth. This differs from conventional depth determination: with an ordinary circular aperture it is hard to estimate the depth of objects away from the focal plane, which in turn makes it hard to identify the correct blur scale needed to correct the image.
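
For reference, the dependence of depth of field on focal length and aperture can be computed from the standard thin-lens hyperfocal-distance formulas. The sketch below does so for an assumed 50 mm lens, a 3 m focus distance, and a 0.03 mm circle of confusion; these are illustrative values, not numbers from the talk.

```python
# Standard thin-lens depth-of-field calculation, showing how DOF depends on the
# focal length and the aperture (f-number). The 0.03 mm circle of confusion and
# the example distances are assumptions for illustration.
def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.03):
    """Return the (near, far) limits of acceptable sharpness, in mm."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= hyperfocal:
        return near, float("inf")
    far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
    return near, far

for f_number in (2.8, 8, 22):
    near, far = depth_of_field(focal_mm=50, f_number=f_number, focus_dist_mm=3000)
    print(f"f/{f_number}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")
```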

In comparison, the aperture in a computational camera is coded, which reduces the uncertainty in identifying the blur scale for correction. Depth is extracted from the defocused image data, and the defocused image is deconvolved using the known, optimized aperture pattern. The process also has a prior constraint: the resulting image has to look like a natural picture, so the data are regularized during processing to produce the final image. The aperture code can still create artifacts when the wrong blur scale is assumed, so the processing relies on well-behaved parts of the image to recover both the image and the depth.
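
A simplified way to see how a coded aperture disambiguates blur scale (a toy sketch, not the optimized system from the talk): the code suppresses a different set of spatial frequencies at each defocus scale, so the candidate scale whose suppressed frequencies are actually missing from the captured image is the best depth estimate. The 7x7 binary pattern, the candidate scales, and the noise level below are assumptions.

```python
# Toy coded-aperture depth estimation: find the blur scale whose "zero"
# frequencies line up with the dips in the observed image spectrum.
import numpy as np

def coded_psf(pattern, scale, shape):
    """Magnify the aperture pattern by `scale` (the defocus amount) into a padded PSF."""
    k = np.kron(pattern, np.ones((scale, scale)))
    psf = np.zeros(shape)
    psf[:k.shape[0], :k.shape[1]] = k
    return psf / psf.sum()

rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, (7, 7)).astype(float)   # assumed binary aperture code
pattern[3, 3] = 1.0                                  # keep at least one opening
sharp = rng.random((256, 256))
true_scale = 3

observed = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                                np.fft.fft2(coded_psf(pattern, true_scale, sharp.shape))))
observed += 0.002 * rng.standard_normal(observed.shape)
obs_power = np.abs(np.fft.fft2(observed)) ** 2

scores = {}
for scale in (1, 2, 3, 4, 5):                        # candidate blur scales ~ candidate depths
    K = np.abs(np.fft.fft2(coded_psf(pattern, scale, sharp.shape)))
    suppressed = K < np.percentile(K, 2)             # frequencies this scale's code blocks
    scores[scale] = obs_power[suppressed].mean()     # true scale -> dips line up -> low score
print("estimated blur scale:", min(scores, key=scores.get))
```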

Motion blur is another artifact that can be removed by specialized processing. The blur is modeled as the convolution of the motion path with the sharp image, and it is both hard to estimate and hard to invert. One technique is the flutter shutter: a shutter that opens and closes rapidly in a pseudo-random pattern during the exposure. The open/close code is designed to have a non-zero response at every frequency in its Fourier transform, which keeps the blur invertible.
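
A minimal sketch of why the shutter code matters, using a simple random search as a stand-in for the published optimized codes: a conventional shutter that stays open for the whole exposure has exact zeros in its Fourier response (those frequencies are unrecoverable), while a good open/close code keeps every frequency above a usable floor.

```python
# Compare the weakest Fourier response of a conventional "always open" shutter
# with that of a fluttered (coded) shutter. The 52-chop code length and the
# random-search code selection are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 52                                    # number of shutter "chops" per exposure

def min_response(code, length=512):
    """Smallest Fourier magnitude of the motion blur this shutter code produces."""
    return np.abs(np.fft.rfft(code, n=length)).min()

box = np.ones(n)                          # conventional shutter: open the whole time
# crude stand-in for the published code search: keep the best of many random codes
flutter = max((rng.integers(0, 2, n).astype(float) for _ in range(2000)),
              key=min_response)

print("box shutter, weakest frequency:    ", round(min_response(box), 4))
print("flutter shutter, weakest frequency:", round(min_response(flutter), 4))
```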

Measuring motion in this way is equivalent to modulating the light source. A scene is illuminated by both direct and global light. The global component includes interreflections, volumetric scattering, and subsurface diffusion; all of these have a low-frequency response and are usually characterized as "soft" light. High-frequency lighting, in contrast, is patterned; by alternating the pattern, sections of the image can be subtracted to separate the direct component from the global illumination and to recover depth.
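
A small sketch of that patterned-illumination separation, under the usual assumption of a 50%-on high-frequency pattern and its complement: per pixel, the maximum of the two shots contains the direct light plus half the global light, and the minimum contains only half the global light. The checkerboard pattern and the toy scene below are assumptions for illustration.

```python
# Direct/global separation from two shots under complementary high-frequency
# patterns: direct = max - min, global = 2 * min (for a 50% duty-cycle pattern).
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
direct_true = rng.random((h, w))          # toy "direct" component
global_true = 0.3 * np.ones((h, w))       # toy low-frequency "global" component

yy, xx = np.mgrid[:h, :w]
checker = ((xx + yy) % 2).astype(float)   # high-frequency illumination pattern
shots = []
for pattern in (checker, 1.0 - checker):
    # direct light follows the pattern; global light only sees the 50% average level
    shots.append(pattern * direct_true + 0.5 * global_true)

stack = np.stack(shots)
i_max, i_min = stack.max(axis=0), stack.min(axis=0)
direct_est = i_max - i_min                # direct component estimate
global_est = 2.0 * i_min                  # global component estimate
print("max error, direct:", np.abs(direct_est - direct_true).max())
print("max error, global:", np.abs(global_est - global_true).max())
```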

Direct lighting is like older CGI, where all of the lighting comes straight from the source. Global illumination is more akin to ray tracing, where light arrives from many positions after bouncing around the scene. Volumetric and sub-surface scattering come from light bouncing around inside the material itself. Examples include bell peppers and skin: both have significant sub-surface scattering and get much of their appearance from indirect light.

To address motion blur, one technique is to use projected light in combination with normal lighting. A flash image has areas of over-brightness and other areas with no detail; by blending the flash and non-flash images, you can arrive at a simple set of images that can be resolved with a phase-shift function. Alternatively, if the image sensor is moved along a parabolic path during the exposure, the combination of the flutter shutter and the sensor motion produces a series of stripes that can be processed with a known impulse response to recover a still image. This is wavefront coding applied over time rather than space, and it produces an image with essentially no motion blur. The blur reduction only works along one dimension of motion, however, and is very hard to extend to 2-D.
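
To illustrate the parabolic-motion idea in 1-D (a toy simulation; the sweep acceleration, exposure time, and object speeds are assumed values): with the sensor sweeping, the blur kernel seen by an object changes very little with its speed, whereas without the sweep the blur changes drastically.

```python
# Sketch of 1-D motion-invariant capture with a parabolic sensor sweep: objects
# moving at different speeds within the design range see nearly the same blur
# kernel, so a single known deconvolution works for all of them.
import numpy as np

accel = 400.0                        # sensor sweep acceleration, px/s^2 (assumed)
exposure = 1.0                       # exposure time, s
t = np.linspace(-exposure / 2, exposure / 2, 200_001)
bins = np.arange(-80.0, 220.0, 1.0)  # 1 px bins for the blur kernel

def psf(object_speed, sweep=True):
    """Blur kernel for an object at `object_speed` px/s, with or without the sweep."""
    rel = (accel * t**2 if sweep else 0.0) - object_speed * t
    hist, _ = np.histogram(rel, bins=bins)
    kernel = hist / hist.sum()
    return np.roll(kernel, -int(np.argmax(kernel)))   # align kernels at their peak

for speed in (0.0, 20.0, 40.0, 60.0):
    swept = np.abs(psf(speed) - psf(0.0)).sum()
    static = np.abs(psf(speed, sweep=False) - psf(0.0, sweep=False)).sum()
    print(f"speed {speed:4.0f} px/s | PSF change with sweep {swept:.3f} | without {static:.3f}")
```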

Another way to manage images is to have the sensor capture all of the light rays passing through the lens. A micro-lens array trades spatial resolution for ray directionality: each sub-pixel records the light arriving from a different portion of the lens, so the sensor captures the full field of view as a 4-D data set. The 2-D spatial and 2-D directional information allows the user to digitally refocus on any portion of the light field after the exposure.
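
A minimal shift-and-add refocusing sketch on a synthetic light field (the 5x5 angular sampling, the single textured plane, and the refocus parameter alpha are all assumptions): shifting each sub-aperture view in proportion to its position on the lens and then averaging brings a chosen depth into focus.

```python
# Shift-and-add refocusing of a 4-D light field L(u, v, x, y): to focus at a
# different depth, shift each sub-aperture view by alpha * (u, v) and average.
import numpy as np

def refocus(lightfield, alpha):
    """Average sub-aperture views shifted in proportion to their (u, v) position."""
    n_u, n_v, h, w = lightfield.shape
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            du = int(round(alpha * (u - n_u // 2)))
            dv = int(round(alpha * (v - n_v // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (n_u * n_v)

# build a toy light field: a textured plane at a disparity of 2 px per angular step
rng = np.random.default_rng(0)
texture = rng.random((64, 64))
disparity = 2
lf = np.zeros((5, 5, 64, 64))
for u in range(5):
    for v in range(5):
        lf[u, v] = np.roll(texture, (disparity * (u - 2), disparity * (v - 2)), axis=(0, 1))

blurry = refocus(lf, alpha=0.0)         # focused at the wrong depth: averaging blurs the plane
sharp = refocus(lf, alpha=-disparity)   # shifts cancel the disparity: the plane comes into focus
print("contrast when misfocused:", round(blurry.std(), 3), " refocused:", round(sharp.std(), 3))
```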

Finally, there is work in image restoration. This work starts from large collections of images and builds statistical models of sharpness from them. The resulting data provide a signal-processing model of sharp images: the gradient distribution of a sharp image has a sharp peak with heavy tails, while a blurred image has a bell-curve distribution.
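
A quick numerical illustration of that gradient statistic (using an assumed blocky test image and a simple box blur, not the database statistics from the talk): the kurtosis of the gradient distribution is large for the sharp image and drops sharply once the image is blurred.

```python
# Compare gradient statistics of a sharp image and a blurred copy: sharp images
# have heavy-tailed, strongly peaked gradient distributions (mostly flat regions
# plus a few strong edges); blurring spreads them toward a bell curve.
import numpy as np

rng = np.random.default_rng(0)
# piecewise-constant "sharp" image: flat 16x16 blocks with random intensities
sharp = np.kron(rng.random((8, 8)), np.ones((16, 16)))
# simple separable 9x9 box blur (edge wrap-around effects ignored for this sketch)
kernel = np.ones(9) / 9
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, sharp)
blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def gradient_kurtosis(img):
    """Excess kurtosis of horizontal gradients: large for sharp, much smaller when blurred."""
    g = np.diff(img, axis=1).ravel()
    g = g - g.mean()
    return np.mean(g**4) / np.mean(g**2) ** 2 - 3.0

print("sharp image gradient kurtosis:  ", round(gradient_kurtosis(sharp), 1))
print("blurred image gradient kurtosis:", round(gradient_kurtosis(blurred), 1))
```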

There are three sources of information for sharpening an image: the captured image and the extracted blur kernel provide most of the input, with prior statistics from the image database and a set of trial blur estimates supplying the rest. Post-capture processing can also magnify motion relative to a zero-phase reference.
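
If the motion-magnification remark refers to amplifying phase changes between frames, a heavily simplified 1-D sketch is below; the sinusoidal test signal, the 0.2-sample shift, and the 10x gain are assumptions, and real systems amplify localized phases (e.g., in a steerable pyramid) rather than a single global Fourier transform.

```python
# Toy phase-based motion magnification in 1-D: a sub-pixel shift between two
# frames shows up as a small per-frequency phase difference; scaling that phase
# difference exaggerates the motion.
import numpy as np

n = 256
x = np.arange(n)
frame0 = np.sin(2 * np.pi * x / 32)
frame1 = np.sin(2 * np.pi * (x - 0.2) / 32)        # frame shifted by 0.2 samples

F0, F1 = np.fft.rfft(frame0), np.fft.rfft(frame1)
phase_delta = np.angle(F1) - np.angle(F0)          # tiny per-frequency phase change
gain = 10.0
magnified = np.fft.irfft(np.abs(F1) * np.exp(1j * (np.angle(F0) + gain * phase_delta)), n=n)

# the reconstructed frame now shows a 10x larger displacement (about 2 samples)
shift_est = np.argmax(np.correlate(magnified, frame0, mode="full")) - (n - 1)
print("apparent shift after magnification (samples):", shift_est)
```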

Changing the camera from a digitally assisted analog device to a fully digital one creates a new design space. Problems tied to fixed apertures, focal lengths, and single-shutter capture become manipulations performed on a computer in post-processing. This change will give the photographer far more freedom to find the best image.
