Motion Deblur of Images
Image taken of Gatwick Airport at 07:30 on 4th January 2008
The blurring of images due to motion of the camera on a moving platform is an important issue, since a low-flying unmanned aircraft will need to take a series of images in rapid succession to build up an overall picture of an area. Deblurring is further complicated at low altitude because the objects in view lie at different distances from the moving camera.
We test a Nikon D40X body with an 18-200mm f/3.5-5.6G IF-ED AF-S VR DX Zoom-NIKKOR lens. The ISO value is set at the highest supported value of 1,600. The lens carries a Hoya circular polarizing filter to reduce the reflections from windows.
The 18-200mm f/3.5-5.6G IF-ED AF-S VR DX Zoom-NIKKOR zoom telephoto lens.
The Nikon D40X with the 18 mm - 200 mm lens set at a focal length of 18 mm
The Nikon D40X with the 18 mm - 200 mm lens set at a focal length of 200 mm
These photographs were taken from the 15:00 train from Waterloo Station in London to Havant on Tuesday 20th November, 2007, using the Nikon D40X digital SLR camera with the 18 - 200 mm zoom lens.
Due to the low light level resulting from the overcast sky in the late afternoon, the exposure lengths are long resulting in considerable image blurring.
There is also quite a significant amount of reflection from the windows, even though we are using a circular polariser in front of the lens...
August 30, 2006 - Following this month’s 33rd Annual Siggraph Conference in Boston, MA, a research team at Mitsubishi Electric is catching the attention of camera manufacturers for their photo motion deblurring technology, called a flutter shutter camera.
The flutter shutter camera is a modified camera that can capture moving objects at an exposure time of over 50 milliseconds, like high speed motion cameras. Using a coded exposure sequence, the new flutter shutter camera could recover text from a speeding car and sharpen images, according to the researchers.
In early August, three Mitsubishi Electric researchers presented the abstract, “Coded Exposure Photography: Motion Deblurring using Fluttered Shutter,” at Siggraph, the largest computer graphics conference. After one year of research and development, Mitsubishi Electric Research Lab (MERL) senior researcher Ramesh Raskar, MERL visiting researcher Amit Agrawal, and Northwestern University computer science assistant professor Jack Tumblin launched the new prototype with the goal of deblurring photos.
The prototype is made with an 8 megapixel Canon PowerShot Pro1, although it could be applied to any camera. Instead of leaving the shutter open during one exposure duration, the camera’s attached lens filter flutters the shutter multiple times during a single exposure, based on a carefully chosen binary sequence.
Raskar, who celebrates his sixth year at Mitsubishi this month, woke up one day with the idea, according to the researcher. Raskar said the fluttered shutter method was “so simple” and wondered if it could work.
Above: Mitsubishi Electric Research Lab (MERL) senior researcher Ramesh Raskar with a flutter shutter prototype
“We have UV filters. We have polarizing filters. What about time filters?” said senior research scientist Ramesh Raskar at Mitsubishi Electric. Just a few weeks ago, Raskar and the Mitsubishi team saw the dream actualized with its official introduction. This time filter is made up of an external ferro-electric shutter that flutters based on a rapid binary sequence.
Traditional cameras use a single shuttered exposure, in which moving subjects produce blurry images. On standard cameras, object motion amounts to convolution of the sharp image with a temporal box filter, according to the researchers, which destroys high-frequency spatial detail in the image. The coded exposure camera, on the other hand, replaces the box filter with a broadband filter that preserves those spatial frequencies and image details, so that deconvolution can recover them.
A post-capture linear algorithm is then applied to recover image sharpness. The algorithm is “fundamentally different from other deblurring algorithms,” the authors of the abstract stated. It amounts to solving the linear system Ax = b (a few lines of MATLAB code), the simplest deconvolution possible, according to the researchers. Unlike Photoshop or other deblurring methods, this deconvolution does not produce ringing or other deconvolution artifacts, the halo-like distortions around edges in the image.
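The idea can be sketched in a few lines of NumPy. This is a toy 1-D model, not MERL's actual code, and the binary code below is illustrative rather than the published shutter sequence: blurring is a linear system b = Ax, where A is built from the shutter's on/off pattern, and deblurring solves Ax = b by least squares. A plain box exposure makes A badly conditioned; a broadband binary code keeps it comfortably invertible.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                               # length of the 1-D "sharp" signal
x = np.zeros(n)
x[20:28] = 1.0                       # a simple bright bar as the scene

def conv_matrix(code, n):
    """Build the blur operator A so that b = A @ x models an object
    moving one pixel per time slice while the shutter follows `code`."""
    k = len(code)
    A = np.zeros((n + k - 1, n))
    for i, c in enumerate(code):
        A[i:i + n, :] += c * np.eye(n)
    return A / code.sum()            # normalise total collected light

box   = np.ones(16)                                      # ordinary open shutter
coded = np.array([1, 0, 1, 0, 0, 1, 1, 1,
                  0, 1, 1, 0, 0, 1, 0, 1], float)        # illustrative code

results = {}
for name, code in [("box", box), ("coded", coded)]:
    A = conv_matrix(code, n)
    b = A @ x + 1e-3 * rng.standard_normal(A.shape[0])   # blurred + noise
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)        # solve Ax = b
    results[name] = (np.linalg.cond(A), np.linalg.norm(x_hat - x))
    print(name, "condition number:", results[name][0],
          "reconstruction error:", results[name][1])
```

The box exposure's near-singular operator amplifies the noise during inversion, while the coded exposure's operator stays well conditioned, which is the substance of the "broadband filter" claim above.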
In light of a deblurring technology released by the Massachusetts Institute of Technology at Siggraph, Mitsubishi has spoken with the group led by principal scientist Rob Fergus of MIT. [For more information, refer to http://www.digitalcamerainfo.com/content/Photo-deblurring-Research-Debuts-at-Siggraph-Conference-.htm.] The “best solution,” as Raskar states, would be for the two Cambridge, MA research teams to combine anti-shake and motion deblurring methods.
Based on the abstract “Removing Camera Shake from a Single Photograph,” the MIT method handles hand-held camera shake. The coded exposure method, on the other hand, can correct extreme amounts of motion blur from moving objects, according to Raskar. “There has not been a photo in the world that has that much blur and has been brought back to recognition,” said Raskar of the images produced by the fluttered shutter camera.
The fluttered shutter camera and algorithm can handle such extreme blurs that it can retrieve details from moving cars. With possible applications in law enforcement, the police could record a plate number from a speeding vehicle using the fluttered shutter camera.
The camera can even capture images from a flying plane, according to Raskar. With large-scale aerial photography applications, such as Google Maps, the camera can use an aperture ten times smaller than currently used on the expensive aerial cameras. For planes mapping the entire earth, this would reduce time and camera costs significantly.
Some may wonder why not simply use a short shutter speed, such as the action and sports modes used for shooting fast-moving objects. The difference, as Raskar explains, is that the short action-shot shutter speed needs a large aperture to gather enough light, whereas the fluttered shutter camera does not.
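The trade-off can be checked with simple exposure arithmetic (illustrative numbers, not figures from the article): collected light is proportional to shutter time divided by the square of the f-number, so a 16x shorter shutter forces the f-number down by a factor of 4, with the shallow depth of field that implies, while a flutter code open half the time costs only about one stop at the unchanged aperture.

```python
import math

# Exposure collected is proportional to t / N^2
# (t = shutter time, N = f-number; illustrative values).
t_long, N_long = 1 / 15, 8.0          # a long exposure at f/8
t_short = t_long / 16                 # "sports mode": 16x shorter shutter
N_short = N_long / math.sqrt(16)      # aperture must open up to f/2

same_exposure = math.isclose(t_long / N_long**2, t_short / N_short**2)

# A flutter code open for half its chops keeps half the light of the
# full exposure at the SAME aperture: about one stop lost.
stops_lost = math.log2(2)

print(N_short, same_exposure, stops_lost)   # 2.0 True 1.0
```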
The fluttered shutter could be implemented on any “off-the-shelf camera” with only a slight modification, according to the authors of the abstract. The fluttered shutter could also be installed internally within the camera, making the external lens filter unnecessary. Although the Mitsubishi team is consulting with camera manufacturers about future plans, Raskar anticipates it will take a few years before photo enthusiasts see the technology in consumer cameras. While the prototype costs roughly $500, he predicts the equipment would add only tens of dollars to commercial cameras once it is commercialized. “It always takes time to go into consumer photography,” said Raskar. The Mitsubishi Electric team is also working on a project for correcting out-of-focus blur, although Raskar could not disclose details.
Further information at: http://www.merl.com/areas/deblur & http://www.cfar.umd.edu/~aagrawal/sig06/sig06.html
Unfortunately, there are some problems with the use of ferroelectric liquid crystals...
Consider the FE-1 goggles, shown above, from Cambridge Research Systems Limited, which use a large-area ferroelectric liquid crystal:
In the hybrid-camera motion deblurring system developed by Moshe Ben-Ezra and Shree K. Nayar in the above-mentioned paper, a high frame rate, relatively low resolution camera (here a Sony DV camcorder) was used alongside a lower frame rate, higher resolution camera (here a 3 Mpixel Sony Coolpix camera). The images from the high frame rate camera could be used to determine the movement of the centre of the camera during the exposure, i.e. to identify the localised Point Spread Function (PSF).
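This PSF-estimation step can be sketched in NumPy (an illustrative toy, not Ben-Ezra and Nayar's code): the per-frame displacements reported by the fast camera are accumulated into a PSF grid, and offsets where the camera lingers receive proportionally more energy.

```python
import numpy as np

# Illustrative camera-shake trajectory: per-time-slice (dx, dy) offsets
# in pixels, as the high frame rate camera would report during one
# exposure of the high resolution camera.
trajectory = [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2), (2, 2), (2, 3)]

def psf_from_trajectory(traj, size=7):
    """Accumulate dwell time at each offset into a normalised PSF grid.
    Offsets are shifted to the grid centre; a repeated offset (the
    camera lingering there) gets proportionally more energy."""
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in traj:
        psf[c + dy, c + dx] += 1.0
    return psf / psf.sum()           # total energy sums to 1

psf = psf_from_trajectory(trajectory)
print(psf[3 + 2, 3 + 2])             # offset (2, 2) was visited twice
```

The dwell-time weighting is exactly the information that is colour coded in the paper's PSF figures: the PSF records not just where the camera pointed but for how long.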
High resolution image of a building, with considerable blur of the letters
The complexity of the Point Spread Function, showing energy distribution and camera wander during exposure
The de-blurred image, where the PSF information was used in the image de-blur action
"Ground truth" image obtained using the high resolution camera on a stabilised tripod
Another example of a Point Spread Function: a night photograph of Gatwick Airport. What is not obvious from this photograph is how long the camera was pointing at each pixel location, something that is colour coded in the PSF in the paper by Ben-Ezra and Nayar.
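Once a PSF is in hand, the deblurring step itself can be sketched with a standard Wiener deconvolution in NumPy. This is a generic textbook method under a circular-convolution, known-PSF assumption, not the specific algorithm of the paper:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-4):
    """Frequency-domain Wiener deconvolution: divide by the PSF's
    spectrum, damped by k where the spectrum is weak, which is what
    keeps a naive inverse filter from amplifying noise."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Sharp test scene and a small linear-motion PSF.
x = np.zeros((32, 32)); x[10:20, 10:20] = 1.0
psf = np.zeros((32, 32)); psf[0, :5] = 1 / 5     # 5-pixel horizontal blur

# Blur by circular convolution (FFT product), then deblur.
b = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))
x_hat = wiener_deconvolve(b, psf)
print(np.abs(x_hat - x).max())
```

With a PSF this short and no added noise, the reconstruction is close to exact; the measured PSF from the hybrid camera would simply replace the synthetic one here.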