For years, researchers have struggled to understand how the information in a single image can be used to determine how far objects are from the focus distance, a feat many believed only human and animal visual systems could accomplish.
Now, however, University of Texas at Austin researchers believe they have found a way to recreate the process of focusing without the trial and error previously required.
Johannes Burge, a postdoctoral fellow in the College of Liberal Arts' Center for Perceptual Systems and co-author of the study, explained that a camera's auto-focus system has until now resembled the human eye, but while human focusing rarely makes mistakes, cameras often do, hence the blurry images people often see through the lens.
He stated that the study is significant because a statistical algorithm can now determine focus error, the amount by which a lens must be refocused to make the image sharp, from a single image and without trial and error.
"Our research on defocus estimation could deepen our understanding of human depth perception," Dr Burge said.
He noted that the results could also improve auto-focusing in digital cameras, as the team used basic optical modelling and well-understood statistics to show that there is information in some images that cameras have yet to tap.
Wilson Geisler, director of the Center for Perceptual Systems and co-author of the study, said that experts are now one step closer to understanding how humans use "defocus blur" to both estimate depth and refocus their eyes, as well as how many small animals use defocus as their primary depth cue.
"The pattern of blur introduced by focus errors, along with the statistical regularities of natural images, makes this possible," he added.
Further studies will now put these findings to the test and could lead to innovations in both ocular research and camera development.
by Martin Burns