James Hays and Alexei Efros, a research team from Carnegie Mellon University, have developed an algorithm that analyzes millions of images posted on photo-sharing sites such as Flickr in order to replace a portion of an image that has been obstructed or that the user wants removed. The algorithm then ranks the candidate images by how well the orientation of the objects, the light source, the height of the camera, and the colour shades match the original.

Photo tool could fix bad images: by Mark Ward for BBC Technology News

To find suitable matching elements, the research duo’s algorithm looks through a database of 2.3 million images culled from Flickr.

“We searched for other scenes that share as closely as possible the same semantic scene data,” said Mr. Hays, who has been showing off the project at the computer graphics conference Siggraph, in San Diego.

In this sense “semantic” means composition. So a snap with a lake in the foreground, hills across the middle band and a sunset above has, as far as the algorithm is concerned, very different “semantics” from one of a city with a river running through it.
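The idea of comparing images by composition can be sketched in code. The published method uses a more sophisticated scene descriptor, but as a simplified illustration, one could summarise each image by the average colour of the cells in a coarse grid, so that two photos with sky above and water below come out close together, while a photo with the layout inverted comes out far away. The function names here are illustrative, not the researchers' actual code:

```python
import numpy as np

def scene_descriptor(img, grid=4):
    """Coarse 'semantic layout' descriptor: the mean colour of each
    cell in a grid x grid partition of the image (a toy stand-in for
    the scene descriptor the researchers describe)."""
    h, w, _ = img.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            cells.append(cell.mean(axis=(0, 1)))  # average colour of the cell
    return np.concatenate(cells)

def scene_distance(a, b):
    """Euclidean distance between descriptors: a small distance means
    a similar composition (e.g. lake below, hills mid-band, sky above)."""
    return float(np.linalg.norm(scene_descriptor(a) - scene_descriptor(b)))
```

With this measure, two sunset-over-lake photos score as near neighbours even if their pixel values differ, which is exactly the behaviour the broad "semantic" search needs.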

The broad-based analysis cuts out more than 99.9% of the images in the database, said Mr. Hays. The algorithm then picks the closest 200 for further analysis.
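Keeping only the closest 200 of 2.3 million images is a standard nearest-neighbour cull. Assuming every database image has been reduced to a descriptor vector as above, a minimal sketch of that selection step might look like this (the function name and interface are hypothetical):

```python
import numpy as np

def closest_scenes(query_desc, database_descs, k=200):
    """Rank every database descriptor by distance to the query and keep
    only the k nearest -- with k = 200 this discards well over 99.9%
    of a 2.3-million-image collection."""
    dists = np.linalg.norm(database_descs - query_desc, axis=1)
    # argpartition finds the k smallest without fully sorting millions of rows
    nearest = np.argpartition(dists, min(k, len(dists) - 1))[:k]
    return nearest[np.argsort(dists[nearest])]  # indices, best match first
```

Using `argpartition` rather than a full sort matters at this scale: only the 200 survivors need to be ordered, not the whole database.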

Next the algorithm searches the 200 to see if they contain elements, such as hillsides or even buildings, of the right size and colours for the hole to be filled.

The useful parts of the 20 best scenes are then cropped and added to the image being edited so that the best fit can be chosen.
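The final compositing step can also be sketched, with a strong caveat: the full method finds an optimal seam and blends across it, whereas the toy version below simply scores each candidate by how well it matches the original on the ring of pixels just outside the hole, then pastes the winner's pixels into the hole. Both function names are illustrative:

```python
import numpy as np

def border_of(mask):
    """Pixels adjacent to the hole but outside it.
    Assumes the hole does not touch the image edge (np.roll wraps)."""
    grown = mask.copy()
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        grown |= np.roll(mask, shift, axis=axis)
    return grown & ~mask

def fill_hole(image, mask, candidates):
    """Score each candidate scene on the ring around the hole, then
    composite the best-fitting one into the masked region."""
    ring = border_of(mask)
    best = min(candidates,
               key=lambda c: float(np.abs(c[ring] - image[ring]).mean()))
    filled = image.copy()
    filled[mask] = best[mask]  # paste the winning candidate's pixels
    return filled
```

A candidate whose hillside lines up with the original's horizon produces a small border error and wins; one with the wrong layout is rejected, which mirrors the "best fit" selection the article describes.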

Early tests of the algorithm show that only 30% of the images altered with it could be spotted, said Mr. Hays.