
Computer Vision & 3D Modeling via Photos

After watching Clay Shirky’s TED Talk last week, I had to revisit my all-time favorite TED Talk: Blaise Aguera y Arcas demos Photosynth.

This was the first TED Talk that really captured my interest. Photosynth identifies the content within a picture and overlaps it with other pictures of the same subject. The end result is a 3D rendering of any highly photographed object.
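To make that a little more concrete, here is a minimal sketch (not Photosynth’s actual code) of the core idea behind photo-based 3D modeling: detect feature points that two photos of the same object share, match them, and recover the relative camera geometry that a full reconstruction would build on. It assumes OpenCV and hypothetical image file names.

import cv2
import numpy as np

def relative_pose(img_path_a, img_path_b, focal=1000.0):
    """Match ORB features between two photos of the same object and
    estimate the relative camera rotation and translation."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Brute-force Hamming matching with a ratio test to drop weak matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # The essential matrix and recovered pose give the relative camera
    # geometry that a structure-from-motion system triangulates 3D points from.
    E, _ = cv2.findEssentialMat(pts_a, pts_b, focal=focal,
                                pp=(img_a.shape[1] / 2, img_a.shape[0] / 2))
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b)
    return R, t, len(good)

# Example (hypothetical file names):
# R, t, num_matches = relative_pose("notre_dame_1.jpg", "notre_dame_2.jpg")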

I hadn’t seen any other high-profile blog posts on the topic until today, when Google announced a research paper introducing similar technology across 50,000 landmarks worldwide:

Our research builds on the vast number of images on the web, the ability to search those images, and advances in object recognition and clustering techniques. First, we generated a list of landmarks relying on two sources: 40 million GPS-tagged photos (from Picasa and Panoramio) and online tour guide webpages. Next, we found candidate images for each landmark using these sources and Google Image Search, which we then “pruned” using efficient image matching and unsupervised clustering techniques. Finally, we developed a highly efficient indexing system for fast image recognition.
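As a rough illustration of the first step Google describes (and only a sketch, not their implementation), here is how clustering GPS-tagged photos could propose landmark candidates before any visual pruning. It assumes scikit-learn’s DBSCAN and a hypothetical photo record with id, lat, and lon fields.

import numpy as np
from sklearn.cluster import DBSCAN

def propose_landmarks(photos, eps_deg=0.001, min_photos=20):
    """photos: list of dicts with 'id', 'lat', 'lon' (hypothetical schema).
    Returns candidate landmark clusters, most heavily photographed first."""
    coords = np.array([[p["lat"], p["lon"]] for p in photos])

    # DBSCAN groups photos taken within roughly eps_deg of each other;
    # clusters with at least min_photos members become landmark candidates.
    labels = DBSCAN(eps=eps_deg, min_samples=min_photos).fit_predict(coords)

    clusters = {}
    for photo, label in zip(photos, labels):
        if label == -1:          # noise: isolated photos, not a landmark
            continue
        clusters.setdefault(label, []).append(photo["id"])

    return sorted(clusters.values(), key=len, reverse=True)

# Example with a hypothetical photo list:
# candidates = propose_landmarks([{"id": 1, "lat": 48.8584, "lon": 2.2945}, ...])

The candidate clusters would then be pruned by the image matching and indexing steps described in the quote above.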

Additional details from Google can be found here.

I’m really intrigued to see where this goes in the next few years. With the sheer volume of data being uploaded to sites like Facebook, MySpace, Flickr, and YouTube (I imagine Photosynth will soon be able to extract content from video), this tool has the potential to be incredibly powerful. Of course, it also raises some privacy concerns. How long until there is a public 3D display of my house in Street View? Will I be automatically tagged using facial recognition in any picture I’m in (even if I’m a bystander in a crowd)?
