I make fun of my buddy Bill
for workin’ there, but those guys at Microsoft are all right sometimes. Take, for example, Photosynth — their collaborative project with the University of Washington (watch the TED presentation first, then head to the actual Photosynth site).
What an amazing application! Can you imagine what it would be like to be a young geography student with an application like this? An architectural student? A history student looking at a much-photographed moment of history?
The one thing I’d be interested to see, given that they reconstruct real-world, 3-D scenes from virtual-world, 2-D images (Flickr photos, in this case), is how the metadata of “time” would be handled.
Let’s say there’s a Photosynth approximation of Big Ben. But four months from now, the clockface of Big Ben is destroyed by a meteorite, and the tower stands defaced for 18 months while Parliament allocates the funds to fix it.
In a case like that, how do you handle the metadata entries of time when reconstructing a public place by communal photographs? I assume that Photosynth looks for shared data — individual pixel sets that are shared in two disparate photos — and then builds out from there. No problem there. But what happens when an event fundamentally changes an object? From which moment in time do you draw your photos?
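That “shared data” intuition can be sketched in miniature. This is purely hypothetical — I have no idea how Photosynth actually represents features internally — but the idea of linking two disparate photos by their overlapping pixel sets looks something like a set intersection, where each photo is reduced to a bag of feature identifiers (in a real system these would come from a feature detector, not hand-labeled names):

```python
# Hypothetical sketch of the "shared data" idea: treat each photo as a set of
# feature IDs, and link any two photos whose feature sets overlap enough to
# anchor a match. All names and thresholds here are invented for illustration.

def link_photos(photos, min_shared=3):
    """photos: dict of filename -> set of feature IDs. Returns linked pairs."""
    names = sorted(photos)
    links = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = photos[a] & photos[b]
            if len(shared) >= min_shared:
                links.append((a, b, len(shared)))
    return links

snaps = {
    "tourist_1.jpg": {"clock", "tower_top", "dial", "scaffold"},
    "tourist_2.jpg": {"clock", "tower_top", "dial", "river"},
    "postcard.jpg":  {"river", "bridge", "bus"},
}
print(link_photos(snaps))
# [('tourist_1.jpg', 'tourist_2.jpg', 3)] — only the tourist shots overlap enough
```

Building out from those links pair by pair is what stitches the communal pile into one scene — which is exactly why a fundamental change to the object muddies things.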
Some photos will have been taken with the old clockface, some during the unrepaired period, and others after reconstruction. Will you end up with a Platonic Ideal of photos — a Big Ben as it is best remembered (and best viewed) from all angles? …an image of Big Ben that ignores any temporary damage, spray-paint protests, etc? Or will you have to create different image sets for different Big Bens in time?
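The “different Big Bens in time” option could, in principle, lean on the timestamp metadata: if you knew the dates of the change events, you could partition the communal photo pile into epochs and reconstruct each version separately. A minimal sketch, with entirely invented event dates for the meteorite thought experiment:

```python
from datetime import date

# Hypothetical sketch: partition communal photos into "versions" of a landmark
# using known change events, so each reconstruction draws from one epoch only.
# The event dates below are invented for the Big Ben thought experiment.

EVENTS = [
    (date(2007, 11, 1), "defaced"),   # hypothetical meteorite strike
    (date(2009, 5, 1), "restored"),   # hypothetical completion of repairs
]

def epoch_of(taken):
    """Return the epoch label for a photo taken on the given date."""
    label = "original"
    for when, name in EVENTS:
        if taken >= when:
            label = name
    return label

def partition(photos):
    """photos: list of (filename, date taken). Returns epoch -> filenames."""
    epochs = {}
    for name, taken in photos:
        epochs.setdefault(epoch_of(taken), []).append(name)
    return epochs

shots = [
    ("pre.jpg", date(2007, 6, 1)),
    ("damaged.jpg", date(2008, 1, 15)),
    ("fixed.jpg", date(2010, 3, 2)),
]
print(partition(shots))
# {'original': ['pre.jpg'], 'defaced': ['damaged.jpg'], 'restored': ['fixed.jpg']}
```

Of course, the hard part is the one this sketch assumes away: nobody hands you the event dates for every public place on Earth, and camera timestamps are notoriously unreliable.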