Making Sense of the World, One Photo at a Time
Mor Naaman, Yahoo! Research Berkeley

The availability of map interfaces and location-aware devices has made a growing amount of unstructured, geo-referenced information available on the Web. In aggregate, this type of information can help reveal trends and patterns in the data. In particular, over ten million geo-referenced photos are now available on the photo-sharing website Flickr. These photos are often annotated with user-entered, unstructured text labels (i.e., tags), making this the first major collection of its kind. In this talk, I will discuss two approaches for extracting information (knowledge, if you will) from this metadata-rich yet unstructured set of photos. The first approach is location-driven: it uses the dataset to extract representative tags for each map region and zoom level. An alternative approach takes a tag-centric view and attempts to extract place/event semantics for each tag by analyzing the tag's usage patterns in location and time. Finally, I will describe and demonstrate two prototype applications that make use of the extracted knowledge, ZoneTag and TagMaps, both available from the Y!RB web site at http://whyrb.com.
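
As a rough illustration of the location-driven idea only (not the method presented in the talk), the sketch below scores tags per map grid cell using a simple TF-IDF-style weight: a tag is "representative" of a cell if it is frequent there but rare across other cells. The grid size, photo record format, and scoring function are assumptions made for illustration.

```python
# Hypothetical sketch: rank "representative" tags per map grid cell with a
# TF-IDF-style score. Grid size, record format, and scoring are illustrative
# assumptions, not the actual technique described in the talk.
from collections import Counter, defaultdict
import math

def cell_of(lat, lon, cell_deg=0.1):
    """Map a (lat, lon) pair to a coarse grid cell; cell_deg stands in for zoom level."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def representative_tags(photos, cell_deg=0.1, top_k=5):
    """photos: iterable of (lat, lon, tags) tuples, where tags is a list of strings."""
    tag_counts = defaultdict(Counter)   # per-cell tag frequencies
    cells_with_tag = Counter()          # number of cells in which each tag appears

    for lat, lon, tags in photos:
        cell = cell_of(lat, lon, cell_deg)
        tag_counts[cell].update(set(tags))   # count each tag once per photo

    for counts in tag_counts.values():
        for tag in counts:
            cells_with_tag[tag] += 1

    n_cells = len(tag_counts)
    result = {}
    for cell, counts in tag_counts.items():
        # tf-idf over cells: frequent in this cell, rare elsewhere
        scored = {tag: tf * math.log(n_cells / cells_with_tag[tag])
                  for tag, tf in counts.items()}
        result[cell] = sorted(scored, key=scored.get, reverse=True)[:top_k]
    return result

# Example usage with toy data:
photos = [
    (37.8199, -122.4783, ["goldengatebridge", "fog", "sanfrancisco"]),
    (37.8024, -122.4058, ["coittower", "sanfrancisco"]),
    (37.8267, -122.4230, ["alcatraz", "sanfrancisco", "fog"]),
]
print(representative_tags(photos))
```

Varying cell_deg would mimic zoom levels: coarser cells surface city-level tags, finer cells surface landmark-level tags.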