While freebase is still VERY alpha, with much of the basic functionality barely working, the idea is HUGE. In many ways, freebase is the bridge between the bottom up vision of Web 2.0 collective intelligence and the more structured world of the semantic web.
Strong words, coming from Tim O’Reilly.
Freebase is Metaweb’s first project. While it’s still in alpha and sign-up isn’t open to the wider public yet, Tim’s story gives me the impression that Freebase is, in Douglas Adams’ words, a pretty neat idea. I’ll just quote some of Tim’s points that I found most remarkable, but in his article he also gives some hands-on examples of how it works and quite a few screenshots that give you a much better idea.
Metaweb has slurped in the contents of several of the web’s freely accessible databases, including much of wikipedia, and song tracks from musicbrainz. It then turns its users loose on not just adding more data items but making connections between them by filling out meta tags that categorize or otherwise connect the data items, using a typology that can be extended by users, wiki-style. (…)

Much as in wikipedia, any entry listing an entity not already known creates a new page. But these entries are structured — if I add a person, they are of type person. If I add a location, they are of type location. If I add a company, they are of type company. And each of these things comes with certain relationships, and that allows other entries to be automatically updated.

When Danny first showed me this application a couple of months ago, there was a really powerful example of the way this ought to work. We were looking at the tracks for Brian Eno’s album Bell Studies for the Clock of the Long Now. Danny noted that he was actually the composer of one of the tracks, 1st-14th January 07003, Hard Bells, Hillis Algorithm. Robert Cook, who was doing the demo, immediately added Danny as the composer. Then, when we went to Danny Hillis’ page, we saw that he was now listed as a composer as well as a computer scientist. (…)

(…) hopefully, this narrative will give you a sense of what Metaweb is reaching for: a wikipedia like system for building the semantic web. But unlike the W3C approach to the semantic web, which starts with controlled ontologies, Metaweb adopts a folksonomy approach, in which people can add new categories (much like tags), in a messy sprawl of potentially overlapping assertions. Now, the really powerful thing about this is that all these categories, these data types and the web of fields that define them, provide new hooks for applications that will be able to extract meaning from the data. That’s what makes Metaweb a kind of semantic web application.
If Metaweb gets this right, this bottom up approach will build new connections between data, new categories and ways of thinking. It will likely be messy and contradictory for a while, but as I told John Markoff for the story on Metaweb that he was preparing for the New York Times tonight, they are building new synapses for the global brain.
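The Eno/Hillis anecdote above boils down to a simple data-structure idea: entities carry types, and relations are stored in both directions, so one edit updates both pages. Here is a minimal sketch of that idea in Python — the `Entity` class, the relation names, and the type update are all hypothetical illustrations, not Metaweb’s actual schema or API.

```python
# Hypothetical sketch of a typed-entity graph: one edit ("Danny composed this
# track") is recorded symmetrically, so it shows up on both entities' pages.

class Entity:
    def __init__(self, name, types):
        self.name = name
        self.types = set(types)      # e.g. {"person", "computer scientist"}
        self.relations = {}          # relation name -> set of related Entity

    def add_relation(self, relation, other, inverse):
        # Store the fact in both directions so each "page" stays up to date.
        self.relations.setdefault(relation, set()).add(other)
        other.relations.setdefault(inverse, set()).add(self)


track = Entity("1st-14th January 07003, Hard Bells, Hillis Algorithm", ["track"])
danny = Entity("Danny Hillis", ["person", "computer scientist"])

# One edit on the track's page...
track.add_relation("composed_by", danny, inverse="composed")
# ...and, as in the demo, Danny's page can now also list him as a composer.
danny.types.add("composer")

print(sorted(e.name for e in danny.relations["composed"]))
```

The point is not the code but the consequence Tim draws: because the fields are typed, applications can traverse them mechanically instead of parsing free text.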
Now if this really takes off, and judging by Tim’s post it very well might, this really is a next generation kind of thing. Tim’s been pushing for metadata and contextual data for a long time: Machines, in his view, should be able to read information and put it in context, which is only possible with metadata. Then you can do incredibly powerful things: Think of time stamps and geo tags for photos as the simplest application. Sensors providing all kinds of metadata would be the next step. The more metadata we add, the more accurately and efficiently we can put information in context, the better we can harness collective intelligence. Partly, this will be done by machines, partly by humans: A symbiotic human-machine interaction, as he once told me in an interview.
Anyway, I’m very excited to check it out.
Or in the words of Dave McClure, who commented on the article: “methinks the googlebase folks should be a little worried tomorrow morning…”