The scheme is to create a single page on the web for every book that has ever been published: an enormous, searchable catalogue of information about millions of books. It is still in beta, but already more than 23m books are in its system, drawing information from 19 major libraries and linking to the text of more than 1m out-of-copyright titles.
That is admirable work for just a handful of staff at the library, an arm of the non-profit Internet Archive (which itself has the vast objective of trying to keep a historical record of the web for future generations). But with information about books already being processed by hugely popular websites such as Google and Amazon, the question remains – why bother?
George Oates, the newly installed project leader, says it’s a way to preserve book records for history and, crucially, make the information usable by anybody.
“It’s remarkably difficult to unify this information,” she says, when we meet at the Internet Archive building in San Francisco’s leafy Presidio park, a former military outpost that is, rather aptly, historically preserved. “As much as the libraries attempt to have similar standards and orders, there are always gotchas and nooks and crannies that have to be worked out.”
Beyond simply bringing together cold lists of books from isolated libraries, however, she believes OL can also breathe life into books by grabbing information from around the internet.
“Imagine books more as a networked object, rather than a single entity,” she suggests. “We start with this kernel and then we see what we can pile onto it … it’s a locus for all the information about a book that’s on the wider web.”