It was a crazy idea: Take the bulk of the world’s books, scan them, and create a monumental digital library for all to access.

That’s what Google dreamed of doing when it embarked on its ambitious book-digitizing project in 2002. It got part of the way there, digitizing at least 25 million books from major university libraries.

But the promised library of everything hasn’t come into being. An epic legal battle between authors and publishers and the internet giant over alleged copyright violations dragged on for years. A settlement that would have created a Book Rights Registry and made it possible to access the Google Books corpus through public-library terminals ultimately died, rejected by a federal judge in 2011. And though the same judge ultimately dismissed the case in 2013, handing Google a victory that allowed it to keep on scanning, the dream of easy and full access to all those works remains just that.

Earlier this year, an article in the Atlantic lamented the dismantling of what it called “the greatest humanistic project of our time.” The author, a programmer named James Somers, put it like this: “Somewhere at Google there is a database containing 25 million books and nobody is allowed to read them.”

That assessment may be technically true, but many librarians and scholars see the legacy of the project differently. In fact, academics now regularly tap into the reservoir of digitized material that Google helped create, using it as a dataset they can query, even if they can’t consume full texts. It’s a pillar of the humanities’ growing engagement with Big Data. It’s also a handy resource for other kinds of research.

“It’s hard to imagine going through a day doing the work we academics do without touching something that wouldn’t be there without Google Book Search,” says Paul Courant, now interim provost and executive vice president for academic affairs at the University of Michigan.

Courant was also interim provost at Michigan when Google first approached the university about scanning the contents of its library, a proposal that left him both “ecstatic and skeptical,” he says. “I’m not a fan of everything Google, by any means,” Courant says now. “But I think this was an amazing effort which has had lasting consequences, most of them positive.”

Google’s scanning project helped establish some important nodes in what’s become an ever-expanding web of networked research. As part of the deal, Google’s partner libraries made sure they got to keep digital copies of their scanned works for research and preservation use. That material helped stock a partnership called the HathiTrust Digital Library. Established in 2008 and based at the University of Michigan, it has grown to include 128 member institutions, according to its executive director, Mike Furlough. It now contains more than 15.7 million volumes. Taking into account multi-volume journals and duplicate copies, that’s about 8 million unique items, about 95 percent of them from Google’s scanning. The rest come from the Internet Archive’s ongoing scanning work and local digitization efforts, according to Furlough.

That rich resource has been put to several good uses. Through the HathiTrust Research Center, scholars can tap into the Google Books corpus and conduct computational analysis (looking for patterns in large amounts of text, for instance) without breaching copyright. And print-disabled users can use assistive technologies to read scanned books that might otherwise be difficult if not impossible to find in accessible formats.

Courant and others involved in the early days of the scanning work acknowledge both the benefits and the shortfalls. “That the universal bookstore-cum-library failed is, to me, a sadness,” he says.