On Thursday, November 17th, 2005, there was a debate at the New York Public Library about Google Book Search (formerly known as Google Print). You can read more on that here.
One of the two debaters in favor of Google, Lawrence Lessig, has created a video outlining his argument and the opposition’s argument. You can see that video here (about 30 minutes).
After watching this video, it seems to me that there is a fundamental question that needs to be addressed: will Google Book Search be used as a substitute for the original works? My favorite quote, one that others have singled out as well, is this one from John Battelle’s search blog:
Mr. Adler (AAP lawyer) said Google’s contention that its search program might somehow increase sales of books was speculation at best.
“When people make inquiries using Google’s search engine and they come up with references to books, they are just as likely to come to this fine institution to look up those references as they are to buy them,” he said, referring to the Public Library.
To which Google’s Mr. Drummond replied, “Horrors.”
Now, I don’t know whether Mr. Drummond was dodging the question or what, but it’s a crucial one. If people use Google Book Search as a substitute for buying books that are under copyright, there is something fundamentally unfair about the Book Search program.
Google Book Search is a lot like Google’s other search services, like its Image Search and even its Web Search. Some have argued that Google is becoming just that: an answer engine that gives people that which they seek, as opposed to a search engine that tells people where to find that which they seek.
On January 9th, 2006, Jakob Nielsen blogged about the idea that
“people have begun using search engines as answer engines to directly access what they want — often without truly engaging with the websites that provide (and pay for) the services.”
That very same day, Danny Sullivan responded to Jakob Nielsen’s blog post, arguing that search engines, especially Google’s, drive large amounts of traffic to sites. Danny wrote:
If suddenly every site on the web were to block Google from indexing them, Google would have a crisis in short order. Its main “content” would have gone away, and the ads alone aren’t going to keep attracting searchers.
Web site owners have not done that, however. That’s because by and large, they’ve found that search engines drive more traffic to them than they cost in terms of bandwidth of being indexed.
WebmasterWorld has become a classic case study of this. Google and other search engines were banned in November along with “rogue” spiders, because somewhat similar to Jakob’s “leech” metaphor, they were seen to have been sucking down more bandwidth than it was worth supporting.
WebmasterWorld founder Brett Tabke was often quoted saying he had the best sleep in months after blocking the spiders. His sleep may have improved, but what to do about the major spiders didn’t go away. By the end of December, Brett had done a 180 degree turn and let the major spiders back in.
It’s interesting to watch this debate unfold and to see the struggle Google is going through. I look forward to reading a response from Jakob Nielsen.
What do you think?
UPDATE: Brian Dear responded to Lessig’s video here.