Affiliations:
1. Harding University, Searcy, AR
2. Microsoft Research, Silicon Valley
3. Old Dominion University
Introduction
The Web is in constant flux: new pages and Web sites appear daily, and old pages and sites disappear almost as quickly. One study estimates that about two percent of the Web disappears from its current location every week [2].
Although Web users have become accustomed to seeing the infamous "404 Not Found" page, they are more taken aback when they own, are responsible for, or have come to rely on the missing material.
Web archivists like those at the Internet Archive have responded to the Web's transience by archiving as much of it as possible, hoping to preserve snapshots of the Web for future generations [3].
Search engines have also responded by offering pages that have been cached as a result of the indexing process. These straightforward archiving and caching efforts have been used by the public in unintended ways: individuals and organizations have used them to restore their own lost Web sites [5].
To automate recovering lost Web sites, we created a Web-repository crawler named Warrick that restores lost resources from the holdings of four Web repositories: Internet Archive, Google, Live Search (now Bing), and Yahoo [6]; we refer to these Web repositories collectively as the Web Infrastructure (WI). We call this after-loss recovery Lazy Preservation (see the sidebar for more information). Warrick can only recover what is accessible to the WI, namely the crawlable Web. Numerous resources cannot be found in the WI: password-protected content, pages without incoming links or protected by the robots exclusion protocol, and content hidden behind Flash or JavaScript interfaces. Most importantly, WI crawlers do not have access to the server-side components (for example, scripts, configuration files, and databases) of a Web site.
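At its core, each recovery is a cache lookup: ask a repository whether it holds a copy of a given URL and, if so, retrieve that copy. As a minimal sketch of this pattern (not Warrick's own code), the Python fragment below queries the Internet Archive's public Wayback Machine availability API; the endpoint and JSON fields reflect that API as documented today, which postdates Warrick's 2005 release, and the 2005-era cache interfaces of the other three repositories no longer exist.

```python
# Minimal sketch: query the Internet Archive for an archived copy of a lost
# URL -- the simplest form of the Web-repository lookup Warrick automates.
# Assumes the Wayback Machine availability API as documented today; this is
# illustrative only and is not Warrick's own code.
import json
import urllib.parse
import urllib.request
from typing import Optional

WAYBACK_API = "https://archive.org/wayback/available"

def find_archived_copy(lost_url: str) -> Optional[str]:
    """Return the URL of the closest archived snapshot, or None if the
    resource was never crawled (i.e., it lies outside the crawlable Web)."""
    query = urllib.parse.urlencode({"url": lost_url})
    with urllib.request.urlopen(f"{WAYBACK_API}?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        # e.g. https://web.archive.org/web/<timestamp>/<url>
        return closest["url"]
    return None

if __name__ == "__main__":
    snapshot = find_archived_copy("http://example.com/")
    print(snapshot or "no archived copy found in the Internet Archive")
```

A full reconstruction repeats this lookup for every URL that a site's recovered pages link to, keeping the most complete copy any repository returns.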
Nevertheless, upon Warrick's public release in 2005, we received many inquiries about its usage and collected a handful of anecdotes about the Web sites individuals and organizations had lost and wanted to recover. Were these Web sites representative? What types of Web resources were people losing? Given the inherent limitations of the WI, were Warrick users recovering enough material to reconstruct the site? Were these losses changing their behavior, or was the availability of cached material reinforcing a "lazy" approach to preservation?
We constructed an online survey to explore these questions and conducted a set of in-depth interviews with survey respondents to clarify the results. Potential participants were solicited by us or the Internet Archive, or they found a link to the survey on the Warrick Web site. A total of 52 participants completed the survey regarding 55 lost Web sites, and seven of the participants allowed us to follow up with telephone or instant messaging interviews. Participants were divided into two groups:
1. Personal loss: Those who had lost (and tried to recover) a Web site that they had personally created, maintained, or owned (34 participants who lost 37 Web sites).
2. Third party: Those who had recovered someone else's lost Web site (18 participants who recovered 18 Web sites).
Publisher: Association for Computing Machinery (ACM)
References (11 articles)
1. Cox, L.P., Murray, C.D., and Noble, B.D. Pastiche: Making backup cheap and easy. SIGOPS Operating Systems Review 36, SI (2002), 285–298. DOI: 10.1145/844128.844155
2. A large-scale study of the evolution of Web pages.
3. Preserving the Internet.