Google: The Anatomy of a Large-Scale Web Search Engine

Abstract

Global search engines are an integral part of the World Wide Web. They have made the rapid growth of the web possible by allowing users to find web pages relevant to their interests in a sea of information.

However, engineering a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms, and they answer tens of millions of queries every day. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges posed by uncontrolled hypertext collections, and these are only beginning to be addressed.

Engineering a web search engine poses many technical challenges. Web search engines are very different from traditional search engines in that they operate on hypertext: they have to deal with crawling, and they can make use of links for searching. Furthermore, due to rapid advances in technology and the proliferation of the web, creating a web search engine today is very different from creating one three years ago; the amount of data, the number of users, and the hardware available have all changed dramatically.

In this paper, we present Google, a prototype of a scalable search engine. Google is designed to crawl and index the Web efficiently, using limited disk storage. To produce higher quality results at this scale, Google makes use of the hypertextual information found in links.

1. Introduction

Since 1993, the World Wide Web has grown incredibly by almost any measure, from 130 web servers in June 1993 to 650,000 web servers in January 1997 [?] serving 100 million web pages. By making all of this information easy for users to locate, search engines have played a critical role in allowing the Web to scale to its present size. In this paper, we address the scalability of search engines in terms of both performance and search quality.

1.1 Web Search Engines: 1994 - 2000

Search engine technology has had to scale dramatically to keep up with the growth of the web. One of the first web search engines, the World Wide Web Worm (WWWW) [McBryan], had an index of 110,000 web pages and web accessible documents in 1994. As of November, 1997, the top search engines claim to index from 2 million (WebCrawler) to 100 million web documents [www.searchenginewatch.com]. It is foreseeable that by the year 2000, a comprehensive index of the WWW will contain over a billion documents. At the same time, the number of queries search engines handle has grown enormously as well. In March and April 1994, the World Wide Web Worm received an average of about 1500 queries per day. In November 1997, Altavista claimed it handled roughly 20 million queries per day. With the increasing number of users on the web, and automated systems which query search engines, it is likely that top search engines will handle hundreds of millions of queries per day by the year 2000. In this paper, we present the Google search engine developed at Stanford. The goal of Google is to address many of the problems introduced by scaling search engine technology to such extraordinary numbers.

1.2. Scaling with the Web

Creating a search engine which scales even to today's web presents many challenges. First, there are the performance considerations. It is necessary to have fast crawling technology to gather the web documents and keep them up to date. Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds to thousands per second.

All of these tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have also been improving dramatically to partially offset the difficulty. Disk cost has fallen to below $50 per gigabyte with transfer rates close to 10MB per second. Memory prices are below $5 per megabyte and 300MHz CPUs are available for little cost. There are, however, several notable exceptions to this progress. Disk seek times have remained fairly high at about 10 ms because of the physical limitations of moving a disk head. To illustrate, it is now possible to read 100K of data in roughly the same time as it takes to perform one disk seek. Another problem is that operating system robustness still leaves much to be desired; operations on gigabytes of data often corrupt small portions of the data.

In designing Google, we have considered both the rate of growth of the WWW and technological changes. Google is designed to be scalable to extremely large data sets. It makes efficient use of storage space to store the index. Google also stores all of the actual documents it crawls in compressed form to allow it to highlight search matches, and to support varied uses and research such as data mining [dynamic data mining reference]. One of our main goals in designing Google was to set up an environment where other researchers can come in quickly, process large chunks of the web, and produce interesting results that would have been very difficult to obtain otherwise. Indeed, our PageRank algorithm, mentioned below, would have been very difficult to build and evaluate without access to large chunks of the link structure of the web.

1.3 Quality of Search

Aside from performance, the growing number of web documents introduces quality issues. In 1994, some people believed that a complete search index would make it possible to find anything easily. According to [?], ``The best navigation service should make it easy to find almost anything on the Web (once all the data is entered).'' However, the WWW of 1997 is quite different. Anyone who has used a search engine recently can readily testify that the completeness of the index is not the only factor in the quality of search results. ``Junk results'' often wash out any results that a user is interested in. In fact, as of November 1997, only one of the top four commercial search engines finds itself (returns its own search page in response to its name in the top ten results). There are no clear solutions to this problem. However, there is some optimism in the use of more hypertextual information rather than relying only on the text of documents [two papers from www 97]. In particular, link structure [?] and link text provide a lot of information for making relevance judgements and quality filtering. Google makes use of both link structure and anchor text (see Sections ?? and ??).

1.4 Academic vs. Commercial Search Engines

Aside from tremendous growth, the WWW has also become increasingly commercial over time. In 1993, 1.5% of web servers were on .com domains. This number grew to over 60% in 1997. At the same time, search engines have migrated from the academic domain to the commercial. Currently, most search engine development goes on at companies with little publication of technical details. This causes search engine technology to remain largely a black art and to be advertising oriented (see Section ?). With Google, our goal is to push more development and understanding into the academic realm.

2. System Features

The Google search engine has several features that help it produce better quality results. First, it makes use of the link structure of the Web to calculate a quality ranking for each web page. This ranking is called PageRank [?]. Second, Google treats link text specially for search purposes.

2.1 PageRank

The PageRank of a web page is a rough measure of the page's importance. Roughly speaking, a page is important if many pages point to it, or if the pages that point to it are themselves important. In order to calculate PageRank, the entire link structure of the Web (or the portion of it that has been crawled) is used.
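
To make the intuition concrete, the sketch below iterates a simple importance calculation of this kind over a toy link graph: each page shares its current rank equally among the pages it points to, plus a small constant. It is only an illustration; the damping factor of 0.85, the fixed iteration count, and the in-memory dictionaries are assumptions made for the example, not a description of the production system.

    # Minimal PageRank-style iteration over a small link graph (illustrative only).
    # links maps each page to the pages it points to; d is a damping factor
    # (0.85 here is an assumed value for the example).
    def pagerank(links, d=0.85, iterations=50):
        pages = set(links) | {t for targets in links.values() for t in targets}
        pr = {p: 1.0 for p in pages}
        for _ in range(iterations):
            new_pr = {}
            for p in pages:
                incoming = [q for q in links if p in links[q]]
                new_pr[p] = (1 - d) + d * sum(pr[q] / len(links[q]) for q in incoming)
            pr = new_pr
        return pr

    # Page "c" is pointed to by both "a" and "b", so it ends up most important.
    print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))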

2.2 Anchor Text

The text of links is treated in a special way in our search engine. Normally, the text of a link is associated with the page that the link is on. Instead, we associate it with both the page the link is on and the page the link points to. This has several advantages. First, anchors often provide more accurate descriptions of web pages than the pages themselves (see Section ??). Second, anchors may exist for documents which cannot be indexed by a text-based search engine, such as images, programs, and databases. Third, this makes it possible to return web pages which have not actually been crawled. Note that there are problems with this approach, since pages are never checked for validity before being returned to the user. Because of this, it is possible that the search engine will return a page that never actually existed but had hyperlinks pointing to it. However, because of the way that results are sorted, this problem happens rarely.
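
As a rough sketch of the bookkeeping involved (the dictionary layout and function names here are invented for the example, not taken from the actual system):

    # Illustrative sketch: anchor text is indexed both for the page that
    # contains the link and for the page the link points to.
    from collections import defaultdict

    anchor_index = defaultdict(list)    # docID -> list of anchor words

    def index_anchor(source_doc, target_doc, anchor_text):
        words = anchor_text.lower().split()
        anchor_index[source_doc].extend(words)   # the text appears on the source page
        anchor_index[target_doc].extend(words)   # and also describes the target page

    # A target that has never been crawled still becomes searchable
    # through the anchor text of links that point to it.
    index_anchor(source_doc=17, target_doc=42, anchor_text="Stanford home page")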

This idea of propagating anchor text to the page it refers to was implemented in the World Wide Web Worm [McBryan] especially because it helps search non-text information and expands the search coverage with fewer downloaded documents. We use it mostly because anchor text can help provide better quality results. Using anchor text efficiently is technically difficult because of the large amounts of data which must be processed. In our current crawl of 26 million pages, we had over 259 million anchors which we indexed.

2.3 Other Features

Aside from PageRank and the use of anchor text, Google has several other features. First, it has location information for all hits and so it makes extensive use of proximity in search [cite craig?]. Second, Google keeps track of some visual presentation details, such as the font size of words. Words in a larger or bolder font are weighted higher than other words. Third, the full raw HTML of pages is available in a repository.

3 Related Work

Work in information retrieval systems goes back many years and is well developed. However, most of the research on information retrieval systems is on small, well controlled, homogeneous collections, such as collections of scientific papers or news stories on a related topic. Indeed, the primary benchmark for information retrieval, TREC [cite], is a fairly small, well controlled collection. Things that work well on TREC often do not produce good results on the web. For example, the standard vector space model tries to return the document that most closely approximates the query, given that both query and document are vectors defined by their word occurrences. On the web, this strategy often returns very short documents that are the query plus a few words (many commercial search engines have this problem). For example, we have seen a major search engine return a page containing only "Bill Clinton Sucks" and a picture in response to a "Bill Clinton" query. Some information retrieval experts would argue that on the web, users should specify more accurately what they want and add more words to their query. We disagree vehemently with this position. If a user issues a query like "Bill Clinton" they should get reasonable results, since there is an enormous amount of high quality information available on this topic. Given examples like these, we believe that the standard information retrieval work needs to be extended to deal effectively with the web.

The web is a vast collection of completely uncontrolled, heterogeneous documents. Documents on the web have extreme variation internal to the documents, and also in the external meta information that might be available. For example, documents differ internally in their language (both human and programming), vocabulary (email addresses, links, zip codes, phone numbers, product numbers), type or format (text, HTML, PDF, images, sounds), and may even be machine generated (log files or output from a database). On the other hand, we define external meta information as information that can be inferred about a document but is not contained within it. Examples of external meta information include things like reputation of the source, update frequency, quality, popularity or usage, and citations. Not only are the possible sources of external meta information varied, but the things that are being measured vary by many orders of magnitude as well. For example, compare the usage information from a major home page, like Yahoo's, which currently receives XX million page views every day, with an obscure technical article which might receive one view per week; the two differ by eight orders of magnitude.

Another big difference between the web and traditional information retrieval systems is that there is virtually no control over what people can put on the web. Couple this flexibility to publish anything with the enormous influence search engines have in routing traffic, and "fooling" or spamming search engines becomes a serious problem. There is enormous economic incentive to mislead search engines. If you can convince a search engine that your page should be returned for a popular query, even if your page has nothing to do with that query, you can reap enormous economic benefit because a large number of people will see your page; it is like free advertising. Because of this economic feedback system, people are willing to spend huge amounts of time tailoring their pages for search engines so they come up high in the results for important search terms. If the search engine keeps its index up to date, the problem is worsened since people have more feedback to adjust their pages. Finally, it is hard to keep track of offending parties, since they can easily create a new identity on the net. This "spamming" or misleading of search engines is a problem that has not been addressed at all in traditional information retrieval systems. Also, it is interesting to note that metadata efforts have largely failed with web search engines, because any text on the page which is not directly represented to the user is abused to "spam" search engines.

Search research on the web has a short and concise history. The WWWW was one of the first web search engines. It was subsequently followed by WebCrawler, Lycos, Altavista, Infoseek, Excite, HotBot (Inktomi), and others. Little has been published about these more recent search engines; [Mauldin] and [Pinkerton] are among the exceptions. Compared to the growth of the Web and the importance of search engines, there are precious few documents about these search engines. According to Michael Mauldin (chief scientist, Lycos Inc) [Mauldin], "the various services (including Lycos) closely guard the details of these databases". However, there has been a fair amount of work on specific features of search engines. Especially well represented is work which gets results by post-processing the results of existing commercial search engines, or which produces small scale "individualized" search engines. However, we have found that the most promising applications lie in processing large amounts of the web, mining for different types of data than people traditionally look at. (NetEliza? Take advantage of precision more than recall)

4 System Overview

A web search engine must perform several major functions: crawling, indexing, and searching. In this section we describe at a high level how each of these processes happens in Google. More detailed descriptions are in following sections.

Crawling is the most fragile task since it involves interacting with hundreds of thousands of web servers and various name servers which are all beyond the control of the system. Polite robot policies mandate that no server should be visited unreasonably often and the robots exclusion protocol should be heeded. Any errors in crawling invariably lead to many angry emails from webmasters. In fact, even polite behavior leads to some angry or confused emails from webmasters (see Section [?]).

In Google, the crawling is done by several distributed crawlers. There is a URLserver that sends lists of URLs to be fetched to the crawlers. The web pages that are fetched are then sent to the storeserver. The storeserver then compresses and stores the web pages into a repository. Every web page has an associated ID number called a docID which is assigned whenever a new URL is parsed out of a web page.

The indexing function is performed by the indexer and the sorter. The indexer performs a number of functions. It reads the repository, uncompresses the documents, and parses them. Each document is converted into a set of word occurrences called hits. The hits record the word, position in document, an approximation of font size, and capitalization. The indexer distributes these hits into a set of ``barrels''.

The indexer performs another important function. It parses out all the links in every web page and stores important information about them in an anchors file. This file contains enough information to determine where each link points from and to and the text of the link.

The URLresolver reads the anchors file and converts relative URLs into absolute URLs and in turn into docIDs. It puts the anchor text into the forward index, associated with the docID that the anchor points to. It also generates a database of links, which are pairs of docIDs. The links database is used to compute PageRanks for all the documents.
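
A sketch of this step is below; the record format and helper names are assumptions made for the example, not taken from the actual system.

    # Sketch of the URLresolver: relative URLs from the anchors file are made
    # absolute and then mapped to docIDs, producing (from, to) link pairs for
    # the links database.
    from urllib.parse import urljoin

    def resolve_anchors(anchor_records, url_to_docid):
        links = []                                  # pairs of (source docID, target docID)
        for source_docid, base_url, relative_url, anchor_text in anchor_records:
            absolute_url = urljoin(base_url, relative_url)
            target_docid = url_to_docid(absolute_url)   # may assign a new docID
            links.append((source_docid, target_docid))
        return links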

The sorter takes the forward index, which is sorted by docID (this is a simplification, see Section ??), and resorts it by wordID to generate the inverted index. This is done in place so that little temporary space is needed for this operation. The sorter also produces a list of wordIDs and offsets into the inverted index. A program called dumplexicon takes this list together with the lexicon produced by the indexer and generates a new lexicon to be used by the searcher. The searcher is run by a web server and uses the lexicon built by dumplexicon together with the inverted index and the PageRanks to answer queries.

Major Data Structures


Google is composed of a number of important data structures. These data structures are optimized so that a large document collection can be crawled, indexed, and searched with little cost. They are designed to scale well with large amounts of data and to make use of technological changes that have happened in hardware. Although CPUs and bulk input/output rates have improved dramatically over the years, a disk seek still requires about 10 ms to complete. Therefore, it would take three days to perform a seek for every document in our current 25 million page collection. This is not completely unreasonable, and further speedup is possible through parallelizing disk seeks. Seeks are a major bottleneck, and it seems unlikely that seek performance will improve as rapidly as other parameters assuming traditional disk technology. Therefore, Google is designed to avoid disk seeks whenever possible, and does not even perform a disk seek for every document.
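
(For reference, the arithmetic is: 25 million documents at 10 ms per seek comes to 250,000 seconds, or roughly 2.9 days of seeking alone.)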

BigFiles

Many of Google's data structures are larger than 4 gigabytes (the maximum file size addressable by a 32 bit integer) and they must be spread among drives. We considered using a database but chose to implement our own BigFile data structure for portability, efficiency, and fine-grained control. Currently, BigFiles are virtual files spanning multiple file systems and are addressable by 64 bit integers. The allocation among multiple file systems is handled automatically. BigFiles also support rudimentary compression options. In the future, we hope to extend BigFiles to inherently support distributed processing by allowing multiple readers and writers.
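
The sketch below illustrates the basic idea of addressing several underlying files through a single 64 bit offset. The 2 GB chunk size, the naming scheme, and the class layout are assumptions for the example, not the actual implementation.

    # Illustrative BigFile sketch: one virtual file made of fixed-size chunks,
    # each an ordinary file, addressed by a single 64-bit offset.
    CHUNK_SIZE = 2**31                       # 2 GB per underlying file (assumed)

    class BigFile:
        def __init__(self, basename):
            self.basename = basename

        def _chunk_path(self, offset):
            return "%s.%04d" % (self.basename, offset // CHUNK_SIZE)

        def read(self, offset, length):
            # For simplicity this sketch ignores reads that span a chunk
            # boundary; a real implementation must handle them.
            with open(self._chunk_path(offset), "rb") as f:
                f.seek(offset % CHUNK_SIZE)
                return f.read(length)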

Repository

The repository contains the full HTML of every web page. The pages are compressed using zlib [see RFC 1950]. The choice of compression technique is a tradeoff between speed and compression ratio. We chose zlib's speed over a small but significant improvement in compression offered by bzip (see Table ?). In the repository, the documents are stored one after the other and are prefixed by docID, length, and URL. The repository requires no other data structures in order to be accessed. This helps with data consistency and development, since we can rebuild all the other data structures from only the repository and an error file which lists crawl errors.
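
The following sketch shows the flavor of the layout; the exact header encoding (field widths, byte order, length-prefixed URL) is an assumption for the example.

    # Sketch of the repository format: zlib-compressed documents stored one
    # after another, each prefixed by docID, URL length, and compressed length.
    import struct, zlib

    def append_document(repo, docid, url, html):
        # repo: file opened in binary append mode; html: raw page bytes
        compressed = zlib.compress(html)
        url_bytes = url.encode("utf-8")
        header = struct.pack("<IHI", docid, len(url_bytes), len(compressed))
        repo.write(header + url_bytes + compressed)

    def read_next_document(repo):
        header = repo.read(10)                     # 4 + 2 + 4 header bytes
        if not header:
            return None                            # end of repository
        docid, url_len, comp_len = struct.unpack("<IHI", header)
        url = repo.read(url_len).decode("utf-8")
        html = zlib.decompress(repo.read(comp_len))
        return docid, url, html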

Document Index

The document index keeps information about each document. It is a fixed width ISAM (Indexed Sequential Access Method) index, ordered by docID. The information stored in each entry includes the current document status (reference seen, sent to URLserver, crawled, ...), a pointer into the repository, a document checksum, and various statistics (number of words, number of question marks, last crawl date, ...). If the document has been crawled, it also contains a pointer into a variable width file called docinfo which contains its URL and title. Otherwise the pointer points into the URLlist, which contains just the URL. This design decision was driven by the desire to have a reasonably compact data structure, and the ability to fetch a record in one disk seek. The compactness is important since our list of 76.5 million known URLs, generated by crawling 26 million URLs, takes over 4GB of space even with no titles or empty space.
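
Fetching an entry then reduces to arithmetic and a single seek, as in the sketch below (the 64 byte record size is an assumption for the example).

    # Sketch of a fixed-width ISAM lookup: since entries are fixed size and
    # ordered by docID, a record is fetched with one seek and one read.
    RECORD_SIZE = 64                               # assumed entry size in bytes

    def fetch_docindex_entry(index_file, docid):
        index_file.seek(docid * RECORD_SIZE)       # no search needed
        return index_file.read(RECORD_SIZE)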

Additionally, there is a file which is used to convert URLs into docIDs. It is a list of URL checksums with their corresponding docIDs and is sorted by checksum. In order to find the docID of a particular URL, the URL's checksum is computed and a binary search is performed on the checksums file to find its docID. URLs may be converted into docIDs in batch by doing a merge with this file. This is the technique the URLresolver uses to turn URLs into docIDs. This batch mode of update is crucial because otherwise we would have to perform one seek for every link, which would take more than one month for our 25 million page, 322 million link dataset.
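
The sketch below illustrates both modes of lookup; the checksum function and the in-memory lists standing in for the on-disk file are simplifications for the example.

    # Single lookups binary search the checksum-sorted data; batch conversion
    # sorts the query checksums and merges them against it in one sequential
    # pass, avoiding a random seek per link.
    import bisect, zlib

    def url_checksum(url):
        return zlib.crc32(url.encode("utf-8"))

    def lookup_docid(sorted_checksums, docids, url):
        c = url_checksum(url)
        i = bisect.bisect_left(sorted_checksums, c)
        if i < len(sorted_checksums) and sorted_checksums[i] == c:
            return docids[i]
        return None

    def batch_lookup(sorted_checksums, docids, urls):
        # Sorting the queries lets both sequences be scanned front to back.
        results = {}
        i = 0
        for c, u in sorted((url_checksum(u), u) for u in urls):
            while i < len(sorted_checksums) and sorted_checksums[i] < c:
                i += 1
            if i < len(sorted_checksums) and sorted_checksums[i] == c:
                results[u] = docids[i]
        return results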

Lexicon

The lexicon has several different forms. One important change from earlier systems is that the lexicon can fit in memory for a reasonable price. In the current implementation we can keep the lexicon in memory on a machine with 256 MB of main memory. The current lexicon contains 14 million words (though some rare words were not added to the lexicon). It is implemented in two parts -- a list of the words (concatenated together but separated by nulls) and a hash table of pointers. For various functions, the list of words has some auxiliary information which is beyond the scope of this paper to explain fully.
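
The two-part structure can be sketched as follows; the tiny vocabulary and the Python dict standing in for the hash table of pointers are simplifications for the example.

    # Sketch of the lexicon: words concatenated with null separators, plus a
    # hash table mapping each word to its offset in that string.
    words = ["google", "search", "stanford"]       # toy vocabulary
    word_list = "\0".join(words) + "\0"            # null-separated word string

    offsets = {}
    pos = 0
    for w in words:
        offsets[w] = pos                           # pointer into word_list
        pos += len(w) + 1                          # skip the word and its null

    def lookup(word):
        start = offsets.get(word)
        if start is None:
            return None
        end = word_list.index("\0", start)
        return word_list[start:end]                # the stored word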

Hit Lists
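
As described in the system overview, a hit records a single word occurrence in a document: its position, an approximation of its font size, and its capitalization. Because many hits are stored for every document, the encoding must be compact. The bit layout below is only an illustration of such a packed encoding; the actual field widths used by the system are not specified here.

    # Illustrative packed hit: 1 bit of capitalization, 3 bits of font size,
    # and 12 bits of word position in a 16-bit value (an assumed layout).
    def pack_hit(capitalized, font_size, position):
        return ((1 if capitalized else 0) << 15) | ((font_size & 0x7) << 12) | (position & 0xFFF)

    def unpack_hit(hit):
        return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF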

Forward Index

The forward index is actually already partially sorted. It is stored in a number of barrels (we used 64). Each barrel holds a range of the lexicon. If a document contains words that fall into a particular barrel, the docID is recorded into the barrel, followed by a list of wordIDs with hitlists which correspond to those words. This scheme requires slightly more storage because of duplicated docIDs, but the difference is very small for a reasonable number of buckets, and it saves considerable time and coding complexity in the final phase done by the sorter.

Inverted Index

In order to generate the inverted index, the sorter takes each of the forward barrels and sorts it by wordID to produce an inverted barrel. This process happens one barrel at a time, thus requiring little temporary storage. Also, we parallelize the sorting phase to use as many machines as we have simply by running multiple sorters, which can process different buckets at the same time. It took roughly a day of wall clock time to sort our 26 million page index on three machines with 256MB of RAM each (a significant amount of this time was spent doing slow file IO over NFS).
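
The inversion of a single barrel can be sketched as follows; the in-memory lists stand in for the on-disk barrels.

    # Sketch of inverting one forward barrel: records keyed by docID are
    # regrouped by wordID, yielding for each word the documents and hits
    # that contain it.
    from collections import defaultdict

    def invert_barrel(forward_records):
        # forward_records: iterable of (docID, wordID, hitlist) tuples
        inverted = defaultdict(list)
        for docid, wordid, hits in sorted(forward_records, key=lambda r: r[1]):
            inverted[wordid].append((docid, hits))
        return inverted                            # wordID -> [(docID, hits), ...]

Because each barrel is inverted independently, barrels can be handed to different machines and processed in parallel, which is how the three-machine sort described above proceeds.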

Crawling the Web

Running a web crawler is a challenging task. There are tricky performance and reliability issues and even more importantly, there are social issues.

Efficient Crawling

In order to scale to hundreds of millions of web pages, Google has a fast distributed crawling system. A single URLserver serves lists of URLs to a number of crawlers (we typically ran about 3). Both the URLserver and the crawlers are implemented in Python. Each crawler keeps roughly 300 connections open at once. This is necessary to retrieve web pages at a fast enough pace. At peak speeds, the system can crawl over 100 web pages per second using four crawlers. This amounts to roughly 600K of data per second. Another major performance stress is DNS lookup, so each crawler maintains its own DNS cache.

Keeping hundreds of connections open at once creates significant IO management challenges: each crawler uses asynchronous IO to manage its connections and a number of queues to move page fetches from state to state.
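
As a rough illustration of this style of IO (the production crawlers are structured differently; the request format, timeouts, and lack of error handling here are simplifications), a select-based event loop can keep hundreds of fetches in flight from a single process while caching DNS lookups:

    # Sketch of select-style asynchronous fetching with a DNS cache. Robots
    # exclusion checks, error handling, and queueing of further URLs are
    # omitted; this is an illustration, not the production crawler.
    import selectors, socket

    dns_cache = {}                                 # hostname -> IP address

    def resolve(host):
        if host not in dns_cache:
            dns_cache[host] = socket.gethostbyname(host)
        return dns_cache[host]

    def crawl(urls, max_connections=300):
        # urls: list of (host, path) pairs; returns {(host, path): raw bytes}
        sel = selectors.DefaultSelector()
        pages = {}
        for host, path in urls[:max_connections]:
            s = socket.socket()
            s.setblocking(False)
            s.connect_ex((resolve(host), 80))      # non-blocking connect
            req = ("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)).encode()
            sel.register(s, selectors.EVENT_WRITE, [(host, path), req, b""])
        while sel.get_map():
            events = sel.select(timeout=10)
            if not events:
                break                              # give up on stalled connections
            for key, _ in events:
                s, state = key.fileobj, key.data
                if state[1]:                       # connected: send the request
                    s.send(state[1])
                    state[1] = b""
                    sel.modify(s, selectors.EVENT_READ, state)
                else:                              # response data is available
                    chunk = s.recv(4096)
                    if chunk:
                        state[2] += chunk
                    else:                          # server closed: page complete
                        pages[state[0]] = state[2]
                        sel.unregister(s)
                        s.close()
        return pages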

Crawler Reliability

The WWW is very heterogeneous, which is a delight to surfers who like variety but quite a burden on a program which must handle anything. In our crawls, we encountered infinite web pages, infinite URLs, many varied kinds of communication errors, and anything else one might imagine. As an amusing example, a number of hosts had their IP address resolve to 127.0.0.1 - the local host. As a result, during early runs, we were surprised how many web pages matched terms from our own home page.

Social Issues

It turns out that running a crawler which connects to more than half a million servers and generates tens of millions of log entries produces a fair amount of email and phone calls. Because of the vast number of people coming on line, there are always those users who do not know what a crawler is, because this is the first one they have seen. Almost daily, we receive an email something like, "Wow, you looked at a lot of pages from my web site. How did you like it?" There are also some people who do not know about the robots exclusion protocol and think their page should be protected from indexing by a statement like, "This page is copyrighted and should not be indexed", which, needless to say, is difficult for web crawlers to understand. Also, because of the huge amount of data involved, unexpected things will happen. For example, our system was trying to crawl an online game. This resulted in lots of garbage messages in the middle of their game! It turns out this was an easy problem to fix. But this problem had not come up until we had downloaded tens of millions of pages. Because of the immense variation in web pages and servers, it is virtually impossible to test a crawler without running it on a large part of the Internet. Invariably, there are hundreds of obscure problems which may only occur on one page in the whole web and cause the crawler to crash, or worse, cause unpredictable or incorrect behavior.
 
Since such large numbers of people are looking at their web logs every day, if only one out of ten thousand of them contacts us we will be drowning in email. As a result, systems which access large parts of the Internet need to be designed to be very robust and carefully tested. Since large complex systems such as crawlers will invariably cause problems, there need to be significant resources devoted to reading the email and dealing with these problems as they come up.

8 Indexing the Web

The Parser

Dumping Data into Barrels

Sorting

10 Searching

The goal of searching is to provide quality search results efficiently.

The Ranking Function

Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no one factor can have too much influence.
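
The sketch below shows the shape of such a combination; every weight and the damping function in it are assumptions invented for the example, not the values Google uses.

    # Illustrative ranking sketch: each hit contributes a type-dependent
    # weight, the sum is damped so that a flood of hits of one kind cannot
    # dominate, and the result is combined with PageRank.
    import math

    TYPE_WEIGHTS = {"title": 8.0, "anchor": 6.0, "large_font": 3.0, "plain": 1.0}

    def score(hit_types, pagerank):
        # hit_types: list of hit types found for the query word in a document
        ir_score = sum(TYPE_WEIGHTS.get(h, 1.0) for h in hit_types)
        damped = math.log(1.0 + ir_score)          # diminishing returns
        return damped * pagerank

    print(score(["title", "anchor", "plain", "plain"], pagerank=2.5))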

Advertising and Mixed Motives

Currently, the predominant business model for commercial search engines is advertising. The goals of the advertising business model do not always correspond to providing quality search to users. For example, in our prototype search engine the top result for the query "cellular phone" is "The Effect of Cellular Phone Use Upon Driver Attention", a study which explains in great detail the distractions and risk associated with conversing on a cell phone while driving. This search result came up first because of its high importance as judged by the PageRank algorithm, an approximation of citation importance on the web [Page, 98]. It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason, and because of historical experience with other media [Bagdikian 83], we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Since it is very difficult even for experts to evaluate search engines, search engine bias is particularly insidious. A good example was OpenText, which was reported to be selling companies the right to be listed at the top of the search results for particular queries. This type of bias is much more insidious than advertising, because it is not clear who "deserves" to be there, and who is willing to pay money to be listed. This business model resulted in an uproar, and OpenText has ceased to be a viable search engine. But less blatant biases are likely to be tolerated by the market. For example, a search engine could add a small factor to search results from "friendly" companies, and subtract a factor from results from competitors. This type of bias is very difficult to detect but could still have a significant effect on the market.

Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline's home page when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, but it could also have resulted in the loss of the revenue from the airline. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or who have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

Results

SHOW SOME SAMPLE RESULTS

Stats

SOME TABLES OF SIZES (Or from data structures section)

13 Conclusions

 

Scalability

Text is getting cheaper and cheaper to index -- are we really going toward distributed systems? Mention WebBase.

Payload model
 
Mention data mining thing…
 
Repository to allow flexibility
 
Devil is in the details…
 

Mention caching strategies instead of crawling

Distribute crawlers to multiple locations -- even have a Java version.

URLS

two papers from 97 web conference

[McBryan] Oliver A. McBryan. GENVL and WWWW: Tools for Taming the Web. First International Conference on the World Wide Web (CERN, Geneva), 1994.

[WebCrawler description]

[Rankdex]

[Web growth statistics]

[Best of the Web 1994 -- Navigators]

[Mauldin] Michael L. Mauldin. Lycos: Design Choices in an Internet Search Service. IEEE Expert, 1997.

[Pinkerton] Brian Pinkerton. Finding What People Want: Experiences with the WebCrawler. Second International WWW Conference, 1994.

[mg] Ian H. Witten, Alistair Moffat, and Timothy C. Bell. Managing Gigabytes: Compressing and Indexing Documents and Images. Van Nostrand Reinhold, 1994.

[RFC 1950] P. Deutsch and J-L. Gailly. ZLIB Compressed Data Format Specification version 3.3. RFC 1950, May 1996.

[Weiss 96] Ron Weiss, Bienvenido Vélez, Mark A. Sheldon, Chanathip Manprempre, Peter Szilagyi, Andrzej Duda, and David K. Gifford. HyPursuit: A Hierarchical Network Search Engine that Exploits Content-Link Hypertext Clustering. Proceedings of the 7th ACM Conference on Hypertext, New York, March 1996, pages 180-193.
Papers from the First International World-Wide Web Conference (www1.cern.ch/WWW94/PerlimProcs.html):

  • Oliver A. McBryan. GENVL and WWWW: Tools for Taming the Web. University of Colorado, Boulder, CO, US.

  • Martijn Koster. Aliweb - Archie-Like Indexing in the Web. NEXOR Ltd., UK.

  • Neil Smith. What can Archives offer the World Wide Web. Unix Hensa, The University of Kent at Canterbury, UK.

  • David Eichmann. The RBSE Spider - Balancing Effective Search Against Web Load. RICIS, University of Houston, Clear Lake, US.

  • Christian Neuss and Stefanie Höfling. Lost in Hyperspace? Free Text Searches in the Web. Fraunhofer Institute for Computer Graphics, Darmstadt, DE.

  • Doug McKee. Towards Better Integration of Dynamic Search Technology and the World-Wide Web. Navisoft, US.

  • R.D.J. Post and P.M.E. De Bra. Information Retrieval in the World-Wide Web: Making Client-based searching feasible. Eindhoven University of Technology, NL.