The Anatomy of a Search Engine
Abstract
Global search engines are an integral part of the World Wide Web. They
have made the rapid growth of the web possible by allowing users to
find web pages relevant to their interests in a sea of information.
However, to engineer a search engine is a challenging task. Search
engines index tens to hundreds of millions of web pages involving a
comparable number of distinct terms. They answer tens of millions of
queries every day. Apart from the problems of scaling traditional
search techniques to data of this magnitude, there are new technical
challenges posed by uncontrolled hypertext collections, and these
challenges are only beginning to be met.
Engineering a web search engine poses many technical challenges. Web
search engines differ from traditional search engines in that they
operate on hypertext: they must deal with crawling, and they can make
use of links for searching. Furthermore, due to rapid advances in
technology and the proliferation of the web, creating a web search
engine today is very different from creating one three years ago.
In this paper, we present Google, a prototype of a scalable search
engine. Google is designed to crawl and index the Web efficiently,
using limited disk storage. To address scalability in search, Google
makes use of hypertextual information found in links to produce higher
quality results.
1. Introduction
Since 1993, the World Wide Web has grown incredibly by almost any measure,
from 130 web servers in June 1993 to 650,000 web servers in January 1997
[?] serving 100 million web pages. By making all of this information
easy for users to locate, search engines have played a critical
role in allowing the Web to scale to its present size. In this
paper, we address the scalability of search engines in terms of
both performance and search quality.
1.1 Web Search Engines: 1994 - 2000
Search engine technology has had to scale dramatically to keep up with
the growth of the web. One of the first web search engines, the World
Wide Web Worm (WWWW) [?], had an index of 110,000 web pages and web
accessible documents in 1994. As of November 1997, the top search
engines claim to index from 3 million (WebCrawler) to 70 million web
documents (see Table ??). It is
foreseeable that by the year 2000, a comprehensive index of the WWW
will contain over a billion documents. At the same time, the number of
queries search engines handle has grown incredibly too. In March and
April 1994, the World Wide Web Worm received an average of about 1500
queries per day. In November 1997, Altavista claimed it handled
roughly 20 million queries per day. With the growing population of web
users and the increasing number of automated systems that query search
engines, it is likely that top search engines will handle hundreds of
millions of queries
per day by the year 2000. In this paper, we present the Google search
engine developed at Stanford. The goal of Google is to address many of
the problems introduced by scaling search engine technology to such
extraordinary numbers.
1.2 Scaling with the Web
Creating a search engine which scales to even today's web presents many
challenges. First, there are the performance considerations. It is necessary
to have fast crawling technology to gather the web documents and keep them
up to date. Storage space must be used efficiently to store indices and,
optionally, the documents themselves. The indexing system must process
hundreds of gigabytes of data efficiently. Queries must be handled quickly,
at a rate of hundreds to thousands per second.
All of these tasks are becoming increasingly difficult as the Web
grows. However, hardware performance and cost has also been improving
dramatically to partially offset the difficulty. Disk cost has fallen
to below $50 per gigabyte with transfer rates close to 10MB per
second. Memory prices are below $5 per megabyte and 300MHz CPUs are
available for little cost. There are, however, several notable
exceptions to this progress. Disk seek times have remained high enough
that roughly 100KB of data can be read sequentially in the time it
takes to perform a single seek. Another problem is that operating
system robustness still leaves much to be desired.
In designing Google, we have considered both the rate of growth of the
WWW and technological changes. Google is designed to be scalable to extremely
large data sets. It makes efficient use of storage space to store the index.
Google also stores all of the actual documents it crawls in compressed
form to allow it to highlight search matches, and support other research
such as data mining [dynamic data mining reference].
1.3 Quality of Search
Aside from performance, the growing number of web documents introduces
quality of search issues. In 1994, some people believed that a complete
search index would make it possible to find anything easily. According
to [?], ``The best navigation service should make it easy to find almost
anything on the Web (once all the data is entered).'' However, the WWW
of 1997 is quite different. Anyone who has used a search engine
recently can readily testify that the completeness of the index is not
the only factor in the quality of search results. ``Junk results''
often wash out any results that a user is interested in. In fact, as
of November 1997, only one of the top four commercial search engines
finds itself (returns its own search page in response to its name in
the top ten results). There
are no clear solutions to this problem. However, there is some optimism
in the use of more hypertextual information rather than relying on only
the text of documents. In particular, link structure [?] and link text
provide a lot of information for making relevance judgements and
quality filtering. Google makes use of both link structure and anchor text (see
Sections ?? and ??).
1.4 Academic vs. Commercial Search Engines
Aside from tremendous growth, the WWW has also become increasingly
commercial over time. In 1993, 1.5% of web servers were on .com
domains. This number grew to over 60% in 1997. At the same time,
search engines have migrated from the academic domain [?, ?] to the
commercial [?, ?, ?, ?]. Currently most search engine development has
gone on at companies with little publication of technical details.
This causes search engine technology to remain largely a black art and
to be advertising oriented (see Section ?). With Google, our goal is
to push more development and understanding into the academic realm.
2. System Features
The Google search engine has several features that help it produce better
quality results. First, it makes use of the link structure of the Web to calculate
a quality ranking for each web page. This ranking is called PageRank [?].
Second, Google treats link text specially for search purposes.
2.1 PageRank
The PageRank of a web page is a rough measure of the page's importance.
Roughly speaking, a page is important if there are many pages that point
to it or if the pages that point to it are important. In order to
calculate PageRank, the entire link structure of the Web (or the portion
that has been crawled) is used.
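To make this recursive definition concrete, here is a minimal Python
sketch of computing such a ranking by simple iteration. The damping
factor of 0.85, the uniform normalization, and the names used below are
illustrative assumptions, not a description of the production system.

    # Sketch only: a page's rank is spread evenly over its outgoing links,
    # mixed with a damping factor d, and the ranks are found by iterating
    # until they settle. Pages with no outgoing links simply leak rank here.
    def pagerank(links, d=0.85, iterations=30):
        """links: dict mapping each page to the list of pages it points to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - d) / len(pages) for p in pages}
            for page, targets in links.items():
                for target in targets:
                    new_rank[target] += d * rank[page] / len(targets)
            rank = new_rank
        return rank

    # C is pointed to by both A and B, so it ends up with the highest rank.
    print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))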
2.2 Anchor Text
The text of links is treated in a special way in our search engine. Normally,
the text of a link gets associated with the page that the link is on. Instead,
we associate it with both the page the link is on and the page the link
points to. This has several advantages. First, anchors often provide
more accurate descriptions of web pages than the pages themselves (see
Section ??). Second, anchors may exist for documents which cannot be
indexed by a text-based search engine, such as images, programs, and
databases. Third,
this makes it possible to return web pages which have not actually been
crawled.
This idea was implemented in the World Wide Web Worm [refXXXX] especially
for the last two reasons. We use it mostly for the first reason which is
that anchor text can help provide better quality results. Using anchor
text efficiently is technically difficult because of the large amounts
of data which must be moved around. In our current crawl of 26 million
pages, we had over 259 million anchors which we indexed.
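As a rough illustration of the mechanism (not the actual indexer code),
the sketch below credits each anchor word to the document the link
points to; the names and the plain whitespace tokenization are
assumptions made for this example.

    from collections import defaultdict

    # word -> docIDs of pages that the word's anchors point to
    anchor_index = defaultdict(list)

    def index_anchor(source_docid, target_docid, anchor_text):
        # The anchor text is associated with the target page, so that page
        # can be returned even if it has not been crawled or contains no text.
        for word in anchor_text.lower().split():
            anchor_index[word].append(target_docid)

    index_anchor(source_docid=17, target_docid=42, anchor_text="Stanford home page")
    print(anchor_index["stanford"])   # -> [42]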
2.3 Other Features
Aside from PageRank and the use of anchor text, Google has several
other features. First, it has location information for all hits
and so it makes extensive use of proximity in search. Second,
Google keeps track of font and size information of words. Words
in a larger or bolder font are weighted higher than other words.
Third, the full text of pages is available.
3 Related Work
Work on search in information retrieval systems goes back many years.
Search on the web has a much shorter history. The WWWW was one of the
first web search engines. It was subsequently followed by WebCrawler,
Lycos, Altavista, Inktomi, Infoseek, Excite, HotBot, and others. Of
these more recent search engines, little has been published; the main
exceptions are descriptions of the WWWW [?], Lycos [?], and WebCrawler
[?].
Compared to the growth of the Web and the importance of search
engines, there are precious few documents about these search engines.
According to Michael Mauldin (chief scientist, Lycos Inc), "the
various services (including Lycos) closely guard the details of these
databases".
However, there has been a fair amount of work on specific features of
search engines.
4 System Overview
A web search engine must perform several major functions: crawling, indexing,
and searching. In this section we describe at a high level how each of
these processes happens in Google. More detailed descriptions are in following
sections.
Crawling is the most fragile task since it involves interacting with
hundreds of thousands of web servers and various name servers which are
all beyond the control of the system. Polite robot policies mandate
that no server should be visited unreasonably often and that the robots
exclusion protocol [see backrub FAQ page for ref XXXXX] should be
heeded. Any errors
in crawling invariably lead to many angry emails from webmasters. In fact,
even polite behavior leads to some angry or confused emails from webmasters
(see Section [?]).
In Google, the crawling is done by several distributed crawlers. There
is a URLserver that sends lists of URLs to be fetched to the crawlers.
The web pages that are fetched are then sent to the storeserver. The storeserver
then compresses and stores the web pages into a repository.
Every web page has an associated ID number called a docID which is assigned
whenever a new URL is parsed out of a web page. The indexing function
is performed by the indexer and the sorter. The indexer performs a number
of functions. It reads the repository, uncompresses the documents, and
parses them. Each document is converted into a set of word occurrences
called hits. The hits record the word, its position in the document, an
approximation of font size, and capitalization. The indexer distributes
these hits into
a set of ``barrels''.
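The sketch below illustrates the idea of a hit and of routing hits to
barrels by wordID range. The actual hits are compact bit-packed
records; the record layout, the helper names, and the use of the
lexicon size to split the ranges are assumptions for illustration (the
barrel count of 64 appears later in this draft).

    from collections import namedtuple

    # An (uncompressed) stand-in for a hit; the real encoding is bit-packed.
    Hit = namedtuple("Hit", ["wordid", "position", "font_size", "capitalized"])

    NUM_BARRELS = 64            # the forward index below uses 64 barrels
    LEXICON_SIZE = 14_000_000   # approximate lexicon size from this draft

    def barrel_for(wordid):
        # Each barrel covers a contiguous range of the lexicon.
        return wordid * NUM_BARRELS // LEXICON_SIZE

    def hits_for_document(docid, parsed_words):
        """parsed_words: iterable of (wordid, position, font_size, capitalized)."""
        barrels = [[] for _ in range(NUM_BARRELS)]
        for wordid, position, font_size, capitalized in parsed_words:
            barrels[barrel_for(wordid)].append(
                Hit(wordid, position, font_size, capitalized))
        return barrels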
The indexer performs another important function. It parses out all the
links in every web page and stores important information about them in
an anchors file. This file contains enough information to determine where
each link points from and to and the text of the link.
The URLresolver reads the anchors file and converts relative URLs into
absolute URLs and in turn into docIDs. It puts the anchor text into
the forward index, associated with the docID that the anchor points
to. It also generates a database of links, which are pairs of docIDs.
The links database is used to compute PageRanks for all the documents.
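A rough sketch of what this step involves is shown below; the function
names and the shape of the anchors records are assumptions, and the
actual URLresolver works over on-disk files rather than in-memory
lists.

    from urllib.parse import urljoin

    def resolve_anchors(anchors, url_to_docid):
        """anchors: iterable of (from_docid, base_url, relative_url, anchor_text).
        url_to_docid: callable mapping an absolute URL to its docID."""
        links = []
        for from_docid, base_url, relative_url, anchor_text in anchors:
            absolute_url = urljoin(base_url, relative_url)  # relative -> absolute
            to_docid = url_to_docid(absolute_url)           # absolute URL -> docID
            links.append((from_docid, to_docid))            # entry for the links database
            # The anchor text itself would be placed in the forward index,
            # associated with to_docid (omitted in this sketch).
        return links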
The sorter takes the forward index, which is sorted by docID (this
is a simplification, see Section ??), and resorts it by wordID to generate
the inverted index. This is done in place so that little
temporary space is needed for this operation. The sorter also
produces a list of wordIDs and offsets into the inverted index. A
program called dumplexicon takes this list together with the lexicon
produced by the indexer and generates a new lexicon to be used by the
searcher.
The searcher is run by a web server and uses the lexicon built by
dumplexicon together with the inverted index and the PageRanks to
answer queries.
Major Data Structures
Google is composed of a number of important data structures. These
data structures are optimized so that a large document collection can be
crawled, indexed, and searched with little cost. They are designed
to scale well with data and make use of technological improvements.
An important consideration is that a disk seek still requires
about 10 ms to complete. Therefore, it would take roughly three days to
perform one seek for every document in our current 25 million page
collection. This is not completely unreasonable, and further speedup is
possible through
disk parallelism. However, seeks are a major bottleneck for system
performance and there is no clear evidence that seek performance will improve
as rapidly as other parameters. Therefore, Google is designed not to
perform a disk seek for every document.
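The three-day figure follows directly from the numbers above; a
one-line check:

    # One 10 ms seek per document over a 25 million page collection.
    seconds = 25_000_000 * 0.010      # 250,000 seconds
    print(seconds / 86_400)           # roughly 2.9 days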
BigFiles
Many of Google's data structures are large. They are often larger
than 4 gigabytes (the maximum file size addressable by a 32 bit integer)
and they must be spread among drives. We considered using a database
but chose to implement our own BigFiles data structure for portability,
efficiency, and fine-grained control. Currently, BigFiles are virtual
files spanning multiple file systems and are addressable by 64 bit
integers. The allocation among multiple file systems is handled
automatically. In the future, we hope to extend BigFiles to inherently
support distributed processing.
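The following is a hypothetical Python illustration of the idea only;
the real implementation, its chunk size, and its allocation policy are
not described here, so the 2 GB chunk size and the class shape below
are assumptions.

    CHUNK_SIZE = 2**31   # assumed size of each underlying file (2 GB)

    class BigFile:
        """A virtual file addressed by a 64-bit offset, backed by several
        ordinary files that may live on different file systems."""

        def __init__(self, paths):
            self.files = [open(path, "rb") for path in paths]

        def read(self, offset, length):
            index, local_offset = divmod(offset, CHUNK_SIZE)
            f = self.files[index]
            f.seek(local_offset)
            # Reads that cross a chunk boundary are omitted for brevity.
            return f.read(length)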
Repository
The repository contains the full HTML of every web page. The pages are
compressed using zlib [REF], a Lempel-Ziv compressor.
The choice of compression technique is a tradeoff between speed and
compression ratio. We chose zlib's speed over the small but
significant improvement in compression offered by bzip (see Table).
In the repository, the documents are stored one after the other and are
prefixed by docID, length, and URL. The repository requires no other
data structures in order to be accessed.
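A small Python sketch of this layout follows; the exact prefix encoding
(field order and widths) is an assumption for illustration, but the
overall structure (concatenated records, each prefixed and
zlib-compressed) matches the description above.

    import struct, zlib

    PREFIX = "<QHI"   # assumed prefix: docID, URL length, compressed length

    def append_document(repo, docid, url, html):
        compressed = zlib.compress(html.encode("utf-8"))
        url_bytes = url.encode("utf-8")
        repo.write(struct.pack(PREFIX, docid, len(url_bytes), len(compressed)))
        repo.write(url_bytes)
        repo.write(compressed)

    def read_documents(repo):
        """Scan the repository sequentially; no other data structure is needed."""
        prefix_size = struct.calcsize(PREFIX)
        while True:
            prefix = repo.read(prefix_size)
            if len(prefix) < prefix_size:
                return
            docid, url_length, compressed_length = struct.unpack(PREFIX, prefix)
            url = repo.read(url_length).decode("utf-8")
            html = zlib.decompress(repo.read(compressed_length)).decode("utf-8")
            yield docid, url, html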
Document Index
The document index keeps information about each document. It is a
fixed width ISAM index, ordered by docID. The information stored
in each entry includes the current document status (reference seen,
sent to URLserver, crawled, ...), a pointer into the repository, a
document checksum, and various statistics (number of words, number of
question marks, last crawl date, ...). If the document has been
crawled, the entry also contains a pointer into a variable width file
called docinfo, which contains its URL and title. Otherwise the pointer
points into the urllist, which contains just the URL.
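The fixed width of the entries is what makes single-seek lookups
possible. The sketch below shows the idea, assuming dense docIDs (entry
i belongs to docID i); the particular field layout is invented for
illustration.

    import struct

    # Assumed entry layout: docID, status, repository pointer, checksum, word count.
    ENTRY_FORMAT = "<IBQIH"
    ENTRY_SIZE = struct.calcsize(ENTRY_FORMAT)

    def read_entry(index_file, docid):
        # Fixed-width, docID-ordered entries: one seek finds any document.
        index_file.seek(docid * ENTRY_SIZE)
        return struct.unpack(ENTRY_FORMAT, index_file.read(ENTRY_SIZE))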
Additionally, there is a file which is used to convert URLs into docIDs.
It is a list of URL checksums with their corresponding docIDs and is sorted
by checksum. In order to find the docID of a particular URL, the
URL's checksum is computed and a binary search is performed on the
checksums file to find its docID. URLs may be converted into docIDs in
batch by doing a merge with this file. This is the technique the
URLresolver uses to turn URLs into docIDs. This batch mode of update is
crucial because otherwise we would have to perform one seek for every
link, which would take roughly one month for our 25 million page,
250XXXXXXXX million link dataset.
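The two lookup modes can be sketched as follows, with an in-memory
sorted list standing in for the checksum-sorted file and CRC32 standing
in for whatever checksum is actually used; both are assumptions made
for this example.

    import bisect, zlib

    def url_checksum(url):
        return zlib.crc32(url.encode("utf-8"))   # stand-in checksum

    def lookup_one(sorted_pairs, url):
        """Single lookup: binary search over (checksum, docID) pairs."""
        key = url_checksum(url)
        i = bisect.bisect_left(sorted_pairs, (key, 0))
        if i < len(sorted_pairs) and sorted_pairs[i][0] == key:
            return sorted_pairs[i][1]
        return None

    def lookup_batch(sorted_pairs, urls):
        """Batch lookup: sort the queries, then merge with the file in a
        single pass instead of performing one seek per link."""
        queries = sorted((url_checksum(url), url) for url in urls)
        docids, i = {}, 0
        for key, url in queries:
            while i < len(sorted_pairs) and sorted_pairs[i][0] < key:
                i += 1
            if i < len(sorted_pairs) and sorted_pairs[i][0] == key:
                docids[url] = sorted_pairs[i][1]
        return docids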
Lexicon
The lexicon has several different forms. One important change from
earlier systems [WAIS] is that the lexicon can fit in memory for a reasonable
price. In the current implementation we can keep the lexicon in memory
on a machine with only 256 MB of main memory. The current lexicon
contains 14 million words (though some very rare words were not added to
the lexicon). It is implemented in two parts -- a list of the words
(concatenated together but separated by nulls) and a hash table of pointers.
For various functions, the list of words has some auxiliary
information.
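The two-part layout can be illustrated with the short sketch below,
where an ordinary dictionary stands in for the hash table of pointers;
this illustrates the layout only, not the actual implementation.

    def build_lexicon(words):
        offsets, pieces, position = {}, [], 0
        for word in words:
            offsets[word] = position            # pointer into the word block
            pieces.append(word + "\0")          # words separated by nulls
            position += len(word) + 1
        return "".join(pieces), offsets

    block, offsets = build_lexicon(["web", "search", "engine"])
    start = offsets["search"]
    print(block[start:block.index("\0", start)])   # -> "search"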
Hit Lists
Forward Index
The forward index is actually already partially sorted. It is stored
in a number of barrels (we used 64). Each barrel holds a range
of the lexicon. If a document contains words that fall into a
particular barrel, the docID is recorded into the barrel, followed
by a list of wordIDs with hitlists which correspond to those
words. This scheme requires slightly more storage because of
duplicated docIDs, but the difference is very small for a
reasonable number of barrels, and there is considerable time saved
by the sorter.
Inverted Index
In order to generate the inverted index, the sorter takes each of
the forward barrels and sorts it by wordID to produce an inverted
barrel. This process happens one barrel at a time, thus requiring
little temporary storage.
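A minimal sketch of inverting one barrel is shown below, with
(docid, wordid, hit) tuples standing in for the packed on-disk records;
the real sorter works over the binary barrel files in place.

    from collections import defaultdict

    def invert_barrel(forward_records):
        """forward_records: (docid, wordid, hit) tuples, sorted by docID."""
        postings = defaultdict(list)             # wordID -> list of (docID, hit)
        for docid, wordid, hit in forward_records:
            postings[wordid].append((docid, hit))
        # Each posting list is already in docID order because the input was
        # scanned in docID order; emit the wordIDs in sorted order.
        return sorted(postings.items())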
Crawling the Web
Running a web crawler is a challenging task. There are tricky
performance and reliability issues and even more importantly, there
are social issues.
Efficient Crawling
In order to scale to hundreds of millions of web pages, Google has a
fast distributed crawling system. A single URLserver serves lists of
URLs to a number of crawlers (we typically ran about 3). Both the
URLserver and the crawlers are implemented in Python. Each crawler
keeps roughly 300 connections open at once. This is necessary to
retrieve web pages at a fast enough pace. At peak speeds, the system
can crawl 100 web pages per second using three crawlers. This amounts
to roughly 600KB per second of data which, after protocol overhead,
uses about 10Mbits out of Stanford's 45 Mbit connection to the
Internet. Another major performance stress is DNS lookup. Each
crawler maintains its own DNS cache.
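The sketch below is not the original crawler; it only illustrates the
two points above (many fetches in flight at once and a per-crawler DNS
cache) using the modern Python standard library, with invented names
and a thread pool standing in for the original design.

    import socket, urllib.request
    from concurrent.futures import ThreadPoolExecutor

    dns_cache = {}
    _original_getaddrinfo = socket.getaddrinfo

    def cached_getaddrinfo(host, port, *args, **kwargs):
        # Resolve each (host, port) only once per crawler.
        key = (host, port)
        if key not in dns_cache:
            dns_cache[key] = _original_getaddrinfo(host, port, *args, **kwargs)
        return dns_cache[key]

    socket.getaddrinfo = cached_getaddrinfo

    def fetch(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return url, response.read()
        except OSError as error:
            return url, error          # a crawler must tolerate failures

    def crawl(urls, connections=300):
        with ThreadPoolExecutor(max_workers=connections) as pool:
            return list(pool.map(fetch, urls))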
Crawler Reliability
The WWW is very heterogeneous, which is a delight to surfers who like
variety but quite a burden on a program which must handle anything.
In our crawls, we encountered infinite web pages, infinite URLs, many
varied kinds of communication errors, and anything else one might
imagine. As an amusing example, a number of hosts had their IP
address resolve to 127.0.0.1 - the local host. During early runs, we
were surprised how many web pages matched terms from our home page.
Social Issues
It turns out that running a crawler which connects to more than half a
million servers, and generates tens of millions of log entries generates
a fair amount of email and phone calls. Because of the vast number of people
coming on line, there are always those users who do not know what a crawler
is, because this is the first one they have seen. Almost daily, we receive
an email something like, "Wow, you looked at a lot of pages from my web
site. How did you like it?" There are also some people who do not know
about the robots exclusion protocol, and think their page should be protected
from indexing by a statement like, "This page is copyrighted and shouldn’t
be indexed", which needless to say is difficult for web crawlers to understand.
Also, because of the huge amount of data involved, unexpected things will
happen. For example, our system was trying to crawl an online game. This
resulted in lots of garbage messages in the middle of their game! It turns
out this was an easy problem to fix. But this problem had not come up until
we had downloaded tens of millions of pages. Because of the immense
variation in web pages and servers, it is virtually impossible to test
a crawler without running it on a large part of the Internet.
Invariably, there are
hundreds of obscure problems which may only occur on one page on the whole
web and cause the crawler to crash, or worse, unpredictable or incorrect
behavior.
Since such large numbers of people are looking at their web logs every
day, if only one out of ten thousand of them contacts us, we will be
drowning in email. As a result, systems which access large parts of the
Internet need to be designed to cause as few problems as possible.
Since large complex systems such as crawlers will invariably cause
problems, significant resources need to be devoted to reading the email
and dealing with these problems as they come up.
8 Indexing the Web
The Parser
Dumping Data into Barrels
Sorting
10 Searching
The goal of searching is to provide quality search results efficiently.
The Ranking Function
Google maintains much more information about web documents than typical
search engines. Every hitlist includes position, font, and
capitalization information. Additionally, we factor in hits from
anchor text and the PageRank of the document.
Combining all of this information into a rank is difficult.
We designed our ranking function so that no one factor can have too
much influence.
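The draft does not give the actual function, but the intent can be
illustrated with a hedged sketch: each signal is damped before being
combined, so neither a pile of keyword hits nor a high PageRank can
dominate on its own. The weights and the logarithmic damping below are
invented for illustration.

    import math

    # Invented type weights: title and anchor hits count for more than plain text.
    TYPE_WEIGHTS = {"title": 8.0, "anchor": 6.0, "large_font": 3.0, "plain": 1.0}

    def text_score(hit_types):
        """hit_types: the kinds of hits the query words produced in one document."""
        raw = sum(TYPE_WEIGHTS.get(kind, 1.0) for kind in hit_types)
        return math.log1p(raw)        # count-based weights taper off quickly

    def rank(hit_types, pagerank):
        # Both components are damped so no single factor has too much influence.
        return text_score(hit_types) * (1.0 + math.log1p(pagerank))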
Results, Stats
13 Conclusion
Scalability
text is getting cheaper and cheaper to index -- are we really going
toward distributed systems?
Mention WebBase.
Mention data mining thing…
Repository to allow flexibility
Devil is in the details…
Advertising and Mixed Motives
Currently, the predominant business model for commercial search engines
is advertising. The goals of the advertising business model do not always
correspond to providing quality search to users. For example, in our prototype
search engine, the top result for the query "cellular phone" is "The Effect of Cellular
Phone Use Upon Driver Attention", a study which explains in great detail
the distractions and risk associated with conversing on a cell phone while
driving. This search result came up first because of its high importance
as judged by the PageRank algorithm, an approximation of citation importance
on the web [Page, 98]. It is clear that a search engine which was taking
money for showing cellular phone ads would have difficulty justifying the
page that our system returned to its paying advertisers. For this type
of reason and historical experience with other media [Bagdikian 83], we
expect that advertising funded search engines will be inherently biased
towards the advertisers and away from the needs of the consumers. Since
it is very difficult even for experts to evaluate search engines, search
engine bias is particularly insidious. A good example was [verity’s search
engine], which was reported to be selling companies the right to be listed
at the top of the search results for particular queries. This type of bias
is much more insidious than advertising, because it is not clear who "deserves"
to be there, and who is willing to pay money to be listed. This business
model resulted in an uproar, and XXXXXXX has ceased to be a viable search
engine. But less blatant biases are likely to be tolerated by the market.
For example, a search engine could add a small factor to search results
from "friendly" companies, and subtract a factor from results from competitors.
This type of bias is very difficult to detect but could still have a significant
effect on the market. Furthermore, advertising income often provides an
incentive to provide poor quality search results. For example, we noticed
a major search engine would not return a large airline’s home page when
the airline’s name was given as a query. It so happened that the airline
had placed an expensive ad, linked to the query that was its name. A
better search engine would not have required this ad, possibly
resulting in the loss of the revenue from the airline. In general, it
could be argued
from the consumer point of view that the better the search engine is, the
fewer advertisements will be needed for the consumer to find what they
want. This of course erodes the advertising supported business model of
the existing search engines. However, there will always be money from advertisers
who want a customer to switch products, or have something that is genuinely
new. But we believe the issue of advertising causes enough mixed incentives
that it is crucial to have a competitive search engine that is transparent
and in the academic realm.
URLS
WWWW Paper
Lycos Description
Webcrawler Description
Rankdex
Web Growth
BOTW 1994 -
Navigators
@InProceedings{hyper96*180,
author = "Ron Weiss and Bienvenido V{\'e}lez and Mark A. Sheldon
and Chanathip Manprempre and Peter Szilagyi and Andrzej
Duda and David K. Gifford",
title = "{HyPursuit}: {A} Hierarchical Network Search Engine
that Exploits Content-Link Hypertext Clustering",
pages = "180--193",
ISBN = "0-89791-778-2",
booktitle = "Proceedings of the 7th {ACM} Conference on Hypertext",
month = "16--20~" # mar,
publisher = "ACM Press",
address = "New York",
year = "1996",
}