The Anatomy of a Search Engine
Abstract
<THIS ABSTRACT IS OUTDATED. IT WILL BE REWRITTEN ONCE THE PAPER IS MORE STABLE
Web search engines are an integral part of the World Wide Web. They have
made the rapid growth of the web possible by allowing users to find web
pages relevant to their interests in a sea of information.
Engineering a web search engine has many technical challenges. Web
search engines are very different from traditional search engines in that
they operate on hypertext. Therefore, they have to deal with crawling and
can make use of links for searching. Furthermore, due to rapid advances
in technology and the proliferation of the web, creating a web search engine
today is very different from creating one three years ago.
However, most recent search engine development has happened behind company
doors and remains something of a black art. Engineering a search engine is
a challenging task. Search engines index tens to hundreds of millions of
web pages involving a comparable number of distinct terms, and they answer
tens of millions of queries every day. Apart from the problems of scaling
traditional search techniques to data of this magnitude, there are new
technical challenges arising from the demands of uncontrolled hypertext
collections, and these challenges are only starting to be met.
<mention hypertext vs. text; mention disk performance, memory
1. Introduction
Since 1993, the World Wide Web has grown dramatically by almost any measure,
from 130 web servers in June 1993 to 650,000 web servers in January 1997
[?] serving 100 million web pages. Search engines have played a critical
role in making all of this information accessible.
<expand
1.1 Web Search Engines: 1994 - 2000
Search engine technology has had to scale dramatically to keep up with
the growth of the web. One of the first web search engines, the World Wide
Web Worm (WWWW) [?] had an index of 110,000 web pages and web accessible
documents in 1994.
<include intermediate numbers here
As of November 1997, the top search engines claim to index from 3 million
(WebCrawler) to 70 million web documents (see Table ??). It is foreseeable
that by the year 2000, a comprehensive index of the WWW will contain over
a billion documents. At the same time, the number of queries search engines
handle has grown enormously. In March and April 1994, the World Wide Web
Worm received an average of about 1500 queries per day. In November 1997,
AltaVista claimed it handled roughly 20 million queries per day. With the
growing population of web users and the increasing number of automated
user agents that query search engines, it
is likely that top search engines will handle hundreds of millions of queries
per day by the year 2000. In this paper, we present the Google search engine
developed at Stanford. The goal of Google is to address many of the problems
introduced by scaling search engine technology to such numbers.
1.2 Scaling with the Web
Creating a search engine which scales to even today's web presents many
challenges. First, there are the performance considerations. It is necessary
to have fast crawling technology to gather the web documents and keep them
up to date. Storage space must be used efficiently to store indices and,
optionally, the documents themselves. The indexing system must process
hundreds of gigabytes of data efficiently. Queries must be handled quickly,
at a rate of hundreds to thousands per second.
<maybe we should mention this in future work instead
All of these tasks are becoming increasingly difficult as the Web grows.
However, hardware performance and cost have also been improving dramatically
to partially offset the difficulty. There are, however, several notable
exceptions to this progress. First, disk seek times have remained high
enough that roughly 100 KB can be read sequentially in the same time it
takes to perform one disk seek. Second, operating system robustness still
leaves much to be desired.
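As a rough back-of-the-envelope check (assuming a seek time of about 10 ms
and a sequential transfer rate on the order of 10 MB/s; the transfer rate
is an assumed typical figure, not a measurement of our own hardware):

    $10\,\mathrm{ms} \times 10\,\mathrm{MB/s} \approx 100\,\mathrm{KB}$

that is, a single seek costs about as much time as reading 100 KB sequentially.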
In designing Google, we have considered both the rate of growth of the
WWW and technological changes. Google is designed to be scalable to extremely
large data sets. It makes efficient use of storage space to store the index.
Google also stores all of the actual documents it crawls in compressed
form to allow it to highlight search matches, and support other research
such as data mining [dynamic data mining reference].
1.3 Quality of Search
Aside from performance, the growing number of web documents introduces
quality of search issues. In 1994, some people believed that a complete
search index would make it possible to find anything easily. According
to [?], ``The best navigation service should make it easy to find almost
anything on the Web (once all the data is entered).'' However, the WWW
of 1997 is quite different. Anyone who has used a search engine recently
can readily testify that the completeness of the index is not the only
factor in the quality of search results. ``Junk results'' often wash out any
results that a user is interested in. In fact, as of November 1997, only
one of the top four commercial search engines finds itself (returns its
own search page in response to its name in the top ten results). There
are no clear solutions to this problem. However, there is some cause for
optimism in the use of more hypertextual information rather than relying
only on the text of documents, in particular the use of link structure [?]
and link text. Google makes use of both link structure and anchor text (see
Sections ?? and ??).
<expand
Aside from tremendous growth, the WWW has also become increasingly commercial
over time. In 1993, 1.5% of web servers were on .com domains. This number
grew to over 60% in 1997. At the same time, search engines have migrated
from the academic domain [?, ?] to the commercial [?, ?, ?, ?].
1.4 Advertising and Mixed Motives
Currently, the predominant business model for commercial search engines
is advertising. The goals of the advertising business model do not always
correspond to providing quality search to users. For example, in our prototype
search engine, the top result for the query "cellular phone" is "The Effect of Cellular
Phone Use Upon Driver Attention", a study which explains in great detail
the distractions and risk associated with conversing on a cell phone while
driving. This search result came up first because of its high importance
as judged by the PageRank algorithm, an approximation of citation importance
on the web [Page, 98]. It is clear that a search engine which was taking
money for showing cellular phone ads would have difficulty justifying the
page that our system returned to its paying advertisers. For this type
of reason and historical experience with other media [Bagdikian 83], we
expect that advertising funded search engines will be inherently biased
towards the advertisers and away from the needs of the consumers. Since
it is very difficult even for experts to evaluate search engines, search
engine bias is particularly insidious. A good example was [verity’s search
engine], which was reported to be selling companies the right to be listed
at the top of the search results for particular queries. This type of bias
is much more insidious than advertising, because it is not clear who "deserves"
to be there, and who is willing to pay money to be listed. This business
model resulted in an uproar, and XXXXXXX has ceased to be a viable search
engine. But less blatant biases are likely to be tolerated by the market.
For example, a search engine could add a small factor to search results
from "friendly" companies, and subtract a factor from results from competitors.
This type of bias is very difficult to detect but could still have a significant
effect on the market. Furthermore, advertising income often provides an
incentive to provide poor quality search results. For example, we noticed
a major search engine would not return a large airline’s home page when
the airline’s name was given as a query. It so happened that the airline
had placed an expensive ad, linked to the query that was its name. A better
search engine would not have required this ad, which could have resulted
in the loss of the revenue from the airline. In general, it could be argued
from the consumer point of view that the better the search engine is, the
fewer advertisements will be needed for the consumer to find what they
want. This of course erodes the advertising supported business model of
the existing search engines. However, there will always be money from advertisers
who want a customer to switch products, or have something that is genuinely
new. But we believe the issue of advertising causes enough mixed incentives
that it is crucial to have a competitive search engine that is transparent
and in the academic realm. Currently most search engine development
has gone on at companies with little publication of technical details.
With Google, our goal is to push more development and understanding into
the academic realm.
2. System Features
The Google search engine has several features that help it produce better
quality results. First, it makes use of the link structure of the Web to calculate
a quality ranking for each web page. This ranking is called PageRank [?].
Second, Google treats link text specially for search purposes.
2.1 PageRank
The PageRank of a web page is a rough measure of the page's importance.
Roughly speaking, a page is important if there are many pages that point
to it or if the pages that point to it are important. In order to calculate
PageRank, the entire link structure of the Web (or the portion that has
been crawled) is used.
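The precise formulation and its properties are given in [Page, 98]. As a
minimal sketch only, the following power-iteration style computation over a
toy link graph illustrates the intuition that importance flows along links.
The damping factor of 0.85, the iteration count, and the toy graph are
illustrative assumptions, not parameters of our system, and dangling pages
are not handled.

    # Minimal PageRank sketch over a toy link graph (illustration only).
    # links maps each page to the list of pages it points to.
    links = {
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
    }

    damping = 0.85          # assumed damping factor, not a production setting
    rank = {p: 1.0 for p in links}

    for _ in range(50):     # fixed number of iterations, for simplicity
        new_rank = {}
        for page in links:
            # Sum contributions from every page that links to this one,
            # each divided by the number of outgoing links on that page.
            incoming = sum(rank[q] / len(links[q])
                           for q in links if page in links[q])
            new_rank[page] = (1 - damping) + damping * incoming
        rank = new_rank

    print(rank)   # pages pointed to by important pages end up with high rank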
2.2 Anchor Text
The text of links is treated in a special way in our search engine. Normally,
the text of a link gets associated with the page that the link is on. Instead,
we associate it with both the page the link is on and the page the link
points to. This has several advantages. First, anchors often provide more
accurate descriptions of web pages than the pages themselves (see Section
??). Second, anchors may exist for documents which cannot be indexed by
a text-based search engine, such as images, programs, and databases. Third,
this makes it possible to return web pages which have not actually been
crawled.
This idea was implemented in the World Wide Web Worm [refXXXX] especially
for the last two reasons. We use it mostly for the first reason, which is
that anchor text can help provide better quality results. Using anchor
text efficiently is technically difficult because of the large amounts
of data which must be moved around. In our current crawl of 26 million
pages, we had over 259 million anchors which we indexed.
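As a minimal sketch (not our actual pipeline), the idea is simply that each
anchor is indexed twice, once with the page the link is on and once with the
page it points to; the dictionary-based index and the docIDs below are
illustrative stand-ins for the real data structures.

    from collections import defaultdict

    # anchor_index maps a docID to the list of anchor phrases associated with it.
    anchor_index = defaultdict(list)

    def index_anchor(source_doc, target_doc, anchor_text):
        """Associate anchor text with both ends of the link (sketch only)."""
        anchor_index[source_doc].append(anchor_text)   # conventional behavior
        anchor_index[target_doc].append(anchor_text)   # the extra association

    # Example: a page we never crawled can still be found by its anchor text.
    index_anchor(source_doc=17, target_doc=42, anchor_text="Stanford home page")
    print(anchor_index[42])   # ['Stanford home page']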
2.3 WebBase
<Reference mining paper
3 Related Work
<Advertising limits resources per query
<Anchors
4 System Overview
A web search engine must perform several major functions: crawling, indexing,
and searching. In this section we describe at a high level how each of
these processes happens in Google. More detailed descriptions are in following
sections.
Crawling is the most fragile task since it involves interacting with
hundreds of thousands of web servers and various name servers which are
all beyond the control of the system. Polite robot policies mandate that
no server should be visited unreasonably often and that the robots exclusion
protocol [see backrub FAQ page for ref XXXXX] should be heeded. Any errors
in crawling invariably lead to many angry emails from webmasters. In fact,
even polite behavior leads to some angry or confused emails from webmasters
(see Section [?]).
In Google, the crawling is done by several distributed crawlers. There
is a URLserver that sends lists of URLs to be fetched to the crawlers.
The web pages that are fetched are then sent to the storeserver. The storeserver
then compresses and stores the web pages into a repository.
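A highly simplified, single-process sketch of this pipeline follows. The
real crawlers are distributed and keep many connections open at once; the
function names, the queue, and the in-memory repository here are illustrative
assumptions, not the actual interfaces.

    import zlib
    from urllib.request import urlopen

    def urlserver(url_queue):
        """Hand out URLs to be fetched (sketch only)."""
        while url_queue:
            yield url_queue.pop(0)

    def crawler(url):
        """Fetch one page; real crawlers fetch many pages concurrently."""
        with urlopen(url) as response:
            return response.read()

    def storeserver(repository, url, page):
        """Compress the fetched page and append it to the repository."""
        repository.append((url, zlib.compress(page)))

    repository = []
    for url in urlserver(["http://example.com/"]):
        storeserver(repository, url, crawler(url))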
Every web page has an associated ID number called a docID which is assigned
whenever a new URL is parsed out of a web page. The indexing function
is performed by the indexer and the sorter. The indexer performs a number
of functions. It reads the repository, uncompresses the documents, and
parses them. Each document is converted into a set of word occurrences
called hits. The hits record the word, position in the document, an approximation
of font size, and capitalization. The indexer distributes these hits into
a set of ``barrels''.
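A sketch of what a hit record and the barrel distribution might look like.
The exact packed binary encoding used by the indexer is not shown; the field
names and the number of barrels are illustrative assumptions.

    from collections import namedtuple

    # One word occurrence: the wordID, its position in the document,
    # an approximate font size, and a capitalization flag.
    Hit = namedtuple("Hit", ["word_id", "position", "font_size", "capitalized"])

    NUM_BARRELS = 64   # illustrative; not the number used in our system

    def barrel_for(word_id):
        """Distribute hits across barrels by a simple modulo scheme."""
        return word_id % NUM_BARRELS

    barrels = {b: [] for b in range(NUM_BARRELS)}
    hit = Hit(word_id=123456, position=7, font_size=3, capitalized=True)
    barrels[barrel_for(hit.word_id)].append((42, hit))   # (docID, hit) pair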
The indexer performs another important function. It parses out all the
links in every web page and stores important information about them in
an anchors file. This file contains enough information to determine where
each link points from and to, as well as the text of the link. This file
is later read by the urlresolver, which converts relative URLs into absolute
URLs and in turn into docIDs. The urlresolver also generates a part of the forward
index for the sorter (see Section ??).
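A sketch of the relative-to-absolute conversion followed by a docID lookup.
The dictionary used for the docID lookup here is a placeholder; the real
system uses the checksum file described under the document index.

    from urllib.parse import urljoin

    # Placeholder mapping; the real resolver uses the sorted checksum file.
    url_to_docid = {"http://example.com/a.html": 7}
    next_docid = 8

    def resolve(base_url, relative_url):
        """Turn a relative link into an absolute URL, then into a docID."""
        global next_docid
        absolute = urljoin(base_url, relative_url)
        if absolute not in url_to_docid:      # assign a docID on first sight
            url_to_docid[absolute] = next_docid
            next_docid += 1
        return url_to_docid[absolute]

    print(resolve("http://example.com/index.html", "a.html"))   # 7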
The sorter takes the forward index, which is sorted by docID (this
is a simplification, see Section ??), and resorts it by wordID to generate
the inverted index.
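Conceptually, the sorter performs the following transformation (an in-memory
sketch; the real sorter works on barrels far too large to fit in memory, and
the hit payloads here are placeholder strings).

    from collections import defaultdict

    # Forward index: docID -> list of (wordID, hit) pairs.
    forward_index = {
        1: [(10, "hit-a"), (20, "hit-b")],
        2: [(10, "hit-c")],
    }

    # Inverted index: wordID -> list of (docID, hit) pairs.
    inverted_index = defaultdict(list)
    for doc_id, postings in forward_index.items():
        for word_id, hit in postings:
            inverted_index[word_id].append((doc_id, hit))

    print(inverted_index[10])   # [(1, 'hit-a'), (2, 'hit-c')]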
The searcher is run by a web server and uses the inverted index to answer
queries. It takes advantage of a number of features.
5 Major Data Structures
Google is composed of a number of important data structures. These
data structures are optimized so that a large document collection can be
crawled, indexed, and searched with little cost. They are designed
to scale well with data and make use of technological improvements.
An important consideration is that a disk seek still requires
about 10 ms to complete. Therefore, it takes three days to perform
a seek for every document in our current 25 million page collection.
This is not completely unreasonable and further speedup is possible through
disk parallelism. However, seeks are a major bottleneck for system
performance and there is no clear evidence that seek performance will improve
as rapidly as other parameters. Therefore, Google is designed not to
perform a disk seek for every document.
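The three-day figure is simple arithmetic:

    $25{,}000{,}000\ \mathrm{seeks} \times 10\,\mathrm{ms/seek} = 250{,}000\,\mathrm{s} \approx 2.9\ \mathrm{days}$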
5.1 Repository
The repository contains the full HTML of every web page. The repository is
a very simple data structure: documents are stored sequentially in compressed
form, each prefixed by its docID and URL. The pages are compressed using
zlib [REF], a Lempel-Ziv compressor.
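A minimal sketch of the repository format. The actual on-disk record layout
(length fields, sync points, and so on) is not shown; the header format below
is an illustrative assumption that simply conveys the prefix-then-compressed-HTML
idea.

    import zlib

    def append_document(repo_file, doc_id, url, html):
        """Append one record: docID, URL, then the zlib-compressed HTML."""
        compressed = zlib.compress(html.encode("utf-8"))
        header = f"{doc_id}\t{url}\t{len(compressed)}\n".encode("utf-8")
        repo_file.write(header + compressed)

    with open("repository.dat", "ab") as repo:
        append_document(repo, 42, "http://example.com/", "<html>...</html>")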
5.2 BigFiles
Many of Google's data structures are large. They are often larger
than 4 gigabytes (the maximum file size addressable by a 32 bit integer)
and they must be spread among drives. We considered using a database
but chose to implement our own bigfile data structure for portability,
efficiency, and fine-grained control. Currently bigfiles are virtual
files spanning multiple file systems and are addressable by 64 bit integers.
The allocation among multiple file systems is handled automatically.
In the future, we hope to extend bigfiles to inherently support distributed
processing.
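A sketch of the addressing idea only: a 64-bit offset is split into a chunk
index and a local offset. The 2 GB chunk size is an illustrative assumption,
the underlying files are assumed to already exist, and the real implementation
also manages allocation, file descriptors, and reads that span chunks.

    CHUNK_SIZE = 2**31   # assumed per-file-system chunk size (2 GB)

    class BigFile:
        """Present several ordinary files as one file addressed by a 64-bit offset."""

        def __init__(self, paths):
            self.files = [open(p, "r+b") for p in paths]   # pre-allocated chunks

        def read(self, offset, length):
            index, local = divmod(offset, CHUNK_SIZE)   # which chunk, where in it
            f = self.files[index]
            f.seek(local)
            return f.read(length)   # (reads spanning a chunk boundary not handled)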
5.3 Url List
The url list is a simple file containing just the URL of each document that
has been seen but not yet crawled; the document index points into it (see
below).
5.4 Document Index
The document index keeps information about each document. It is a
fixed-width ISAM (Indexed Sequential Access Method) index, ordered by docID. The information stored
in each entry includes the current document status (reference seen, sent
to urlserver, crawled, ...), a pointer into the repository, a document
checksum, and various statistics (number of words, number of question marks,
last crawl date, ...). If the document has been crawled, it also
contains a pointer into a variable width file called docinfo which contains
its URL and title. Otherwise the pointer points into the url list,
which contains just the URL.
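A sketch of what one fixed-width entry might contain. The field names and
types below are illustrative; the actual entry is a packed binary record.

    from dataclasses import dataclass

    @dataclass
    class DocEntry:
        doc_id: int
        status: int             # e.g. reference seen / sent to urlserver / crawled
        repository_offset: int  # pointer into the repository
        checksum: int           # checksum of the document contents
        word_count: int         # one of several per-document statistics
        info_offset: int        # pointer into docinfo (crawled) or url list (not crawled)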
Additionally, there is a file which is used to convert URLs into docIDs.
It is a list of URL checksums with their corresponding docIDs and is sorted
by checksum. In order to find the docID of a particular URL, the
URL's checksum is computed and a binary search is performed on the checksums
file to find its docID. URLs may be converted into docIDs in batch
by doing a merge with this file. This is the technique the URLresolver
uses to turn URLs into docIDs. This batch mode of update is crucial
because otherwise we would have to perform one seek for every link, which would take
roughly one month for our 25 million page, 250XXXXXXXX million link dataset.
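A sketch of the single-URL lookup path, using a toy checksum and an in-memory
list standing in for the on-disk file of fixed-width checksum/docID pairs; the
batch path is a sequential merge rather than repeated binary searches.

    import bisect
    import zlib

    # Sorted (url_checksum, docID) pairs, standing in for the on-disk file.
    checksums = [(101, 3), (257, 9), (903, 1)]
    keys = [c for c, _ in checksums]

    def url_to_docid(url):
        """Compute the URL's checksum and binary search the sorted checksum list."""
        checksum = zlib.crc32(url.encode("utf-8")) % 1000   # toy checksum for the sketch
        i = bisect.bisect_left(keys, checksum)
        if i < len(keys) and keys[i] == checksum:
            return checksums[i][1]
        return None   # URL not seen yet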
5.5 Lexicon
The lexicon has several different forms. One important change from
earlier systems [WAIS] is that the lexicon can fit in memory for a reasonable
price. In the current implementation we can keep the lexicon in memory
on a machine with only 256 MB of main memory. The current lexicon
contains 14 million words (though some very rare words were not added to
the lexicon). It is implemented in two parts -- a list of the words
(concatenated together but separated by nulls) and a hash table of pointers.
For various functions, the list of words has some auxiliary information.
<what is this auxiliary information?
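A sketch of the two-part layout: in-memory Python strings and a dictionary
stand in for the real byte arrays and hash table, and the pointer
representation is illustrative.

    # Part 1: all words concatenated, separated by null characters.
    words = ["the", "quick", "fox"]
    word_list = "\0".join(words) + "\0"

    # Part 2: a hash table mapping each word to its byte offset in the list.
    pointers = {}
    offset = 0
    for w in words:
        pointers[w] = offset
        offset += len(w) + 1          # +1 for the null separator

    def lookup(word):
        """Return the word's offset in the concatenated list, or None."""
        return pointers.get(word)

    print(lookup("quick"))   # 4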
5.6 Forward Buckets
5.7 Reverse Buckets
6 Crawling the Web
6.1 Social Issues
It turns out that running a crawler which connects to more than half a
million servers and generates tens of millions of log entries produces
a fair amount of email and phone calls. Because of the vast number of people
coming on line, there are always those users who do not know what a crawler
is, because this is the first one they have seen. Almost daily, we receive
an email something like, "Wow, you looked at a lot of pages from my web
site. How did you like it?" There are also some people who do not know
about the robots exclusion protocol, and think their page should be protected
from indexing by a statement like, "This page is copyrighted and shouldn’t
be indexed", which needless to say is difficult for web crawlers to understand.
Also, because of the huge amount of data involved, unexpected things will
happen. For example, our system was trying to crawl an online game. This
resulted in lots of garbage messages in the middle of their game! It turns
out this was an easy problem to fix. But this problem had not come up until
we had downloaded tens of millions of pages. Because of the immense variation
in web pages and servers, it is virtually impossible to test a crawler
without running it on a large part of the Internet. Invariably, there are
hundreds of obscure problems which may only occur on one page on the whole
web and cause the crawler to crash, or worse, unpredictable or incorrect
behavior.
Crawlers make huge demands on shared system resources like the network.
However, our greatest local problem was name service lookups, which
convert a name like www.stanford.edu into its unique IP address, 171.64.14.251.
To avoid being slowed down by these lookups, our systems cache the
results internally. We also have to run our own name server, since none of
the other name servers on campus can deal with the additional load gracefully.
There are always a lot of social interactions with local system administrators
when running a crawler.
Since such large numbers of people are looking at their web logs every
day, if even one out of a million of them contacts us we will be drowning
in email. As a result, systems which access large parts of the Internet
need to be designed to cause as few problems as possible. Since large complex
systems such as crawlers will invariably cause problems, significant
resources need to be devoted to reading the email and dealing with these
problems as they come up.
6.2 Resolution
6.3 Parsing
7 Links
8 Indexing the Web
9 Inverting the Index
10 Searching
<PageRank, font size
<repository allows returning matches
11 Scalability
<text is getting cheaper and cheaper to index -- are we really going
toward distributed systems
12 Results, Stats
13 Conclusion
Mention data mining thing…
Repository to allow flexibility
Devil is in the details…