Traveling the Electronic Highway: The Internet

Maps, Encounters, Directions.

Master Copy on Earth. Draft 27 Nov 1993; rev. 29 Nov, 24 Dec, 18 Jun 1994, Dec 1997, 9 and 13 Jan, 28 Apr 1998. Partially updated 7 Jan 2000.

©Gio Wiederhold and CS99I students, 1998

Previous chapter: Introduction - Next chapter: Ubiquitous Computing

[© New Yorker Magazine, March 1993] 'On the Internet nobody knows that you are a dog.'


The Internet provides flexible communication among computers that are connected to it and obey its conventions. The types of services provided are listed in Section Functions. As of the end of 1993 more than <10000> computers had direct access to it, and an even larger number can access the Internet indirectly, since their networks are connected via gateway computers to the Internet. The lack of central control, combined with standards for connecting to it, permits the information highway system to grow by accretion, although some of the paths to get there may be tortuous and muddy [Gleick94]. Section Alternatives describes some alternatives used by other networks, and the Tables at the end of the chapter provide a summary in tabular form.

The Internet is not the only contender for providing information highways. Commercial suppliers, such as the telephone companies and cable companies, have a role to play as well. The new technologies, as described in Section Technology, are available to all comers, and are needed to deal with the expected demand. New charging methods will have to be established as well, since long-distance * toll charges oriented towards continuous voice hookups are unaffordable for the service businesses growing along the information highways. Service companies (ISPs) provide local telephone access points, so that most subscribers can benefit from the unlimited local calling tariffs available in the United States; in many other countries local calls are also billed by length of call. In 1993, in addition to about 15 million Internet users, there were more than 3 million subscribers to the various commercial services, often experimenting with new offerings. This number of paying subscribers was about equal to the circulation of Newsweek magazine, but is increasing rapidly. The patterns of use for applications available on networks are not stable. Much of the initial motivation for establishing networks was to give scientists access to supercomputers, although the scientists soon found other uses. It is hard to predict what commercial usage patterns will emerge. Businesses that guess well will have a chance to grow rapidly; others are likely to wither or fail.

The Internet, the toll roads of the future, and their regional components serve long-distance traffic. They are all Wide Area Networks (WANs). Once the traffic reaches the local area, either the local telephone network will be used, to reach individual homes, or a corporate switchboard can provide a high-speed gateway for corporate users. Since much corporate traffic stays local, there is often a powerful local area network (LAN) serving local participants. Local traffic can move rapidly within the LAN, and moves at somewhat lower speed to or from the outside world through the gateway to the WAN. We present a typical LAN technology, the Ethernet, in Section Ethernet.


The Internet is a direct successor of the ARPA-net. ARPA, the Advanced Research Projects Agency of the Department of Defense (DARPA from 1972 to 1993), had been funding computers and computer projects since shortly after its inception in 1958. These computers represented costly resources (a scientific computer with a 200K character memory cost $<2M> in 1965). Many different types of computers were represented. Eventually, time-sharing operating systems were established on many of these computers, permitting multiple local users to program and compute at the same time. The various computers had different capabilities and ran different programs. In order to improve the use of such costly resources, ARPA, starting in 1969, sponsored development of a network that would permit remote users at other ARPA-sponsored sites to participate in remote projects [Dennett85].

Some of the needed technology had been developed for ARPA in another context. Providing high-speed, reliable communication for computers via conventional cables was difficult among the islands of Hawaii. Transmission via a satellite was a natural alternative for the planned * Aloha-net in 1962, but required new technology. Since a satellite signal is broadcast through the common ether, it is heard by all receivers, although it is intended for only one. A solution, devised by Norman Abramson, first at Stanford University, later (1963?) at the University of Hawaii, was to put an * address, identifying the intended receiver, in front of every message. The receivers were programmed to ignore messages whose addresses indicated that they were not intended for them. Each Internet node maintains a table with good, currently useful forwarding addresses, as well as the addresses of one or more name servers. Name servers are designated nodes that will look up other addresses.

When many senders share a common path, a message going to one receiving party blocks the path for others trying to communicate. To avoid traffic delays due to interference caused by long messages, the Aloha-net broke long messages into packets of modest size. Packets from distinct messages travel interleaved along the paths, so that a short message can get through, although a long message will take longer to complete its trip to the destination. The common transmission path is * time-shared. This second technique, packet-based transmission, was also adopted for the Arpa-net, so that the highway could be shared fairly.

The major highways for the Arpa-net were high-speed telephone lines leased from AT&T. Not all sites were directly connected, so intermediate nodes have to forward messages. At each node the address of an incoming packet is inspected, and if it is destined for some other node, it is forwarded. The use of addresses in every packet simplifies interleaving and forwarding of messages to * nodes that are not directly connected. To keep the network secure from disruption or attack, an important military concern, it was designed so that there would be no centralized control. An unintentional byproduct of this constraint was that the system could grow easily, since no central node had to be periodically upgraded. We will return to this notion from time to time:

Distribution and autonomy of modules provides scalability.
! Refer to this in Mediator chapter, and in UBI HPCC chapter>

By 1972 a total of 37 sites were nodes on the ARPA-net. Fig. Arpanet shows the extent of the network in 19?yy?. To assure reliability of message transmission, all nodes were connected via more than one path. Extension of the net meant establishing new nodes and connecting them to one or more existing nodes. The Internet, which followed the ARPA-net as it moved beyond its defense research orientation, adopted local autonomy as a guiding principle, since by 1985 central control could no longer be envisaged. The Internet also absorbed several research networks that were established by other agencies for their researchers, such as NSF-net, connecting NSF supercomputer sites; NASA-net, connecting major NASA facilities; and the network of the Department of Energy (DoE), connecting its large computer systems. These networks, over time, all accepted the same protocols for addressing and transmission control, making their unification easier.

Figure Arpanet.
The nodes and connections of the ARPA-net in 1974 [Dennett85].

While remote use of computer hardware and software was the original motivation, transfer of files had to be supported as well. Soon a message-sending protocol was provided that permitted workers to exchange information about their projects. This latter service led to * electronic mail (email), now one of the prime motivations for participation in the Internet.

To manage the development of the ARPA-net, and to assure consistency among its participants, ARPA proposed a Network Information Center (NIC), and such a center was established in !196x at SRI International, a research organization in Menlo Park, CA. SRI had its roots as a government research laboratory attached to Stanford University. The NIC logs and distributes * Requests For Comments (RFCs) to initiate discussions on concepts and technology to be used for the ARPA-net, and so engaged an ever-widening community in its development. Some RFCs eventually take on the weight of standards, and will also be followed by other networks; for instance, the TCP and IP protocols discussed in Section INTRO.Technology. Currently, the non-military NIC function is located at Network Solutions Inc. in Herndon, VA. As of December 1993, over 1500 RFCs had been issued. Of course, many older ones have become obsolete, but in total they form an impressive record of cooperative development under non-interfering government sponsorship. This model of cooperation remains a guideline for future directions, as the Internet, the successor of the ARPA-net, becomes the infrastructure for the Electronic Highways.

The success of the ARPA-net created a desire among workers at other, non-ARPA-sponsored institutions to participate. Not being on the ARPA-net, and having to rely on paper and surface mail to communicate with one's colleagues, became a serious disadvantage [Lederberg78]. Today, researchers in many disciplines depend on the Internet to receive early notifications of work by others, share in demonstrations without traveling, or solve software problems cooperatively, rather than by leaving messages on telephone answering machines.

As the number of participants increased, the Internet needed some technical reorganizations. One major change decentralized the management of node addressing, further enhancing the scalability of the Internet. Since that time the Internet has operated as a true information infrastructure utility, available to its users every day and every hour of the day, unless a local connection is broken.

Initially, the development of the Internet was primarily sponsored by the National Science Foundation (NSF). NSF funding paid for its coast-to-coast backbone, and regional groups pay for the networks that emanate from this backbone, as shown in Fig. Internet. In 1991 NSF received funding to upgrade the backbone, allowing use by a broader community. The upgrade created the National Research and Education Network (NREN). Regional networks include BARRNet for the San Francisco Bay Area, LosNettos for the Los Angeles basin, and one for the Boston region. (The name NERDnet, for the New England Research and Development network, was rejected.) The regional networks support themselves by charging their local nodes, but since the distances are modest, those costs are low as well. This net now has heavy commercial use as well, and in 1996 a new research effort, the Next Generation Internet (NGI), was initiated to provide independent, and much higher performing, services to universities and research institutions.
Figure Internet: The configuration of the Internet as of ?date?. The lines indicate the backbone, and the ovals represent regional networks (!Missing! get from MedicalInformatics file).

Commercial operation of the Internet

The official birthdate of the commercial Internet is 1993, when NSF contracted with Network Solutions to be the registrar of domain names terminating in .com, .net, and .edu. Traffic for the .com and .net sites must support its share of the backbone. (update with information from class, Winter 2000.)

In March 2000 Network Solutions was purchased by Verisign for $21B, the highest price ever paid for an Internet company.


The Internet supports three major functions: remote computer use, file transfer, and email, as well as combinations of them, as used for web access. These functions can be further assembled into higher-level services, as discussed in other chapters of this book.


If a user, or an application program, needs the services of a remote computer, a remote login (* rlogin) command must be issued to that computer. A user name and a * password will have to be provided for authentication, so that the remote computer system can verify that the user is authorized to use some or all of its services. Several computers on the Internet, for instance, provide services to solve mathematical equations. If an application on some other computer needs to solve equations beyond its local capabilities, it can invoke the remote system. !!example

Such a service has to be explored before making it part of one's system. The constructor of such a complex application will first use 'rlogin' to browse through the facilities, execute some trial runs, and inspect the results. While doing that, the user will also send queries by email to the remote experts, and import and inspect some documentation files using file transfer facilities.

While remote use by a human and remote use by a computer are conceptually similar, effective interaction can require markedly different styles. A * 'user-friendly' system will provide a nice screen of choices, permit selection with a pointing device such as a mouse, and respond with nice sentences and pictures. A * 'machine-friendly' interface will accept commands in a formally structured and consistent language and respond with codes and sentences or tables that are easy to parse. Fig. Friendly provides examples for the same task. The Structured Query Language (SQL) was standardized in 199 to provide access to databases; the Knowledge Query and Manipulation Language (KQML) is designed to access or update a variety of data types in complex networks.

Figure Friendly. Types of Friendliness: a) a graphical user-friendly interface (), b) a language interface which tries to combine user- and machine-friendliness (SQL), c) a machine-friendly interface (KQML).

As systems are enhanced to become more user-friendly they can easily lose * machine-friendliness (and vice versa), so that eventually two interfaces may be required: one for people and one for other computer applications. The complexity of serving users with ever-increasing demands and means of interaction can be daunting. Examples of casual querying are called browsing. If the machine-friendly layer is sufficiently powerful, it can serve as a base layer for a variety of user-friendly interfaces. Having a single machine-friendly interface greatly simplifies systems that are normally accessed remotely. The user-friendly aspect is then provided locally, and is adjustable to local needs and facilities. We discuss the architectural implications in MEDIATORS.


If data stored at a remote site are needed locally, they can be obtained using the file transfer protocol (ftp). File transfer provides a dual to remote computation: instead of doing the work on remote computer resources, it enables local use of remote data resources. The choice of tradeoff can be driven by economics, convenience, or security.

If you are permitted to * log on to both the sending and the receiving machine, a file transfer can be set up with few constraints. An rlogin on the remote system gets you going on both, and the ftp program lets you send or receive data from either machine. If the machines are of similar design, no conversion needs to take place, and if you maintain similar file structures, no renaming is needed. But where differences exist, the ftp protocols permit a wide range of conversions. The variety of computer designs has become smaller, so awkward conversions are rarely needed today.

Some files are provided to other sites as a courtesy. The convention is then that you logon to the remote site with the pseudonym 'anonymous', testing your spelling ability. As password you may need to give the string 'guest', or you may be requested to give your email address, so that a simple record of accessors can be kept. Files made available for anonymous login can be easily further distributed; there is currently no technology that can restrict their further use.

!!movie example?

Common operating functions are also provided. For instance, * ftp sites will respond to a 'dir' command and provide a listing of the current directory, no matter what the local directory listing command is (it is 'ls' on UNIX systems), to help the user verify the spelling of the files being accessed.

A file can contain data or programs. If a program's representation is compatible with local capabilities, it can be executed locally to transform local data. Programs that follow the Java representation are intended to be compatible with most browsers. Whenever a program is imported from a remote site there is some danger that it will inadvertently or intentionally cause local damage. The Java representation is intended to minimize that possibility.

The ability of Internet participants to rapidly send and respond to messages (email) is a major motivation for its use. Although it was not part of the original motivation for the ARPAnet, some ad hoc means for internode communication via FTP soon developed. Early programs for transmitting (SNDMSG) and reading messages shipped over the ARPAnet (READMAIL) were available in 1971. Improved protocols and service programs appeared. Email was formalized in August 1982 with the Simple Mail Transfer Protocol [RFC 821, SMTP]. Email can also be sent to or from users on quite simple networks (not requiring the remote login and execution processes that characterize TELNET and FTP), greatly increasing the audience for email. The summary table lists a number of such networks. The three basic functions are: send a message, read a message, and file a message, providing an archival record. Being able to automatically file all outgoing mail, as well as mail received, provides an audit trail that is nearly impossible to maintain with written correspondence, at a much lower cost in personnel and paper.

To send a message you must have the destination's address, which consists of a * user or * account name, followed by a regular Internet address. Many institutions follow regular conventions for devising email names, perhaps the last name, or the first initial followed by the last name. If the account name differs, a user may have * aliases conforming to that convention. Aliases can also denote one's role. You can mail one message to multiple receivers, or list people to receive 'carbon copies' (cc). After all addresses a subject field is given, helping the reader decide what to read first. Finally, the text of the message follows. Often the text is taken from a pre-existing file, and is simply attached. If you know that the receiver has the appropriate software and hardware, you can also include formatted documents, spreadsheets, graphics, and even voice. If you are not sure, it is best to simply send plain ASCII text, and count on the value of the contents rather than the pretty form to make your point. Fancy formats will be discussed in Chapter ENTEDU under 'multimedia'. An example of an email message ready to be sent is given in Fig. Message.

Figure Message. A message ready to be shipped over email.
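In today's terms, such a message can also be composed programmatically. A minimal sketch using Python's standard email library follows; all addresses and the subject are invented for illustration.

```python
from email.message import EmailMessage

# Build a message with the fields described above: sender and receiver
# addresses, a carbon copy, a subject line, and plain ASCII body text.
msg = EmailMessage()
msg["From"] = "jdoe@cs.example.edu"        # hypothetical addresses
msg["To"] = "colleague@lab.example.org"
msg["Cc"] = "boss@cs.example.edu"
msg["Subject"] = "Draft of the network chapter"
msg.set_content("Attached below is the plain ASCII text of the draft.\n")

# The string form is what actually travels over SMTP.
print(msg.as_string())
```

The printed string shows the header fields first, a blank line, and then the body, which is exactly the on-the-wire layout that SMTP expects.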

Messages received for a user are kept in a mailbox queue for perusal, i.e., reading, responding, filing, or discarding. To help in referencing them they are numbered sequentially. New messages that arrive are marked unseen, so they can be rapidly located. Email that has been read can be deleted or archived in storage files to remain rapidly accessible when and if needed. Major documents received with email can be stored separately for further work, editing, or publishing. If one is not disciplined about filing or discarding messages, one can easily become overwhelmed when the messages in the mailbox exceed some hundreds, and eventually the mailbox can become overloaded. Once one is comfortable with email, it is less work to manage email than paper mail. Because email can be read by computer programs, software tools are available to seek items of interest or to discard junkmail. The ability to process email is its prime advantage over fax or voicemail.

Secondary operations on email greatly enhance its effectiveness. You can list the headers of all messages in your mail file, or all messages not yet read. Search commands will allow you to locate all messages from your boss, or all messages with 'urgent' in the title, or all messages that refer to 'Swaziland' in the text. A message that was misdirected can be remailed to the right person. Messages that imply work you can delegate can be forwarded with a note to the person assigned. You can also forward your email to a program, such as Z-mail, which will filter out junkmail according to rules you supply, and remail what is left to a private mailbox file.
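The search operations described above can be sketched in a few lines. This toy example, with an in-memory mailbox of invented messages, shows the kind of matching a mail filter performs (the addresses and texts are hypothetical):

```python
# An in-memory mailbox: each message is a (sender, subject, body) tuple.
mailbox = [
    ("boss@cs.example.edu", "urgent: budget", "Please review the figures."),
    ("friend@far.example.org", "lunch?", "Swaziland trip photos attached."),
    ("list@net.example.com", "digest 42", "Weekly digest follows."),
]

def search(mailbox, sender=None, subject_word=None, text_word=None):
    """Return the messages matching all of the given criteria."""
    hits = []
    for frm, subject, body in mailbox:
        if sender and sender != frm:
            continue
        if subject_word and subject_word.lower() not in subject.lower():
            continue
        if text_word and text_word.lower() not in body.lower():
            continue
        hits.append((frm, subject, body))
    return hits

print(search(mailbox, subject_word="urgent"))   # messages flagged urgent
print(search(mailbox, text_word="Swaziland"))   # full-text match
```

A junkmail filter is the same idea run in reverse: messages that match unwanted criteria are discarded or remailed to a separate mailbox file.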

Most email handlers have a reply option, so that you do not have to enter the sender's address, and you may get a free heading such as 'In reply to your message of 10:14:30 am EST 23Dec1993,'. If you have to reach many colleagues, you can replace the address with the name of a file containing a mailing list, and multi-cast the message to a large number of people. The mailing list need not be yours; the responsibility for maintaining a mailing list can be delegated to one member of the group, and all members can simply use the remote mailing list name in lieu of an individual's remote address.

Since email can be sent to sites where ftp protocols fail, email is also used for file transfer. For instance, a program called Service Mail provides software protocols to send modestly-sized files securely via email. No privileges are required to mail messages or files to some destination, but email does not provide retrieval access to remote sites. Fig. INTRO.Nets indicates the types of paths that an email message may traverse.

Email is developing a unique style due to its medium. Since email is inherently rapid and economical, less user effort is spent on a message. It is wise to come rapidly to the point. Since the receiver can respond to email within minutes, you also need less explanatory material; a subsequent clarifying interaction is fast. Minor typos are permissible. Since computer screens display fewer lines than a sheet of paper, long salutations, blank space, double spacing, very long lines, indentations, and the like are best minimized. Unfortunately, some vendors naively provide email systems which attempt to mimic formats used for letters, with the result that the recipient has to toggle among multiple screens, and loses continuity.

Some common-sense *netiquette is appropriate when sending material. Broadcasting messages to large mailing lists should be done with restraint. Broadcast messages should also be brief; many recipients have storage space limitations or have to pay for storage. A brief message can inform others how to obtain a longer file from you or from a public *bulletin board.

The rapidity and seeming anonymity of email messages has encouraged a mode of thoughtless and crude communication, * flaming. Flames are rapid reactions to events or to other messages or postings on the network. They can be effective feedback, due to the speed of reaction, but thoughtless wording is not reined in by delays or personal contact, as it is in paper mail or phone interaction. It is also unwise to assume that any messages will be private, unless * encryption is used. While insults are never nice, having them redistributed throughout your circle of colleagues will not win you points, no matter how clever you are.

Bulletin boards

An effective extension of email is the * bulletin board; bulletin boards have been established for many topics. A site with people interested in some domain can set up a file containing messages that can be appended or read by anyone on the Internet. A bulletin board reduces the need for broadcasting, and avoids annoying recipients who otherwise get flooded with what is, to them, * junkmail. The cost to a bulletin-board maintainer is some storage space and periodic purging of obsolete material. Some bulletin boards are edited or refereed, adding value for their readers. In the summary table of the introductory chapter on services, several bulletin boards, selected out of the thousands that exist now, are listed.

!!Forwarding e-mail for vacation


The most popular service on the Internet is web access. It combines the basic services: messages to web sites, to initiate a request; remote computation, to process the request; and file transfer, to ship the results of the request to the requesting node. Web sites are nodes that follow protocols so that they can be accessed by standard conventions, the HyperText Transfer Protocol (HTTP), which is obeyed by the accessing programs, called browsers. The result can be data of various forms: text, images, voice, video, or programs that perform some local functions, either for dynamic display or for actual processing of local data.
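The message that a browser sends to initiate a request is, at bottom, a short piece of text. The sketch below composes a minimal HTTP/1.0 GET request as a string; the host name and path are illustrative, not real sites.

```python
def http_get_request(host, path):
    """Compose a minimal HTTP/1.0 GET request as a string.

    The request line names the desired file; a blank line ends the header.
    The web site replies with the file, which the browser then displays.
    """
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            "\r\n")                      # blank line terminates the request

request = http_get_request("www.example.edu", "/index.html")
print(request)
```

Sending this string to the web site's node and reading back the reply is all a browser's transport layer does; everything else is interpretation and display of the returned file.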

Chapter BROWSING is devoted to the services available in the web and the browser programs that provide those services to local recipients.


!example of schools from [Rheingold93] leading to Collaboratories [Lederberg78].


Communication is achieved by having communication links of adequate capacity (bandwidth) and methods to reliably get data from one node to another. In this section we focus on the methods; we deal with capacity problems in Chap. V.T, where we also encounter the applications that demand high bandwidth.

The transmission methods used by the Internet are codified as the Transmission Control Protocol/Internet Protocol (TCP/IP), a collection of conventions which have been periodically updated and standardized. These protocols specify how computers are to be addressed, how basic messages are to be formatted, and how to convert data among incompatible computers and operating systems.


Addressing refers to the way computer nodes are named. Addresses have to be understood by all participating systems and have to reflect local changes rapidly. The current Internet addressing scheme achieves these conflicting goals by using two concepts: * hierarchical management and * distributed name servers.

The format of an Internet address, as specified by the current Internet Protocol (IP), consists of four parts, separated by a period (.) [RFC 791, Sept. 1981]. Internally the parts are represented by numbers; for instance, the address of the current NIC denotes, in turn, the regional network, the institution, a subnetwork there, and finally an individual computer node. However, such codes are hard to remember and easy to mistype. To obviate having every user keep lists of these numbers and associate them with meaningful names, Internet sites designate one or several of their local machines as * name servers, and those name servers maintain tables that associate node names with those numbers. For instance, the NIC itself can be addressed simply as RS.INTERNIC.NET. Each of the four fields can hold 256 values; the value '0' denotes "null". The name servers also keep working tables (caches) for names from remote nodes. If a request for an unknown node name is made, the name server issues a request to the server at the next higher hierarchical level. When the node name is found, the name server updates its cache with the newly found number-name combination, so that next time around the name can be resolved without higher-level assistance. The cache is also updated when an error is reported. Errors will occur when some remote node has changed its local number and name associations. To minimize the effect of local changes, many sites use * aliases. For instance, most computer science departments use the local name 'CS', say CS.STANFORD.EDU, and assign that name to whatever computer handles their local mail, even though most researchers and students use their own local workstations. Internet addresses are not case-sensitive; they can be typed in upper- or lower-case.
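The hierarchical lookup with caching described above can be modeled in a few lines. This is a toy simulation, not real name-server software; the names are taken from the text, but the numbers are invented for illustration.

```python
ROOT_TABLE = {                      # authoritative table at the top level
    "rs.internic.net": "198.41.0.6",    # illustrative number only
    "cs.stanford.edu": "171.64.64.64",  # illustrative number only
}

class NameServer:
    """A name server with a cache and a pointer to the next higher level."""
    def __init__(self, parent=None):
        self.cache = {}
        self.parent = parent

    def resolve(self, name):
        name = name.lower()              # names are not case-sensitive
        if name in self.cache:           # answer from the working table
            return self.cache[name]
        if self.parent is not None:      # miss: ask the next higher level
            number = self.parent.resolve(name)
        else:
            number = ROOT_TABLE[name]    # the top level is authoritative
        self.cache[name] = number        # remember the number-name pair
        return number

root = NameServer()
local = NameServer(parent=root)
print(local.resolve("RS.INTERNIC.NET"))  # first lookup climbs to the root
print(local.resolve("rs.internic.net"))  # second is served from the cache
```

After the first lookup, both the local server and the root hold the number-name pair in their caches, so later requests are resolved without higher-level assistance, which is exactly the scalability property the text describes.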

The symbolic postfixes indicate the class of institution for a node:

More postfixes are being proposed now: .ARTS, .FIRM, .INFO, .SHOP, .REC, .PER and .WEB.

An example of a distinct network was bitnet, a network connecting mainly IBM mainframe users. A typical address to go to a bitnet site is . If the secondary address could be confusing, a % can be used to indicate a secondary @ breakpoint, to be parsed at the subnetwork, such as JDoe%minimachine&

There is some concern that the addresses provided by the Internet naming scheme will not suffice in the future. Although the potential 255*255*255*255 = 4,228,250,625 nodes seems more than adequate, not all fields are fully utilized now: a small company may not need many local node names, yet a whole field's worth of addresses is set aside for it.
There is a new proposal for variable fields.
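The arithmetic above can be checked directly, and the dotted notation can be packed into the single 32-bit number that routers actually compare. The example address below is illustrative only.

```python
# Each of the four address fields is one 8-bit byte, so the full space is
# 2^32 addresses; the text counts only the 255 non-null values per field.
usable = 255 ** 4
total = 256 ** 4
print(usable)    # 4,228,250,625 as quoted in the text
print(total)     # 4,294,967,296 including the null values

def to_number(dotted):
    """Pack a dotted-quad address such as '36.56.0.11' into one 32-bit integer."""
    a, b, c, d = (int(part) for part in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

print(to_number("36.56.0.11"))   # the internal numeric form
```

The gap between the two totals, and the fact that a whole field may be reserved for a site with only a handful of machines, is the utilization concern that motivates the new proposals.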


The data to be transmitted are placed into * packets according to the Transmission Control Protocol (TCP). Each packet contains a header with the required destination address, the sending address, a sequence number, and some control information, as well as a modest amount of data. Packets are of variable length. The minimum length is due to the packet header, about 50 bytes, but the Internet Protocol (IP) allows packets up to 64 Kbytes long.

Small data packets do incur a high overhead, but are needed for highly interactive applications. To operate on a remote computer, only a single character may be placed in a packet, so that the remote computer can respond to each user keystroke. When the computer responds with some results, many characters can be placed into a packet. To avoid blocking of shared transmission channels, packets have a maximum size. When files are to be transferred, packets will be filled to their maximum capacity, and many packets may be needed. The receiving node then has to reassemble the file from the data in the packets. It must also process the sequence numbers, since packets may have moved over different paths and arrived out of order. The maximum size of a packet is determined by network speeds, buffering, and retransmission concerns, and tends to be less than a thousand characters. Different subnetworks on the Internet may impose different packet sizes, so that * gateway nodes also have to be able to perform packet re-assembly.
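The splitting and reassembly just described can be sketched as a simulation. This toy uses a tiny maximum payload and a simplified (sequence number, data) packet with no header fields, purely to show the mechanism:

```python
import random

MAX_DATA = 10   # tiny maximum payload, for illustration only

def packetize(message):
    """Split a message into (sequence_number, data) packets."""
    return [(seq, message[i:i + MAX_DATA])
            for seq, i in enumerate(range(0, len(message), MAX_DATA))]

def reassemble(packets):
    """Rebuild the message from packets that may arrive in any order."""
    return "".join(data for seq, data in sorted(packets))

message = "A long file shipped across the network in many small packets."
packets = packetize(message)
random.shuffle(packets)              # packets may travel different paths
assert reassemble(packets) == message
print(len(packets), "packets delivered and reassembled")
```

The sequence number is what makes the shuffle harmless: the receiver sorts on it before concatenating, which is the essence of what a receiving node or gateway does during re-assembly.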

For small data packets the * overhead is high, since much descriptive data accompanies the user's data characters. Sending single characters is frequent when humans are typing, but then the rate of packet generation is modest. Inefficiencies arise when a remote computer program simulates a human user on a terminal, but enters characters much faster than a flesh-and-bone user could. Packets are also used for high-speed video and voice transmission, as described in Sect. IX.ATM.


Today the Internet backbone is provided by commercial vendors, primarily GT&E, MCI, Sprint, and Worldcom (1998), but it is open to competitive bidding. These major providers do not charge each other for shipping messages over each other's lines, a convention referred to as peering. These peers consider that accounting would be more costly than free sharing. However, peering makes it hard for small providers to enter the backbone business. Smaller providers, regional nets, and ISPs have to pay the backbone service they connect to for long-haul services.

The costs of the backbone are borne in part by the regional networks, from the fees they collect from the ISPs. Some government support continues, and reimburses the backbone providers for services provided to the government and academic institutions.


The backbones terminate at regional centers, where subnets serve the actual Internet service providers (ISPs).


Internet service providers (ISPs) allow customers at home to dial in to the Internet. They typically levy a monthly fee for basic services.

For many businesses more services will be needed. An active web site will require a high-capacity connection to the regional subnets. Some ISPs specialize in business services. These charge more, but also provide constant monitoring of their services, to assure their availability.


The prime means for transmission are long-range optical fiber cables, local analog or digital (ISDN) telephone networks, and routers to connect them together.


Routers link the various types of networks. Routers also regularly take snapshots of the state of the network, and arrange for packets to be forwarded along the best route.


High-speed linkages are being installed by telephone companies and their specialized competitors, mainly using optical fibers. The intense competition among long-distance providers has accelerated the move to optical fiber cable, which has a much higher capacity than the traditional copper wires or radio transmission. These cables cross the country using railway and road rights-of-way. An optical cable consists of a bundle of fibers, each fiber illuminated by a semiconductor laser at the sending end, and decoded by a semiconductor photo-diode at the other end. A cable may contain 48 or 96 fibers. Each fiber can transmit data at a rate of 622 Megabits per second, or about 300 TV channels, nearly 20 times more than the traditional microwave links, whose towers march across our landscape. An important benefit of light signals is that they are unaffected by magnetic or electrical interference and are not easily tapped.

By transmitting light of a variety of colors or frequencies, the capacity of a fiber can be further increased, leading to 10 Gigabit per second capacities for a single fiber.

To exploit the capacity of optical cables, new electronics and transmission protocols had to be developed. At the speed of light the * transmission delay for crossing the U.S., say 4000 km, is at least 13 milliseconds. Given the capacity of the fiber, many successor packets can easily be inserted by the sender during that time. It is not feasible to send a small packet and wait for an acknowledgement before sending the next one. Since the lines are inherently quite reliable, * buffering and asynchronous acknowledgement schemes work well.
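The argument can be checked with a back-of-the-envelope calculation using the figures in the text (4000 km, 622 Megabits per second, 1000-character packets). The vacuum speed of light is used here; in fiber the signal is roughly 30% slower, so the real delay is somewhat larger:

```python
# Bandwidth-delay product for a cross-country fiber link.
distance_km = 4000
speed_of_light_km_s = 300_000
delay_s = distance_km / speed_of_light_km_s      # ~0.0133 s one way

rate_bps = 622_000_000                           # one fiber, 622 Mb/s
bits_in_flight = rate_bps * delay_s              # ~8.3 million bits

packet_bits = 1000 * 8                           # a 1000-character packet
packets_in_flight = bits_in_flight / packet_bits
print(round(packets_in_flight))                  # -> 1037
```

On the order of a thousand packets are in flight before the first one even arrives, which is why stop-and-wait acknowledgement is infeasible.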

Asynchronous Transfer Mode (ATM)

High-speed backbone services may be provided using the high-speed Asynchronous Transfer Mode (ATM). ATM provides multiple levels of service, with a guaranteed constant transmission rate at the highest level. ATM appears to the users as a conventional circuit.

The Asynchronous Transfer Mode (ATM) supports switched internetworking, namely high-speed connection of LANs, and is designed to perform well for voice and video transmission, where momentary delays are not well tolerated. Its packets are called * cells, and consist of a 5-byte header, with an address and sequence number, and 48 bytes of data. For video and speech any cells that are not correctly transmitted are discarded, as described in Chapter Compression.
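A toy sketch of chopping a byte stream into ATM-style cells may make the fixed 53-byte format concrete. The header layout here, a one-byte connection id plus a four-byte sequence number, is a simplification for illustration, not the real ATM header fields:

```python
# Chop a byte stream into 53-byte cells: a simplified 5-byte header
# plus 48 bytes of payload (short final payloads are zero-padded).
CELL_PAYLOAD = 48

def to_cells(conn_id: int, data: bytes) -> list[bytes]:
    cells = []
    for seq, i in enumerate(range(0, len(data), CELL_PAYLOAD)):
        chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        header = bytes([conn_id]) + seq.to_bytes(4, "big")  # 5 bytes
        cells.append(header + chunk)
    return cells

cells = to_cells(7, b"A" * 100)        # 100 bytes -> 3 cells
assert all(len(c) == 53 for c in cells)
```

The small fixed size is what lets switches forward cells with minimal buffering, keeping delays short enough for voice and video.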


Many alternatives to the Internet conventions exist. We will briefly list some technologies that are in use on other networks, always comparing them to the Internet facilities so that their relative roles, benefits, and liabilities are clear. To connect to the Internet, other networks use a gateway computer (identified by the postfix .NET on the Internet), and that computer must package the data according to TCP and provide addresses according to the IP.


Some alternate node naming schemes are available and understood by most Internet sites. A geographically oriented Internet node address is <>. The X.500 standard, popular in Europe, reverses the symbolic order and uses explicit labels. The Internet address above becomes <C=us,S=va,L=reston,B=nri,F=info>. (check letter codes!)
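The reversal of symbolic order can be illustrated with a small conversion routine. The sample address and the letter codes are inferred from the (unverified) example above and should be treated as illustrative only:

```python
# Convert an Internet-style hierarchical name (most-specific first)
# into the X.500-style labeled form (country first).
# Labels C, S, L, B, F follow the example in the text; real X.500
# attribute codes may differ.

def to_x500(internet_name: str, labels: str = "CSLBF") -> str:
    parts = reversed(internet_name.split("."))   # country first
    return ",".join(f"{l}={p}" for l, p in zip(labels, parts))

# hypothetical address consistent with the X.500 form shown above
print(to_x500("info.nri.reston.va.us"))
# -> C=us,S=va,L=reston,B=nri,F=info
```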


A major advance enabling the growth of the ARPA-net was the distribution of address information to local sites. Even before the scheme became hierarchical, the underlying system maintained copies of the node names throughout the network, enabling rapid routing of messages and relieving the senders of having to know or care which road their messages would take. In simpler systems there may actually be a central registry, maintained by a responsible agency, which keeps track of and authorizes all nodes. Such registries exist for instance in IBM's VNET network and now on the military part of the Internet (MILNET). Without a central registry the message has to contain the names of all intermediate nodes; that scheme is the basis for USENET. A typical USENET address is shown below in Fig. Usenet. Today, many usenet sites provide some directory services to simplify the users' tasks, and also avail themselves of Internet services.

Figure Usenet: Address for a usenet message.

The Internet depends on rapid forwarding of its packets to achieve the interactive operation needed for effective control of remote computing. Each node must forward packets it receives as soon as the packet is complete and verified to be correct. Even then the number of transmissions between nodes (* hops) should be kept modest. For email and file transfer such speed matters less. A store-and-forward protocol is more relaxed about these constraints. It will typically assemble the entire transmission contents at each node, and then send it on when and where convenient. It is not necessary to break transmissions into packets. To gain truly economical transmission a store-and-forward system may use low-cost or idle times to forward messages. For instance, many nodes participating in USENET use * 800-tariff telephone lines at night to forward their material.
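A store-and-forward node can be sketched as a simple queue of complete messages, released in a batch when transmission is convenient. The class and method names are illustrative, not a real USENET implementation:

```python
from collections import deque

# A toy store-and-forward node: whole messages are accumulated and
# only forwarded when the node decides the time is convenient
# (e.g., cheap night-time telephone rates).

class StoreAndForwardNode:
    def __init__(self, name):
        self.name = name
        self.queue = deque()        # complete messages awaiting forwarding

    def receive(self, message):
        self.queue.append(message)  # store the entire message first

    def forward_all(self, next_node):
        """Called when transmission is convenient, e.g. at night."""
        while self.queue:
            next_node.receive(self.queue.popleft())

a, b = StoreAndForwardNode("a"), StoreAndForwardNode("b")
a.receive("article 1")
a.receive("article 2")
a.forward_all(b)                    # batch transfer, order preserved
assert list(b.queue) == ["article 1", "article 2"]
```

Contrast this with packet forwarding: nothing here is broken into packets, and nothing is forwarded until the whole message is present.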


Many older mainframe computers used * half-duplex transmission to reduce the frequency of interruptions due to user activities. Also, in half-duplex mode the full transmitting capacity of a communication line is used in one direction at a time. When going from the sender terminal to the receiver, the sending user can enter many characters, which are transmitted, but not acted upon. Only when a special terminating character is entered, typically \enter, sometimes \escape or \break, does the receiving computer process the message. Then the communication line is 'turned around', and the user receives the results from the computer until the output ends, when the line is again turned around. Since it is easy to unintentionally request much more data from a computer than is needed, systems operating in half-duplex mode typically provide a \break character, which creates a reverse interrupt, often by simulating a short break in the communication network. Faster computers have made the benefits of half-duplex transmission less relevant, but vestiges of this type of operation abound. Transmission lines which can transmit in both directions simultaneously are termed * duplex.


An alternative to packet-switching is to use switched telephone lines in the same mode that voice calls are made. Dialing a remote computer, and connecting to it via a modem, obviates the need to send addresses repeatedly; the connection once made remains until one or the other end hangs up. While this reduces the overhead, it is likely that the phone line is used at a much lower capacity than it is capable of. On a long-distance call the phone company will take advantage of the modest use and multiplex many conversations onto one high-capacity line. The phone companies will also interlace conversations while people stop talking. But modems transmit a continuous * carrier signal, which reduces the phone company's capability to interlace computer conversations and increases the reliability of computer transmissions.

Use of switched lines (analog, ISDN, DSL) is the dominant means of connecting from home to one's local computer node. Using long-distance switched lines as a regular computing path uses many resources and is hence prohibitively costly.


When local communication is desired, much greater communication speeds can be achieved. The telephone wires, designed for voice transmission, i.e., frequencies to about 3000 Hertz, can be replaced by direct cables, creating a * Local Area Network (LAN). Coaxial cables, as used for television signals, have capacities of several mega-Hertz. However, to enable interactive communication, the lines have to be * turned around rapidly, and if many users are to participate, suitable addressing schemes have to be devised as well. The developers of a LAN at Xerox' Palo Alto Research Center (PARC), following the precepts of the Internet, wanted to avoid having a central node, but had to avoid interference among users.

At the speeds that such cables can communicate, the delay in telling other participants what is going on becomes the bottleneck. Even the speed of light, which is the limit for electronic signal transmission, becomes a constraint, since traversing 300 meters (1000 ft) takes about a microsecond (a millionth of a second). In a microsecond several data characters can be placed on a high-speed line. PARC researchers adapted another innovation of the Aloha-net: * collision detection and backoff. The Aloha-net used a * geostationary satellite south of Hawaii for transmission. Satellite transmission introduces a long delay. The distance from one island to another via satellite becomes about 75,000 km (45,000 miles) and a message takes about 0.25 secs to travel that distance. Two ground stations might start a transmission at about the same time, and, by listening as well, learn about the conflicting use of the ether 0.25 seconds later, when they find their own transmission to be intermixed with another one. Listening allows speakers to sense * collisions on the shared information path. When computers speak to each other (rather than people), they can be programmed to remember precisely what they were trying to say, and can repeat the message after a short breather. The lengths of the breathing times are randomized, so that it is unlikely that both senders start talking again at the same time.

The wire of a LAN behaves like the ether. Electronic pulses also travel through wire at roughly the speed of light. The Xerox * ethernet protocol, Carrier Sense Multiple Access with Collision Detection (CSMA/CD), rather than trying to regulate use of the shared line, permits nodes to use the line any time no * carrier signal from any other node is detected. The nodes read all the data that pass, but only copy data addressed to them. Ethernet nodes place data on the line without prearranged permissions, which leads to * collisions whenever two nodes try to use the line within a period that is less than the signal transmission time between them. But the interface circuit in each node also reads the signals on the line, and detects collisions since its own signal is corrupted. At that point the nodes stop transmitting, since continuing is useless anyhow, and wait before trying again. So that both nodes don't wait the same amount of time, a randomized doubling scheme is used for their waiting times. The waiting times also give other nodes a better chance to participate in communications when the line is heavily used [MetcalfeB76]. CSMA/CD is an effective and simple scheme for local networks, and avoids having a central regulator and a critical failure component [ShochH80].
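The randomized doubling scheme can be sketched as follows. The 51.2-microsecond slot time is the figure commonly associated with classic 10 Mb/s Ethernet, and the cap of ten doublings is an assumption of this sketch:

```python
import random

# Binary exponential backoff, the "randomized doubling scheme"
# used after an Ethernet collision. Numbers are illustrative.

SLOT_TIME_US = 51.2   # one slot in microseconds (classic 10 Mb/s Ethernet)

def backoff_delay(attempt: int) -> float:
    """Waiting time before retry number `attempt` (1, 2, 3, ...)."""
    max_slots = 2 ** min(attempt, 10)        # doubling range, capped
    return random.randrange(max_slots) * SLOT_TIME_US

# After each successive collision the possible wait doubles, so two
# colliding senders are ever less likely to restart simultaneously.
for attempt in range(1, 4):
    print(f"attempt {attempt}: wait up to "
          f"{(2 ** attempt - 1) * SLOT_TIME_US:.1f} us")
```

Doubling the range after each collision adapts the waiting times to the load on the line without any central coordinator.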

Ethernet messages are also placed into packets, and packets are kept small to avoid excessively long holding times on the ethernet. The ethernet protocols are now a standard (IEEE 802.3) and many companies supply suitable equipment. If an ethernet node is also connected to the Internet, it can function as a gateway using IP, and world-wide connectivity is provided to all local users on that ethernet network.


As of 1993 about 62% of U.S. households with TV were wired for cable. The quality of the services differs a great deal. Some systems in mountainous regions mainly retransmit broadcast channels; most urban systems provide about 40 additional entertainment, educational, local, and shopping channels. But those services still represent a one-way street. A small fraction of cable systems provide interactive services. Two-way traffic is essential to the vision of the Information Superhighway, enabling personal requests to be fulfilled. Poor versions of two-way traffic are used for services such as pay-per-view. To select an advertised film or event one can call the cable service and enter one's * personal identifier (PIN) and a code designating the desired service. But in terms of interaction it is like walking upstream on a cowpath parallel to a raging river. Cable does have enough capacity to allow hundreds of users to pose requests simultaneously. But there is little incentive for cable companies to install two-way facilities until the demand exists, and this demand will depend on the quality and functionality of the applications presented in the subsequent chapters. (Comments on Cable by Vaughn Pratt)


The use of satellites causes long delays as well as being subject to interruptions. The transmission delay (latency) is much longer, since * geosynchronous satellites hang approximately 40,000 km above the earth, and their transmissions may be interrupted by weather or aircraft passing between them and interchange stations. While technologies such as ATM deal well with the delays, it is difficult to recover data lost in transmission.
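The quarter-second figure quoted earlier follows directly from the geometry; a quick check using the roughly 40,000 km altitude from the text and the vacuum speed of light:

```python
# Delay of a signal bounced off a geostationary satellite:
# up ~40,000 km and down again, at the speed of light.
altitude_km = 40_000
speed_of_light_km_s = 300_000

bounce_s = 2 * altitude_km / speed_of_light_km_s  # ground -> satellite -> ground
print(f"{bounce_s:.2f} s")  # -> 0.27 s
```

No engineering improvement can reduce this delay; it is set by physics, which is why interactive protocols must be designed around it.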

This alternative for transmission into homes is called * Direct Broadcast Service (DBS). DBS uses multiple * geostationary satellites to broadcast signals to home receivers. A control center on the ground transmits the signals to the satellites. The DBS broadcasts are more powerful, at a higher frequency (12Ghz) and better encoded than earlier satellite transmitters used for studio to studio transmission of commercial television. The receiving antennas (dishes) for DBS are much smaller (45 cm or 18" in diameter) than the dishes used for commercial communication (about 2 m or 6 ') and such dishes are easy to install. The DBS signals are * encrypted, so that only paying subscribers can decode the signals and watch the programs. The services are similar to those of cable, but the centralized nature of the source, one control center in Castle Rock, Colorado, makes local programming infeasible. To allow feedback the subscriber station is equipped with a * modem that can be used to access the DBS studio, or their suppliers. However, it is unlikely that the transmitted services can be personalized.

Research in Japan is developing DBS receivers for use on trains, ships, buses, and even cars. For * mobile use the antennas are flat arrays of cells, electronically steered to maintain contact with the satellite, rather than dishes. For a car the antenna would be about 32 cm (12.5") square, and use only 40 Watts of power [EdPe93]. TV reception should be a benefit for the driver only when stuck in a traffic jam, but passengers might find a view of a screen that provides access to digital highways a pleasant alternative to seeing the walls that are being erected as sound barriers along many real highways.


A surprising candidate, as an alternative to the traditional communication companies such as phone and cable companies, for providing the foundation for the digital highways are the electric power companies. The concept is to use the existing rights-of-way for power lines, which reach nearly every citizen, and provide a parallel optical cable. The power companies have some experience in using optical cable for long-distance data transmission. They use optical technology for their own communication because of the freedom from interference from their own powerlines. They also have the mechanisms to reach every customer. As a justification they can use the optical cable into homes also to help manage power consumption. For instance, it may make sense to control a hot water heater so that it is off or on a low setting during periods of high power consumption, say on hot afternoons when air-conditioning units are in full swing. Similarly, timing can be provided for heaters and air-conditioning units, and the electric meter itself can be read remotely. Some such services are available in European countries, where energy has been costly. Meters can be read remotely by triggering a wireless message or a pulse on the line itself. It is unclear if central management of home appliances is useful, since local controllers can be quite cheap, and use fairly fancy logic in their chips. It seems more likely that power companies can work with cable systems, and provide rights-of-way and maybe integrated billing services.

The Internet managers buy bulk capacity from commercial services, but these companies also expect to provide toll services directly. The NII intends to encourage commercial alternatives, since it is unlikely that public services can provide all the needed capacity when the highways become popular. It will be crucial, however, that smooth interchanges exist between the public and the toll roads, and that the tolls will be such that the benefits of high-quality commercial services will warrant their use.


This chapter presented the features of the coming information highways that are now commonly available, at least to the population for whom information services have long been crucial. For many of these communities, use of email, sharing of data files, and use of computers has become routine. An individual from such a community, when faced with loss of, say, email, will feel isolated and ineffective. Since, especially in the United States, but increasingly in the rest of the world, individuals move frequently among workplaces, companies, and even professions, the pressure has increased to provide information connectivity to anyone, everywhere.

To deal with the increasing demand to be connected, several commercial services now provide access to the Internet, either as a dedicated function or as part of more general computing services. In Chapter Ubiquitous Computing we will deal with the initiatives and the technology which will make access to information and computing ubiquitous. Subsequent chapters focus on specific application areas and their state and technologies.

The variety of companies and types of service emerging on the Internet creates confusion for people. Once comfortable with one type of service, most users will stay with that one service. Reaching out to someone on another or an unknown service is awkward. To overcome that problem * intelligent messaging systems are being proposed by AT\&T, Motorola, and IBM, which would take a single email message sent to a central point and try to locate the recipient on any of multiple services. One may also try indirection, via designated colleagues or supervisors. Such technology requires intelligence in resolving mismatched names, finding corresponding phone numbers, knowing the codes for many types of voicemail systems, activating beepers, and in general tracking individuals who are traveling on the digital or physical highways. Chapter Mediators describes some of the mediation technology needed to achieve interoperation of diverse and autonomous systems.


Robert Kahn worked on, motivated, and managed the development of the ARPA-net from its inception to 1986. After receiving a PhD in Electrical Engineering from Princeton University in 1964, he joined the faculty at MIT. In 1966 he crossed the street and joined BBN, where he started on the development of the ARPA-net, and in 1972 joined ARPA as program manager. He became director of its Information Processing Technology Office (IPTO). A strong synergy ensued between the networking effort and IPTO's support of Artificial Intelligence (AI) efforts. Major AI centers were developing innovative techniques for using computers, among others time sharing, which made character-by-character interaction between humans and their machines essential. These systems enabled many people to share resources, not only computers, but also programs and expertise about those programs. The use of networks to bring researchers together was natural, since assembling them physically would play havoc with their teaching and familial responsibilities. After leaving ARPA in 1986 Robert Kahn founded the not-for-profit Corporation for National Research Initiatives (CNRI), which supports mainly communication-oriented and digital library research, using both governmental and private funding. In 1994 Robert Kahn received the prestigious Marconi Fellowship, in an impressive ceremony at the Smithsonian Portrait Gallery, with John Gibbons, the President's Science Advisor, standing in for Vice-President Gore, and Mrs. Marconi Braga, the daughter of the inventor of radio, describing Bob Kahn's achievements.

CNRI also provides a home for the Internet Society, a group of users who have the goal of further Internet development based on the principle of voluntary cooperation.


The Internet technology has, from a very modest beginning, shown exponential growth. The trend is continuing today (Jan. 2000):

    Wiederhold, Gio: Trends for the Information Technology Industry; report prepared for MITI under sponsorship of the Japan External Trade Organization (JETRO), San Francisco CA 94104, April 1999.

The US. National Information Infrastructure (NII) program, with its focus on the construction of an information superhighway using fiber optics, is a move to achieve a multimedia society through a four-phased approach: telephone networks (switching the phone grid over to fiber optics), CATV (Community Antenna Television-granting permission for telecommunications operations), computer telecommunications networks (promoting multimedia communications circuits over the Internet), and wireless communications.



We close this chapter by summarizing some of the network services that have been introduced, and add a few others. Completeness in the rapidly growing world of networks and network services cannot be achieved.

. name / type
Communication Networks and Technical Services
sponsor technology charging services [ref] source|
AirOne / mobile service Northwest, Southwest Airlines dial-up network access in flight / $2+2/min |
ARDIS / wireless data service IBM, Motorola data only network |
ATDnet / Exp.Network ARPA, with Bellcore and Bell Atlantic High speed net in Washington-Baltimore corridor six sites (NASA, NSA,ARPA) 2.4Gb SONET fiber ring technology / ATM Er(24Gb?)|
BARNET (BART) / regional network managed by Stanford Un. for the San Francisco Bay Area |
bitnet / Edu.Network , a cooperative of computing centers using mainly IBM mainframes TCP/IP free of charge to authorized center users email, [Bill Yount]|
Chicago / microsoft internet access / windows 4 late 1994|
CDPD / wireless data service McCaw Cellular Communications / also Bell Atlantic, GTE later cellular digital packet data / upgrade of cellular phone service services to three cities planned (1994) |
DEIS / MilNet upgrade DISA CII, Unisys / BDN, Boeing, CSC, EDS, Martin Marietta Defense Enterprise Integration Services mediators |
ethernet / LAN techn. XEROX corp. technology licensed by XEROX corp. for local networks, CSMA/CD purchased by many sites, free to their users email, file sharing, ftp, rlogin [metcalfe, shoch] |
ESNet (Energy Sciences Network) / upgrade DoE and NASA / Sprint 30 sites, ATM, 45Mbps T3 |
FidoNet / telephone forwarding private and NGOs address is zone:net/node / uses Internet for interzone (=continent) transmission, uunet in zones email, bboards for DOS, others / 20,000 nodes in 1993 [Bush93] /*|
internet / Sc.Network NSFnet (see)and cooperating regional groups free of charge to users at participating institutions email, ftp, rlogin [Kahn]|
Internet-in-a-box Spry Inc,/ Seattle WA low-cost full-capability internet hookup service and software|
K12Net / on FIDONet email, program and project access, requests, language exposure [Murray93] |
NSFnet / Sc.Network NSF NREN / Arlington VA U.S. backbone/ 155Mb free of charge to participating subnets email, ftp, rlogin []|
PANIX / Com.Svce Panix Public Access System/ New York NY Internet access |
PSI / Com.Svce Performance Systems Inc / Reston VA ISDN to internet connection / for high-bandwidth (T1, T3) connections |
RACE net / Sc.Network ESPRIT? / Brussels European backbone/ email, ftp, rlogin []|
RAM / wireless data service Bell South, RAM Broadcasting radio-based computer network |
Europa NET / Sc.Network Reseaux Associes pour la Recherche Europeenne / DANTE RARE Amsterdam European backbone/ 8Mb email, ftp, rlogin []|
Servicemail / Software EITech, Palo Alto, CA package to ship files using basic email facilities free software, but the email services may charge ftp |
SMR / wireless data service Nextel Communications, Motorola specialized mobile radio / upgrade of radio dispatching service Los Angeles |
SprintLink / Com.Network Sprint High-speed lines using ATM technology linked to internet lease 45Mb links |
STA net / Sc.Network MITI STA/ Tokyo Japan Kansai backbone/ email, ftp, rlogin []|
Super-JANET / Sc.Network / Great Britain 34MB backbone/ email, ftp, rlogin []|
Telenet / Com.Network <> <> charges to individual or corporate accounts email, ftp |
Telescript / Software General Magic/ San Jose CA language for mediation agents |
Tymnet / Com.Network McDonnell Douglas Corp. proprietary over leased lines charges to users rlogin |
usenet / Sc.Network USENIX group Voluntary participation by UNIX sites store and forward free to their users email, ftp |
vnet / Pvt.Network IBM corp. IBM staff, with Internet access for selected people restricted to corporate use, focus on machines using the VM operating system email, rlogin, ftp |
WAIS / access software |
well / Com.Svce Whole Earth 'Lectronic Link/ San Francisco Internet accounts at local centers/ collaboration group support Modest membership fee and usage charge email |
XWAIS / access software version of WAIS for Unix |
Z-mail / Software Z-code software/ San Rafael CA Rule based system for email filtering -- omitting email from people you don't want to encounter |
Some bulletin boards are listed in Network Performance
Network Link Speeds
type transmission speed *| \cr
Ethernet LAN IEEE ... 10 Mb - 100Mb |
Ethernet switched |
Gigabit Ethernet 1 Gb |
SONET 2.4 Gb |
Sonet host interface OC-12 622 Mb |
Sonet host interface OC-3 155 Mb |
T3 circuit 45 Mb |

* Note: speeds here are in bits per second; to get characters or bytes per second, divide by 8, although with compression more may be achieved. Base speeds may be given in Baud, equal to bits per second for slow-speed links.


Historical snippets about the Internet, assembled by Bill Stewart, can be found at

Previous chapter: Introduction - Next chapter: Ubiquitous Computing
List of all Chapters.
CS99I CS99I home page.


!about 150 000 Internet accounts today for students and schools [Abrahams] << Services in North Carolina Initially the Governor's office pays for the network (44M), to get experience, next year put it into organizational budget. Use ATM. Avoid gateway orgs. Price by giga cells, distance independent. + access charge per site ($1000 ?(4K?) one time, $400/month). ==$175 per circuit mile. Barrier to junk. Via telephone co, which is already digital throughout NC. REA cooperation for small rural telephone companies.

Additional request of $350 for computers in schools. Should not be funded without training.

Objective: better public services by the state for its people. Will connect schools, hospitals, prisons, . . . libraries, with librarians as helpers. Maybe small video conf. rooms in each library (in NC joint funded 50%; schools 75% by state). Bookmobiles with multimedia workstations. Not yet homes. Applications: wireless field sensors for agriculture, for forest fire protection.

Needs: 1. vision; 2. fewer nay-sayers; 3. applications; 4. applications; 5. be realistic, don't expect home today (use libraries); 6. people on continuity; 7. make entry low re network charges; 8. fed government should move; 9. interstate ops. and interoperability (i.e., doctor in NC being able to serve patient in South Carolina).