INTERNET HISTORY RESEARCH AND DEVELOPMENT – REF: THE REGISTER

 

Analysis: Thirty years ago this week the modern internet became operational as the US military flipped the switch on TCP/IP, but the move to the protocol stack was nearly killed at birth.

The deadline was 1 January 1983: after this, any of the Advanced Research Projects Agency Network’s (ARPANET) 400 hosts that were still clinging to the existing, host-to-host Network Control Protocol (NCP) were to be cut off.

The move to TCP/IP was a simultaneous, flag-day cutover, coordinated with the community in the years before 1983. More than 15 government and university institutions, from NASA Ames to Harvard University, were using NCP on ARPANET.

With so many users, though, there was plenty of disagreement. The deadline was ultimately set because not everybody using ARPANET was convinced of the need for wholesale change.

TCP/IP was the co-creation of Vint Cerf and Robert Kahn, who published their paper, A Protocol for Packet Network Intercommunication, in 1974.

ARPANET was the wide-area network sponsored by the US Defense Advanced Research Projects Agency (DARPA) that went live in 1969, while Cerf had been an ARPANET scientist at Stanford University. The military had become interested in a common protocol as different networks and systems using different protocols began to hook up to ARPANET and found they couldn’t easily talk to each other.

Cerf, who today is vice-president and “chief internet evangelist” at Google, announced the 30th anniversary of the TCP/IP switchover in an official Google blog post titled “Marking the birth of the modern-day Internet”.

The 1983 deadline’s passing was anticlimactic, Cerf recalls, considering how important TCP/IP became as an enabler for the internet.

Cerf writes:

When the day came, it’s fair to say the main emotion was relief, especially amongst those system administrators racing against the clock. There were no grand celebrations—I can’t even find a photograph. The only visible mementos were the “I survived the TCP/IP switchover” pins proudly worn by those who went through the ordeal!

Yet, with hindsight, it’s obvious it was a momentous occasion. On that day, the operational Internet was born. TCP/IP went on to be embraced as an international standard, and now underpins the entire Internet.

It was a significant moment, and without TCP/IP we wouldn’t have the internet as we know it.

But that wasn’t the end of the story, and three years later TCP/IP was in trouble as it suffered from severe congestion to the point of collapse.

TCP/IP had been adopted by the US military in 1980 following successful tests across three separate networks, and when it went live ARPANET was managing 400 nodes.

After the January 1983 switchover, though, so many computer users were starting to connect to ARPANET – and across ARPANET to other networks – that traffic had started to hit bottlenecks. By 1986 there were 28,000 nodes chattering across ARPANET, causing congestion with speeds dropping from 32 Kbps to 40 bps across relatively small distances.

It fell to TCP/IP contributor Van Jacobson, who’d spotted the slowdown between his lab at Lawrence Berkeley National Laboratory and the University of California at Berkeley (just 400 yards and two IMP hops apart), to save TCP/IP and the operational internet.

Jacobson devised a congestion-avoidance algorithm that lowers a computer’s data transfer rate when packets are lost, so that senders settle on a stable but slower connection rather than blindly flooding the network with packets.

The algorithm allowed TCP/IP systems to process lots of requests in a more conservative fashion. The fix was first applied as a patch rolled out by sysadmins and was then incorporated into the TCP/IP stack itself. Jacobson went on to write the Congestion Avoidance and Control paper (SIGCOMM ’88) while the internet marched on to about one billion nodes.
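To make the mechanism concrete, the toy simulation below sketches the two behaviors Jacobson combined: slow start and additive-increase/multiplicative-decrease congestion avoidance. It models only the window arithmetic described in the SIGCOMM ’88 paper; the loss pattern, constants, and Python form are illustrative, not the actual BSD patch.

    # Toy model of slow start plus AIMD congestion avoidance.
    # Window sizes are in segments; losses are injected for illustration.

    def simulate(rounds, loss_rounds, ssthresh=32.0):
        cwnd = 1.0                    # congestion window, in segments
        history = []
        for rnd in range(rounds):
            history.append((rnd, cwnd))
            if rnd in loss_rounds:    # a lost packet is the congestion signal
                ssthresh = max(cwnd / 2.0, 2.0)  # multiplicative decrease
                cwnd = 1.0            # restart from slow start
            elif cwnd < ssthresh:
                cwnd *= 2.0           # slow start: double every round trip
            else:
                cwnd += 1.0           # congestion avoidance: +1 per round trip
        return history

    for rnd, w in simulate(rounds=20, loss_rounds={9, 15}):
        print(f"rtt {rnd:2d}: cwnd = {w:g}")

Instead of ramping up blindly, each sender probes for capacity and backs off sharply at the first sign of loss, which is what lets many senders share a congested path stably.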

And even this is not the end of the story. Years later, in an interview with The Reg, Jacobson reckoned TCP/IP faces another crisis – and, again, it’s scalability.

This time, the problem is millions of users surfing towards the same web destinations for the same content, such as a piece of news or video footage on YouTube. Jacobson, a Xerox PARC research fellow and former Cisco chief scientist, told us in 2010 about his work on Content-Centric Networking, a network architecture to cache content locally to avoid everybody hitting exactly the same servers simultaneously.

***********************************************************************************************************************************************

GOOGLE BLOG WRITE-UP

25 years of Internet doodle from Google

Today is the 30th birthday of the modern-day Internet. Five years ago we marked the occasion with a doodle. This year we invited Vint Cerf to tell the story. Vint is widely regarded as one of the fathers of the Internet for his contributions to shaping the Internet’s architecture, including co-designing the TCP/IP protocol. Today he works with Google to promote and protect the Internet. -Ed.

A long time ago, my colleagues and I became part of a great adventure, teamed with a small band of scientists and technologists in the U.S. and elsewhere. For me, it began in 1969, when the potential of packet switching communication was operationally tested in the grand ARPANET experiment by the U.S. Defense Advanced Research Projects Agency (DARPA).

Other kinds of packet switched networks were also pioneered by DARPA, including mobile packet radio and packet satellite, but there was a big problem. There was no common language. Each network had its own communications protocol using different conventions and formatting standards to send and receive packets, so there was no way to transmit anything between networks.

In an attempt to solve this, Robert Kahn and I developed a new computer communication protocol designed specifically to support connection among different packet-switched networks. We called it TCP, short for “Transmission Control Protocol,” and in 1974 we published a paper about it in IEEE Transactions on Communications: “A Protocol for Packet Network Intercommunication.” Later, to better handle the transmission of real-time data, including voice, we split TCP into two parts, one of which we called “Internet Protocol,” or IP for short. The two protocols combined were nicknamed TCP/IP.

TCP/IP was tested across the three types of networks developed by DARPA, and eventually was anointed as their new standard. In 1981, Jon Postel published a transition plan to migrate the 400 hosts of the ARPANET from the older NCP protocol to TCP/IP, including a deadline of January 1, 1983, after which point all hosts not switched would be cut off.



Vint Cerf in 1973, Robert Kahn in the 1970s, Jon Postel

When the day came, it’s fair to say the main emotion was relief, especially amongst those system administrators racing against the clock. There were no grand celebrations—I can’t even find a photograph. The only visible mementos were the “I survived the TCP/IP switchover” pins proudly worn by those who went through the ordeal!

Yet, with hindsight, it’s obvious it was a momentous occasion. On that day, the operational Internet was born. TCP/IP went on to be embraced as an international standard, and now underpins the entire Internet.

It’s been almost 40 years since Bob and I wrote our paper, and I can assure you while we had high hopes, we did not dare to assume that the Internet would turn into the worldwide platform it’s become. I feel immensely privileged to have played a part and, like any proud parent, have delighted in watching it grow. I continue to do what I can to protect its future. I hope you’ll join me today in raising a toast to the Internet—may it continue to connect us for years to come.

Posted by Vint Cerf, VP and Chief Internet Evangelist

***********************************************************************************************************************************************

The Evolution of Packet Switching

Dr. Lawrence G. Roberts
Member, IEEE

Invited Paper

November 1978

Abstract: Over the past decade data communications has been revolutionized by a radically new technology called packet switching. In 1968 virtually all interactive data communication networks were circuit switched, the same as the telephone network. Circuit switching networks preallocate transmission bandwidth for an entire call or session. However, since interactive data traffic occurs in short bursts, 90 percent or more of this bandwidth is wasted. Thus, as digital electronics became inexpensive enough, it became dramatically more cost-effective to completely redesign communications networks, introducing the concept of packet switching, where the transmission bandwidth is dynamically allocated, permitting many users to share the same transmission line previously required for one user. Packet switching has been so successful, not only in improving the economics of data communications but in enhancing reliability and functional flexibility as well, that in 1978 virtually all new data networks being built throughout the world are based on packet switching. An open question at this time is how long it will take for voice communications to be revolutionized as well by packet switching technology. In order to better understand both the past and future evolution of this fast moving technology, this paper examines in detail the history and trends of packet switching.

THERE HAVE ALWAYS been two fundamental and competing approaches to communications: pre-allocation and dynamic-allocation of transmission bandwidth. The telephone, telex, and TWX networks are circuit-switched systems, where a fixed bandwidth is preallocated for the duration of a call. Most radio usage also involves preallocation of the spectrum, either permanently or for a single call. On the other hand, message, telegraph, and mail systems have historically operated by dynamically allocating bandwidth or space after a message is received, one link at a time, never attempting to schedule bandwidth over the whole source-to-destination path. Before the advent of computers, dynamic-allocation systems were necessarily limited to nonreal-time communications, since many manual sorting and routing decisions were required along the path of each message. However, the rapid advances in computer technology over the last two decades have not only removed this limitation but have even made feasible dynamic-allocation communications systems that are superior to preallocation systems in connect time, reliability, economy, and flexibility. This new communications technology, called “packet switching,” divides the input flow of information into small segments, or packets, of data which move through the network in a manner similar to the handling of mail but at immensely higher speeds. Although the first packet-switching network was developed and tested less than ten years ago, packet systems already offer substantial economic and performance advantages over conventional systems. This has resulted in rapid worldwide acceptance of packet switching for low-speed interactive data communications networks, both public and private.

A question remains, however. Will dynamic-allocation techniques like packet switching generally replace circuit switching and other pre-allocation techniques for high-speed data and voice communication? The history of packet switching so far indicates that further applications are inevitable. The following examination of the primary technological and economic tradeoffs involved in the growth of the packet switching communications industry should help to trace the development of the technology toward these further applications.

EARLY HISTORY

Packet switching technology was not really an invention, but a reapplication of the basic dynamic-allocation techniques used for over a century by the mail, telegraph, and torn paper tape switching systems. A packet switched network only allocates bandwidth when a block of data is ready to be sent, and only enough for that one block to travel over one network link at a time. Depending on the nature of the data traffic being transferred, the packet-switching approach is 3-100 times more efficient than pre-allocation techniques in reducing the wastage of available transmission bandwidth resources. To do this, packet systems require both processing power and buffer storage resources at each switch in the network for each packet sent. The resulting economic tradeoff is simple: if lines are cheap, use circuit switching; if computing is cheap, use packet switching. Although today this seems obvious, before packet switching had been demonstrated technically and proven economical, the tradeoff was never recognized, let alone analyzed.
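The tradeoff can be restated numerically. A back-of-the-envelope sketch, taking the abstract's "90 percent or more wasted" figure as a 10 percent duty cycle; queueing delay is ignored and the numbers are illustrative:

    # Pre-allocation vs. dynamic-allocation, in one division.
    duty_cycle = 0.10                   # interactive users send ~10% of the time
    packet_users = int(1 / duty_cycle)  # users that statistically fill one line

    print("circuit switching: 1 user per preallocated line")
    print(f"packet switching:  ~{packet_users} users per shared line "
          f"(ignoring queueing delay)")

The gain is bought with switch processing and buffering at every node, which is exactly why the tradeoff flipped only once computing became cheap.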

In the early 1960’s, pre-allocation was so clearly the proven and accepted technique that no communications engineer ever seriously considered reverting to what was considered an obsolete technique, dynamic-allocation. Such techniques had been proven both uneconomic and unresponsive 20-80 years previously, so why reconsider them? The very fact that no great technological breakthrough was required to implement packet switching was another factor weighing against its acceptance by the engineering community. What was required was the total reevaluation of the performance and economics of dynamic-allocation systems, and their application to an entirely different task. Thus, it remained for outsiders to the communications industry, computer professionals, to develop packet switching in response to a problem for which they needed a better answer: communicating data to and from computers.

THE PIONEERS

Rand

The first published description of what we now call packet switching was an 11-volume analysis, On Distributed Communications, prepared by Paul Baran of the Rand Corporation in August 1964 [1]. This study was conducted for the Air Force, and it proposed a fully distributed packet switching system to provide for all military communications, data, and voice. The study also included a totally digital microwave system and integrated encryption capability. The Air Force’s primary goal was to produce a totally survivable system that contained no critical central components. Not only was this goal achieved by Rand’s proposed packet switching system, but even the economics projected were superior, for both voice and data transmissions. Unfortunately, the Air Force took no follow-up action, and the report sat largely ignored for many years until packet switching was rediscovered and applied by others.

ARPA I

Also in the 1962-1964 period, the Advanced Research Projects Agency (ARPA), under the direction of J. C. R. Licklider (currently at M.I.T.), sponsored and substantially furthered the development of time-sharing computer systems. One of Licklider’s strong interests was to link these time-shared computers together through a widespread computer network. Although no actual work was done on the communication system at that time, the discussions and interest Licklider spawned had an important motivating impact on the initiators of the two first actual network projects: Donald Davies and me.

As previously indicated, the development of packet switching was primarily the result of identifying the need for a radically new communications system. Licklider’s strong interest in and perception of the importance of the problem encouraged many people in the computer field to consider it seriously for the first time. It was in good part due to this influence that I decided, in November 1964, that computer networks were an important problem for which a new communications system was required [2]. Evidently Donald Davies of the National Physical Laboratory (NPL) in the United Kingdom had been seized by the same conviction, partially as a result of a seminar he sponsored in autumn 1965, which I attended with many M.I.T. Project MAC people. Thus, the interest in creating a new communications system grew out of the development of time-sharing and Licklider’s special interest in the 1964-1965 period.

National Physical Laboratory

Almost immediately after the 1965 meeting, Donald Davies conceived of the details of a store-and-forward packet switching system, and in a June 1966 description of his proposal coined the term “packet” to describe the 128-byte blocks being moved around inside the network. Davies circulated his proposed network design throughout the U.K. in late 1965 and 1966. It was only after this distribution that he discovered Paul Baran’s 1964 report.

The first open publication of the NPL proposal was in October 1967 at the A.C.M. Symposium in Gatlinburg, TN [3]. In nearly all respects, Davies’ original proposal, developed in late 1965, was similar to the actual networks being built today. His cost analysis showed strong economic advantages for the packet approach, and by all rights, the proposal should have led quickly to a U.K. project. However, the communications world was hard to convince, and for several years, nothing happened in the U.K. on the development of a multi-node packet switching network.

Donald Davies was able, however, to initiate a local network with a single packet switch at the NPL. By 1973 this local network was providing an important distribution service within the laboratory [4], [5]. This project, plus the strong conviction and continued effort by those at NPL (Davies, Barber, Scantlebury, Wilkinson, and Bartlett), did gradually have an effect on the U.K. and much of Europe.

ARPA II

In January 1967, I joined ARPA and assumed the management of the computer research programs under its sponsorship. ARPA was sponsoring computer research at leading universities and research labs in the U.S. These projects and their computers provided an ideal environment for a pilot network project; consequently, during 1967 the ARPANET was planned to link these computers together.

The plan was published in June 1967. The design consisted of a packet switching network, using minicomputers at each computer site as the packet switches and interfacing devices, interconnected by leased lines. By coincidence, the first published document on the ARPANET was also presented at the A.C.M. Symposium in Gatlinburg, TN, in October 1967 [6], along with the NPL plan. The major differences between the designs were the proposed net line speeds, with NPL suggesting 1.5 Mbit/s lines. The resulting discussions were one factor leading to the ARPANET using 50-kbit/s lines, rather than the lower speed lines previously planned [7].

During 1968, a request for proposal was let for the ARPANET packet switching equipment and the operation of the network. The contract was awarded to Bolt Beranek and Newman, Inc. (BBN) of Cambridge, MA, in January 1969. Significant aspects of the network’s internal operation, such as routing, flow control, software design, and network control, were developed by a BBN team consisting of Frank Heart, Robert Kahn, Severo Ornstein, William Crowther, and David Walden [8], [9], [10]. By December 1969, four nodes of the net had been installed and were operating effectively. The network was expanded rapidly thereafter to support 23 host computers by April 1971, 62 hosts by June 1974, and 111 hosts by March 1977.

The ARPANET utilized minicomputers at every node to be served by the network, interconnected in a fully distributed fashion by 50-kbit/s leased lines. Each minicomputer took blocks of data from the computers and terminals connected to it, subdivided them into 128-byte packets, and added a header specifying destination and source addresses; then, based on a dynamically updated routing table, the minicomputer sent the packet over whichever free line was currently the fastest route toward the destination. Upon receiving a packet, the next minicomputer would acknowledge it and repeat the routing process independently. Thus, one important characteristic of the ARPANET was its completely distributed, dynamic routing algorithm on a packet-by-packet basis, based on a continuous evaluation within the network of the least-delay paths, considering both line availability and queue lengths.
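The routing described here is, at heart, a distributed shortest-path relaxation. The sketch below captures the idea, each node improving its delay table using only its neighbors' estimates; the topology and delay values are invented, and the real IMPs exchanged and updated such tables continuously rather than iterating to convergence as this toy does:

    # Distributed least-delay routing, distance-vector style (toy version).
    INF = float("inf")
    links = {                       # node -> {neighbor: link delay, ms}
        "A": {"B": 10, "C": 25},
        "B": {"A": 10, "C": 10, "D": 20},
        "C": {"A": 25, "B": 10, "D": 10},
        "D": {"B": 20, "C": 10},
    }
    nodes = list(links)
    delay = {n: {d: (0 if d == n else INF) for d in nodes} for n in nodes}

    changed = True
    while changed:                  # relax until no estimate improves
        changed = False
        for n in nodes:
            for neighbor, cost in links[n].items():
                for dest in nodes:
                    via = cost + delay[neighbor][dest]
                    if via < delay[n][dest]:
                        delay[n][dest] = via   # better route via this neighbor
                        changed = True

    print(delay["A"])               # A's least-delay estimate to each node

Because each node decides per packet from its own table, traffic automatically flows around failed lines and long queues, which is the survivability property the ARPANET demonstrated.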

The technical and operational success of the ARPANET quickly demonstrated to a generally skeptical world that dynamic-allocation techniques, and packet switching in particular, could be organized to provide an efficient and highly responsive interactive data communications facility. Fears that packets would loop forever and that very large buffer pools would be required were quickly allayed. Since the ARPANET was a public project connecting many major universities and research institutions, the implementation and performance details were widely published [11], [12], [13], [14], [15]. The work of Leonard Kleinrock and associates at UCLA on the theory and measurement of the ARPANET has been of particular importance in providing a firm theoretical and practical understanding of the performance of packet networks. (See “Principles and Lessons in Packet Communications” by L. Kleinrock, in this issue, pp. 1320-1329.)

Packet switching was first demonstrated publicly at the first International Conference on Computer Communications (ICCC) in Washington, DC, in October 1972. Robert Kahn of BBN organized the demonstration. He installed a complete ARPANET node at the conference hotel, with about 40 active terminals permitting access to dozens of computers all over the U.S. This public demonstration was for many, if not most, of the ICCC attendees proof that packet switching really worked. It was difficult for many experienced professionals at that time to accept the fact that a collection of computers, wideband circuits, and minicomputer switching nodes (pieces of equipment totaling well over a hundred) could all function together reliably, but the ARPANET demonstration lasted for three days and clearly displayed its reliable operation in public. The network provided ultra-reliable service to thousands of attendees during the entire length of the conference.

The widespread publicity the ARPANET demonstration earned contributed greatly to the task of introducing modern dynamic-allocation technology to a pre-allocation trained world. However, during the same period in the early 1970’s many other dynamic-allocation techniques were being developed and tested in private networks throughout the world. Hopefully, the extensive publications on the ARPANET have not oversold the particular variety of packet switching used in this first major network experiment.

SITA

The Societe Internationale de Telecommunications Aeronautiques (SITA) provides telecommunications for the international air carriers. In 1969 SITA began updating its design by replacing the major nodes of its message switching network with High Level Network nodes, interconnected with voice-grade lines and organized to act like a packet switching network. Incoming messages are subdivided into 240-byte packets and are stored and forwarded along predetermined routes to the destination. Prestored distributed tables provide for alternate routes in the event of line failures [16].

TYMNET

Also in 1969, a time sharing service bureau, Tymshare Corporation, started installing a network based on minicomputers to connect asynchronous timesharing terminals to its central computers. The network switches, which are interconnected by voice-grade lines, store and forward data characters from node to node, with the characters for up to 20 calls packaged together in 66-byte blocks. The data is repackaged at each node into new blocks for the next hop. Routing is not distributed, but is accomplished by a central supervisor on a call-by-call basis [17].

CYCLADES/CIGALE

In France the interest in packet switching networks grew quickly during the early 1970’s. In 1973 the first hosts were connected to the CYCLADES network, which links several major computing centers throughout France. The name CYCLADES refers to both the communications subnet and the host computers. The communications subnetwork, called CIGALE, only moves disconnected packets and delivers them in whatever order they arrive, without any knowledge or concept of messages, connections, or flow control. Called a “datagram” packet facility, this concept has been widely promoted by Louis Pouzin, the designer and organizer of CYCLADES. Since a major part of the organization and control of the network is embedded in the CYCLADES computers, the subnetwork, CIGALE, is not sufficient by itself. In fact, Pouzin himself speaks of the network as “including” portions of the host computers. The packet assembly and disassembly, sequence numbering, flow control, and virtual connection processing are all done by the host. The CYCLADES structure provides a good testbed for trying out various protocols, as was its intent; but it requires a more cooperative and coordinated set of hosts than is likely to exist in a public environment [18].

RCP

Another packet network experiment was started in France at about the same time by the French PTT Administration. This network, called RCP (Reseau a Commutation par Paquets), first became operational in 1974. By this time the French PTT had already decided to build the public packet network, TRANSPAC, and RCP was utilized primarily as a testbed for TRANSPAC. The design of RCP, directed by Remi Despres, differed sharply from that of the other contemporary French network, CYCLADES. Despres’ design was organized around the concept of virtual connections rather than datagrams. RCP’s character as a prototype public network may have been a strong factor in this difference, since a virtual circuit service is more directly marketable, not requiring substantial modifications to customers’ host computers. In any case, the RCP design pioneered the incorporation of individually flow-controlled virtual circuits into the basic packet switching network organization [19].

EIN

Organized in 1971, and originally known as the COST II Project, the European Informatics Network (EIN) is a multination-funded European research network. The project director is Derek Barber of NPL, one of the original investigators of packet switching in the U.K. Had it been free of the red tape of multinational funding, this project would have been one of the earliest pace-setters in packet networks in the world. As it happened, however, EIN was not operational until 1976 [20], [21].

Public Data Networks

The early packet networks were all private networks built to demonstrate the technology and to serve a restricted population of users. Besides those early networks already mentioned, which were the most public projects, many private corporations and service bureaus built their own private networks. Generally these private networks did not make provision for host computers at more than one location, and thus their organization usually developed into a star network.

All these networks were the result of a basic economic transition, which occurred in 1969 [22] when the cost of dynamic-allocation switching fell below that of transmission lines. This change made it economically advantageous to build a network of some kind rather than to continue to use direct lines or the circuit switched telephone network for interactive data communications. Universal regulatory conditions in all countries restricted “common carriage” to the government or government-approved carriers, and thereby led to the development of many private networks instead of a competitive market of public networks.

However, the extensive private network activity in the early 1970’s encouraged some of these public carriers to make plans for building their own packet networks, although all public networks and plans for future networks were based on pre-allocation techniques until about 1973. Many plans to provide public data service arose; some were even under way, like the German EDS system; but all were based on circuit switching until that time. The shift in economics in the late 1960’s that made packet switching more cost-effective instigated more rapid change in communications technology than had ever before occurred.

The established carriers and PTT’s took their time reacting to this new technology. The United Kingdom was the first country to announce a public packet network through the British Post Office’s planned Experimental Packet Switched Service (EPSS) [23]. Donald Davies’ 1966 briefings with the BPO on packet switching clearly played a strong role in the U.K.’s early commitment to this new technology.

In the United States the dominant carrier, American Telephone and Telegraph (AT&T), evidenced even less interest in packet switching than many of the PTT’s in other countries. AT&T and its research organization, Bell Laboratories, have never to my knowledge published any research on packet switching. ARPA approached AT&T in the early 1970’s to see if AT&T would be interested in taking over the ARPANET and offering a public packet switched service, but AT&T declined. However, the Federal Communications Commission (FCC), which regulates all communications carriers in the U.S., was in the process of opening up portions of the communications market to competition. Bolt Beranek and Newman, the primary contractor for the ARPANET, felt strongly that a public packet-switched data communications service was needed. The FCC’s new policies encouraged competition, so BBN formed Telenet Communications Corporation in late 1972. In October 1973 Telenet filed its request with the FCC for approval to become a carrier and to construct a public packet switched network; six months later the FCC approved Telenet’s request. (See “Legal, Commercial, and International Aspects of Packet Communications,” by S. L. Mathison in this issue, pp. 1527-1539.)

In France in November 1973, the French PTT announced its plans to build TRANSPAC, a major domestic packet network patterned after RCP [24]. The next year, in October 1974, the Trans-Canada Telephone System announced DATAPAC, a public packet network in Canada [25]. Also during this period, the Nippon Telegraph and Telephone Corporation announced its plans to build a public packet switched data network in Japan [26].

Thus, only four years after the building of the first experimental networks, the concept of data communications networks began to move into the public arena. Still, the networks were only planned and had yet to be built; most PTT’s and carriers adopted a wait and watch attitude toward these first public networks.

INTERNATIONAL STANDARDIZATION AND ACCEPTANCE

CCITT X.25

With five independent public packet networks under construction in the 1974-1975 period, there was strong incentive for the nations to agree on a standard user interface to the networks so that host computers would not have unique interfacing jobs in each country. Unlike most standards activities, where there is almost no incentive to compromise and agree, carriers in separate countries can only benefit from the adoption of a standard, since it facilitates network interconnection and permits easier user attachment. To this end the parties concerned undertook a major effort to agree on the host-network interface during 1975. The result was an agreed protocol, CCITT Recommendation X.25, adopted in March 1976.

The X.25 protocol provides for the interleaving of data blocks for up to 4095 virtual circuits (VC’s) on a single full-duplex leased line interface to the network, including all procedures for call setup and disconnection. A significant feature of this interface, from the carriers’ point of view, is the inclusion of independent flow control on each VC; the flow control enables the network (and the user) to protect itself from congestion and overflow under all circumstances without having to slow down or stop more than one call at a time. In networks like the ARPANET and CYCLADES which do not have this capability, the network must depend on the host (or other networks in interconnect cases) to insure that no user submits more data to the network than the network can handle or deliver. The only defense the network has without individual VC flow control is to shut off the entire host (or internet) interface. This, of course, can be disastrous to the other users communicating with the offending host or network.
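The value of per-VC flow control is easy to see in miniature. In the sketch below each virtual circuit on a shared line has its own window of unacknowledged packets, so the network can hold back one congested call without touching the others; the fixed window and explicit acknowledge() call are simplified stand-ins for X.25’s sequence numbers and acknowledgment packets:

    # Per-virtual-circuit window flow control (simplified sketch).
    class VirtualCircuit:
        def __init__(self, number, window=2):
            self.number = number
            self.window = window        # max packets outstanding on this VC
            self.outstanding = 0

        def send(self, data):
            if self.outstanding >= self.window:
                return False            # only THIS call is held back
            self.outstanding += 1
            return True

        def acknowledge(self):          # credit returned by the network
            self.outstanding -= 1

    line = {1: VirtualCircuit(1), 2: VirtualCircuit(2)}   # one physical line
    line[1].send("a"); line[1].send("b")
    print(line[1].send("c"))   # False: VC 1 is flow-controlled...
    print(line[2].send("x"))   # True:  ...while VC 2 keeps moving

Without the per-VC windows, the only lever left is the one the text describes: shutting off the entire host interface, penalizing every call at once.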

Another critical aspect of X.25, not present in the proposals for a datagram interface, is that X.25 defines interface standards for both the host-to-network block transfer and the control of individual VC’s. In datagram networks the VC interface is situated in the host computer; there can be, therefore, no network-enforced standard for labeling, sequencing, and flow controlling VC’s. These networks are, in the author’s opinion, not salable as a public service, since they must offer individual terminal interfaces, as well as host interfaces, to provide complete host-terminal communications; to sell these interfaces requires knowing how to interface to one VC as well as to a host.

The March 1976 agreement on X.25 and on virtual circuits as the agreed technique for public packet networks marked the beginning of the second phase of packet switching: large interconnected public service networks. In the two years since X.25 was adopted, many additional standards have been agreed on as well, all patterned around X.25. For example, X.28 has been adopted as the standard asynchronous terminal interface; X.29, a protocol used with X.25 to specify the packetizing rules for the terminal handler, will be the host control protocol. More recently X.75, the standard protocol for connecting international networks, has been defined.

Public Data Network Services

Capitalizing on BBN’s ARPANET experience, TELENET introduced the first public packet network service in August 1975. Initially TELENET consisted of seven multiply interconnected nodes. By April 1978 the network had grown to 187 network nodes which used 79 packet switches to provide 156 U.S. cities with local dial service to 180 host computers across the country, with interconnections to 14 other countries. Originally TELENET supported a virtual connection host interface similar to X.25. However, shortly after the specification was adopted, X.25 was introduced into TELENET as the preferred host interface protocol.

In early 1977 both EPSS in the U.K. and DATAPAC in Canada were declared operational. Also, in the U.S., TYMNET was approved as a carrier and began supplying public data services. EPSS, having been designed long before X.25 was specified, is not X.25 compatible, but the U.K. intends to provide X.25 based packet service within the next year.

DATAPAC was X.25 based from the start of commercial service since the development was held until X.25 was approved. Using X.25 lines, DATAPAC and TELENET were interconnected in early 1978. This connection demonstrated the ease of international network linking, once a common standard had been established.

In France, TRANSPAC is due to become operational later this year (1978); in Japan, the NTT packet network, DX-2, should become operational in 1978 or 1979. A semipublic network, EURONET, sponsored by nine European Common Market countries, is due to become operational in late 1978 or 1979. Many other European countries, like Germany and Belgium, are making plans for public packet networks to start in 1979. These networks are all X.25 based and therefore should be similar and compatible.

Datagrams versus VC’s

As part of the continuing evolution of packet switching, controversial issues are sure to arise. Even with universal adoption of X.25 and the virtual circuit approach by public networks throughout the world, there is currently a vocal group of users requesting a datagram standard. The two major benefits claimed for datagrams are reliability and efficiency for transaction-type applications.

Reliability: It is claimed that datagrams provide more reliable access to a host when two or more access lines are used, since any packet could take either route if a line were to fail. This reflects a true deficiency in X.25 as currently defined: the absence of a reconnect facility on the call request packet. If, when a call is initiated, a code number for the call is placed in the call request packet, the X.25 network (or host) can save the code number. If the line over which the call is placed fails, the network simply places a new call request, marked as a reconnect, over another line and supplies the original code number to insure reconnection to the correct VC. Since packets on each VC are sequence numbered, this reconnection can be accomplished with no data loss and usually just as quickly as rerouting of the packets in a datagram interface. If the network uses VC’s internally, the same reconnect capability is used to insure against connection failures.

Cost: It is often assumed that datagrams would be cheaper for networks to provide than packets on VC’s. However, the costs of memory and switching have fallen by a factor of 30 compared to transmission costs over the last nine years, with the result that the overhead of datagram headers has become a major cost factor. A datagram packet or end-to-end acknowledgment requires about 25 bytes of packet header in addition to the actual data (0-128 bytes), whereas only 8 bytes of overhead are required for similar packets on a virtual circuit. In the unique case of a single packet call, the overheads are the same. For all longer calls, datagram overhead adds 13-94 percent to all transmission costs, both long haul and local. Originally, this increase in transmission cost was more than offset by reduced switch costs, but with modern microprocessor switches very little of the increase is offset. Thus, with this radical shift in economics, datagram packets are now more expensive than VC packets.
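The 13-94 percent range follows directly from those header sizes and can be checked in a few lines (the header and data sizes are the paper's own figures):

    # Datagram vs. virtual-circuit transmission overhead, per the text:
    # ~25 header bytes per datagram vs. ~8 per packet on an established VC.
    DGRAM_HDR, VC_HDR = 25, 8

    for data in (128, 64, 10):
        dgram, vc = DGRAM_HDR + data, VC_HDR + data
        print(f"{data:3d} data bytes: datagram {dgram} B vs VC {vc} B "
              f"-> {(dgram - vc) / vc:.0%} extra transmission")

With full 128-byte packets the datagram costs about 13 percent more line capacity; as packets shrink toward the small messages typical of transactions, the penalty approaches the 94 percent end of the range.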

CONTINUING TECHNOLOGICAL CHANGE

A decade ago computers had barely become inexpensive enough to make packet switching economically feasible; computers were still slow and small, forcing implementers to invent all sorts of techniques to save buffers and minimize CPU time. Computer technology has progressed to the point where microcomputer systems have now been especially designed for packet switching, and there is no shortage of memory or CPU power. This development has been partially responsible for the shift from datagrams to virtual connections and has also eliminated buffer allocation techniques (which cost transmission bandwidth to save memory). The modularity and computational power of today’s microprocessors have made it economical and practical to provide protocol conversion from X.25 to any existing terminal protocol, polled or not.

As a result of these improvements, packet networks are rapidly becoming universal translators, connecting everything to everything else and supplying the speed, code, and protocol conversions wherever necessary. As this trend continues, it is almost certain that the techniques in use today will have to be continually changed to respond to the changing economics and usage patterns.

For example, one major change that will be required in the next few years is an increase in the backbone trunk speed from 56 kbit/s to 1.544 Mbit/s (the speed of “T1” digital trunks). Both Paul Baran and Donald Davies in their original papers anticipated the use of T1 trunks, but present traffic demand has not yet justified their use. As the traffic does justify T1 trunks, many aspects of network design will change by a corresponding order of magnitude. Packet networks have always incorporated a delay in the 100-200 ms range. This delay has so strongly affected both the system design and the choice of applications that it is hard to remember which decisions depended on this delay factor.

With T1 carrier trunks, the transit delay in the net will drop to around 10 ms plus propagation time, requiring a complete reexamination of network topology and processor design issues. The number of outstanding packets on a 2500 mile trunk will increase from around 3 to 75, requiring extended numbering and, perhaps, new acknowledgment techniques. The user will be most strongly affected by a 10-30 ms net delay; his whole strategy of job organization may change.
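The jump from roughly 3 to 75 outstanding packets is just the bandwidth-delay product scaling with trunk speed. A rough check, in which the round-trip delay and packet size are illustrative assumptions chosen to land near the paper's figures:

    # Packets in flight = line rate x delay / packet size.
    PACKET_BITS = 128 * 8      # 128-byte packets, as on the early ARPANET
    DELAY = 0.055              # seconds round trip, assumed for ~2500 miles

    for name, rate in (("56 kbit/s", 56_000), ("T1 (1.544 Mbit/s)", 1_544_000)):
        print(f"{name}: ~{rate * DELAY / PACKET_BITS:.0f} packets outstanding")

The exact counts depend on the delay and packet length assumed, but the order-of-magnitude increase, and hence the need for extended sequence numbering, falls out immediately.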

Of course, there will be a significant price decrease accompanying this change. This, combined with the short delay, will make many new applications attractive; remote job entry (RJE) and bulk data transfer applications through public packet networks will probably be economically and technically feasible, even before T1 trunks are introduced; but if not before, certainly afterwards, when the packet price reflects the new trunk speed. Dynamic-allocation permits savings over pre-allocation by a factor of four in line costs for RJE, and by a factor of two for bulk data transfer. As the switch cost continues to fall far more rapidly than the line cost, dynamic-allocation techniques will be used for RJE, batch transfer and even voice applications.

FUTURE

Packet Satellite

One change which will clearly occur in packet networks in the next decade is the incorporation of broadcast satellite facilities. ARPA has sponsored extensive research into packet satellite techniques and, over the past few years, has tested these techniques between the U.S. and England. (See “General Purpose Packet Satellite Networks” by I. M. Jacobs, R. Binder, and E. Hoversten in this issue, pp. 1448-1467.) Fundamentally, a satellite provides a broadcast media which, if properly used, can provide considerable gains in the full statistical utilization of the satellite’s capacity. Using ARPA’s techniques, a single wideband channel (1.5 Mbit/s-60 Mbit/s) on a satellite provides an extremely economical way to interconnect high bandwidth nodes within a packet network.

With the current cost of ground stations ($150K-$300K), it appears to be marginally economic to install separate private ground stations at major nodes of a domestic packet network rather than to lease portions of commercial ground stations and trunk the data to the packet network nodes. However, either way, the cost of ground station facilities is such that the use of satellites only becomes economic compared to land lines when the aggregate data flow exceeds about 100 packets/s (100 kbit/s) to and from a node or city. Furthermore, satellite transmission has an inherent one-way delay of 270 ms; therefore, the packet traffic must logically be divided between two priority groups: interactive and batch. Only batch traffic can presently be considered for satellites, since the 270 ms delay is unacceptable for interactive applications, at least if any other options are available, even at a somewhat higher price. With current economics, the long-haul land line facilities only add about $0.50/hr to the price of interactive data calls, which is far too little a cost to encourage the acceptance of slower service. Therefore, interactive service will almost always require ground line facilities in addition to satellite facilities at all network nodes.

This introduces another factor that limits the potential satellite traffic: land lines can easily carry 10-25 percent batch traffic at a lower priority, using a dual queue, without any significant increase in cost. Further, if ground lines are required and satellite facilities are optional, the full cost for the satellite capability must be compared with the incremental cost of simply expanding the land line facilities. All these factors considered, it is probable that satellites will be used by public data networks within the next five years for transmissions between major nodal points, but that ground facilities will be used exclusively for transmissions between smaller nodes.

Packet Radio

Since local distribution is by far the most expensive portion of a communications network, ground radio techniques are of considerable interest to the extent that they can replace wire for local distribution. Packet radio is another area where ARPA has been sponsoring research in applying dynamic-allocation techniques. The basic concept in packet radio is to share one wide bandwidth channel among many stations, each of which only transmits in short bursts when it has real data to send. (See “Advances in Packet Radio Technology” by R. E. Kahn et al., in this issue, pp. 1468-1496.) This technique appears to be extremely promising for both fixed and mobile local distribution, once the cost of the transceivers has been reduced by, perhaps, a factor of ten. Considering the historical trend of the cost of electronics, this should take about five years; from that point onward packet radio should become increasingly competitive with wire, cable, and even light fibers for low to moderate volume local distribution requirements.

One important consequence would be the use of a simple packet radio system inside buildings to permit wireless communication for all sorts of devices. Clearly, as electronic devices multiply throughout the home and office, low-power packet radio would permit all these devices to communicate among themselves and with similar devices throughout the world via a master station tied into a public data network.

Voice

The economic advantage of dynamic-allocation over pre-allocation will soon become so fundamental and clear in all areas of communications, including voice, that it is not hard to project the same radical transition of technology will occur in voice communications as has occurred in data communications.

Digitized voice, no matter what the digitization rate, can be compressed by a factor of three or more by packet switching, since in normal conversation each speaker is only speaking one third of the time. Since interactive data traffic typically can be compressed by a factor of 15, voice clearly benefits far less from packet switching than interactive data. This is the reason why packet switching was first applied to data communications. However, modern electronics is quickly eliminating any cost difference between packet switches and circuit switches, and thus packet switching can clearly provide a factor of three cost reduction in the transmission costs associated with switched voice service.
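Restated, both factors are simply the reciprocals of the traffic duty cycles:

    # Packet switching recovers the idle fraction of a preallocated channel.
    voice_duty = 1 / 3        # each speaker talks about a third of the time
    data_duty = 1 / 15        # interactive data is far burstier

    print(f"voice:            ~{1 / voice_duty:.0f}x transmission saving")
    print(f"interactive data: ~{1 / data_duty:.0f}x transmission saving")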

Probably there will be many proposals, and even systems built, using some form of dynamic-allocation other than packet switching during the period of transition. The most likely variant design would be a packetized voice system that does not utilize checksums or flow control. Of course, this would be just a packet switch with those options disabled. If the similarity to present packet switching were not recognized, the packetized voice system might be built without providing these essential capabilities and would be useless for data traffic. However, the obvious solution would be an integrated packet switching network that provides both voice and data services.

On further consideration, it becomes apparent that the flow control feature of packet switching networks can provide a substantial cost reduction for voice systems. Flow control feedback, applied to the voice digitizers, decreases their output rate when the network line becomes momentarily overloaded; as a result, the peak channel capacity required by users can be significantly reduced.

In short, packet switching seems ideally suited to both voice and data transmissions. The transition to packet switching for the public data network has taken a decade, and still is not complete; many PTT’s and carriers have not accepted its viability. Given the huge fixed investment in voice equipment in place today, the transition to voice switching may be considerably slower and more difficult. There is no way, however, to stop it from happening.

REFERENCES

  1. P. Baran et al., "On Distributed Communications," vols. I-XI, RAND Corporation Research Documents, Aug. 1964.
  2. T. Marill and L. G. Roberts, "Toward a cooperative network of time-shared computers," Proc. FJCC, pp. 425-431, 1966.
  3. D. W. Davies, K. A. Bartlett, R. A. Scantlebury, and P. T. Wilkinson, "A digital communications network for computers giving rapid response at remote terminals," ACM Symp. Operating Systems Problems, Oct. 1967.
  4. R. A. Scantlebury, P. T. Wilkinson, and K. A. Bartlett, "The design of a message switching centre for a digital communication network," in Proc. IFIP Congress 1968, vol. 2, Hardware Applications, pp. 723-733.
  5. R. A. Scantlebury, "A model for the local area of a data communication network - objectives and hardware organization," ACM Symp. Data Communication, Pine Mountain, Oct. 1969.
  6. L. G. Roberts, "Multiple computer networks and intercomputer communication," ACM Symp. Operating System Principles, Oct. 1967.
  7. L. G. Roberts and B. D. Wessler, "Computer network development to achieve resource sharing," Proc. SJCC 1970, pp. 543-549.
  8. F. E. Heart, R. E. Kahn, S. M. Ornstein, W. R. Crowther, and D. C. Walden, "The interface message processor for the ARPA computer network," in AFIPS Conf. Proc., vol. 36, pp. 551-567, June 1970.
  9. R. E. Kahn and W. R. Crowther, "Flow control in a resource-sharing computer network," in Proc. Second ACM/IEEE Symp. Problems, Palo Alto, CA, pp. 108-116, Oct. 1971.
  10. S. M. Ornstein, F. E. Heart, W. R. Crowther, S. B. Russell, H. K. Rising, and A. Michel, "The terminal IMP for the ARPA computer network," in AFIPS Conf. Proc., vol. 40, pp. 243-254, June 1972.
  11. H. Frank, R. E. Kahn, and L. Kleinrock, "Computer communications network design - experience with theory and practice," in AFIPS Conf. Proc., vol. 40, pp. 255-270, June 1972.
  12. L. Kleinrock, "Performance models and measurement of the ARPA computer networks," in Proc. Int. Symp. Design and Application of Interactive Computer Systems, Brunel University, Uxbridge, England, May 1972.
  13. L. Kleinrock and W. Naylor, "On measured behavior of the ARPA network," in AFIPS Conf. Proc., vol. 43, NCC, Chicago, IL, pp. 767-780, May 1974.
  14. L. Kleinrock and H. Opderbeck, "Throughput in the ARPANET - protocols and measurement," in Proc. 4th Data Communications Symp., Quebec City, Canada, pp. 6-1-6-11, Oct. 1975.
  15. H. Frank and W. Chou, "Topological optimization of computer networks," Proc. IEEE, vol. 60, no. 11, pp. 1385-1397, Nov. 1972.
  16. G. J. Chretien, W. M. Konig, and J. H. Rech, "The SITA network," in Computer Communication Networks, R. L. Grimsdale and F. F. Kuo, Eds. Noordhoff: NATO Advanced Study Institute Series, pp. 373-396, 1975.
  17. L. R. Tymes, "TYMNET - a terminal oriented communication network," Proc. AFIPS SJCC, pp. 211-216, May 1971.
  18. L. Pouzin, "Presentation and major design aspects of the CYCLADES network," in Proc. Third Data Communications Symp., Tampa, FL, pp. 80-85, Nov. 1973.
  19. R. F. Despres, "A packet network with graceful saturated operation," in Proc. ICCC, Washington, DC, pp. 345-351, Oct. 1972.
  20. D. L. A. Barber, "The European computer network project," in Proc. ICCC, Washington, DC, pp. 192-200, Oct. 1972.
  21. D. L. A. Barber, "A European informatics network: achievement and prospects," in Proc. ICCC, Toronto, Canada, pp. 44-50, Aug. 1976.
  22. L. G. Roberts, "Data by the packet," IEEE Spectrum, Feb. 1974.
  23. R. C. Belton and J. R. Thomas, "The UKPO packet switching experiment," ISS Munich, 1974.
  24. A. Danet, R. Despres, A. Le Rest, G. Pichon, and S. Ritzenthaler, "The French public packet switching service: the TRANSPAC network," in Proc. ICCC, Toronto, Canada, pp. 251-260, Aug. 1976.
  25. W. W. Clipsham, F. E. Glave, and M. L. Narraway, "DATAPAC network overview," in Proc. ICCC, Toronto, Canada, pp. 131-136, Aug. 1976.
  26. R. Nakamura, F. Ishino, M. Sasakoka, and M. Nakamura, "Some design aspects of a public packet switched network," in Proc. ICCC, Toronto, Canada, pp. 317-322, Aug. 1976.


***********************************************************************************************************************************************

Network Working Group                                          J. Postel
Request for Comments: 801                                            ISI
                                                           November 1981

                         NCP/TCP TRANSITION PLAN

Introduction
------------

ARPA sponsored research on computer networks led to the development
of the ARPANET.  The installation of the ARPANET began in September
1969, and regular operational use was underway by 1971.  The ARPANET
has been an operational service for at least 10 years.  Even while it
has provided a reliable service in support of a variety of computer
research activities, it has itself been a subject of continuing
research, and has evolved significantly during that time.

In the past several years ARPA has sponsored additional research on
computer networks, principally networks based on different underlying
communication techniques, in particular, digital packet broadcast
radio and satellite networks.  Also, in the ARPA community there has
been significant work on local networks.

It was clear from the start of this research on other networks that
the base host-to-host protocol used in the ARPANET was inadequate for
use in these networks.  In 1973 work was initiated on a host-to-host
protocol for use across all these networks.  The result of this long
effort is the Internet Protocol (IP) and the Transmission Control
Protocol (TCP).

These protocols allow all hosts in the interconnected set of these
networks to share a common interprocess communication environment.
The collection of interconnected networks is called the ARPA Internet
(sometimes called the "Catenet").

The Department of Defense has recently adopted the internet concept
and the IP and TCP protocols in particular as DoD wide standards for
all DoD packet networks, and will be transitioning to this
architecture over the next several years.  All new DoD packet
networks will be using these protocols exclusively.

The time has come to put these protocols into use in the operational
ARPANET, and extend the logical connectivity of the ARPANET hosts to
include hosts in other networks participating in the ARPA Internet.

As with all new systems, there will be some aspects which are not as
robust and efficient as we would like (just as with the initial
ARPANET).  But with your help, these problems can be solved and we
can move into an environment with significantly broader communication
services.

Discussion
----------

The implementation of IP/TCP on several hosts has already been
completed, and the use of some services is underway.  It is urgent
that the implementation of IP/TCP be begun on all other ARPANET
hosts as soon as possible, and no later than 1 January 1982 in any
case.  Any new host connected to the ARPANET should only implement
IP/TCP and TCP-based services.  Several important implementation
issues are discussed in the last section of this memo.

Because all hosts can not be converted to TCP simultaneously, and
some will implement only IP/TCP, it will be necessary to provide
temporarily for communication between NCP-only hosts and TCP-only
hosts.  To do this certain hosts which implement both NCP and IP/TCP
will be designated as relay hosts.  These relay hosts will support
Telnet, FTP, and Mail services on both NCP and TCP.  These relay
services will be provided beginning in November 1981, and will be
fully in place in January 1982.

Initially there will be many NCP-only hosts and a few TCP-only hosts,
and the load on the relay hosts will be relatively light.  As time
goes by, and the conversion progresses, there will be more
TCP-capable hosts, and fewer NCP-only hosts, plus new TCP-only hosts.
But, presumably, most hosts that are now NCP-only will implement
IP/TCP in addition to their NCP and become "dual protocol" hosts.
So, while the load on the relay hosts will rise, it will not be a
substantial portion of the total traffic.

The next section expands on this plan, and the following section
gives some milestones in the transition process.  The last section
lists the key documents describing the new protocols and services.
Appendices present scenarios for use of the relay services.

The General Plan
----------------

The goal is to make a complete switch over from the NCP to IP/TCP by
1 January 1983.

It is the task of each host organization to implement IP/TCP for its
own hosts.  This implementation task must begin by 1 January 1982.


IP:

This is specified in RFCs 791 and 792.  Implementations exist

for several machines and operating systems.  (See Appendix D.)

TCP:

This is specified in RFC 793.  Implementations exist for several

machines and operating systems.  (See Appendix D.)

It is not enough to implement the IP/TCP protocols; the principal

services must be available on this IP/TCP base as well.  The

principal services are: Telnet, File Transfer, and Mail.

It is the task of each host organization to implement the

principal services for its own hosts.  These implementation tasks

must begin by 1 January 1982.

Telnet:

This is specified in RFC 764.  It is very similar to the Telnet

used with the NCP.  The primary differences are that the ICP is

eliminated, and the NCP Interrupt is replaced with the TCP

Urgent.

FTP:

This is specified in RFC 765.  It is very similar to the FTP

used with the NCP.  The primary differences are that, in

addition to the changes for Telnet, the data channel is

limited to 8-bit bytes, so FTP features for using other

transmission byte sizes are eliminated.

Mail:

This is specified in RFC 788.  Mail is separated completely

from FTP and handled by a distinct server.  The procedure is

similar in concept to the old FTP/NCP mail procedure, but is

very different in detail, and supports additional functions --

especially mail relaying, and multi-recipient delivery.

Beyond providing the principal services in the new environment, there

must be provision for interworking between the new environment and

the old environment between now and January 1983.

For Telnet, there will be provided one or more relay hosts.  A

Telnet relay host will implement both the NCP and TCP environments

and both user and server Telnet in both environments.  Users

requiring Telnet service between hosts in different environments


will first connect to a Telnet relay host and then connect to the

destination host.  (See Appendix A.)

For FTP, there will be provided one or more relay hosts.  An FTP

relay host will implement both the NCP and TCP environments, both

user and server Telnet, and both user and server FTP in both

environments.  Users requiring FTP service between hosts in

different environments will first connect via Telnet to an FTP

relay host, then use FTP to move the file from the file donor host

to the FTP relay host, and finally use FTP to move the file from

the FTP relay host to the file acceptor host.  (See Appendix B.)

For Mail, hosts will implement the new Simple Mail Transfer

Protocol (SMTP) described in RFC 788.  The SMTP procedure provides

for relaying mail among several protocol environments.  For

TCP-only hosts, using SMTP will be sufficient.  For NCP-only hosts

that have not been modified to use SMTP, the special syntax

“user.host@forwarder” may be used to relay mail via one or more

special forwarding hosts.  Several mail relay hosts will relay mail

via SMTP procedures between the NCP and TCP environments, and at

least one special forwarding host will be provided.  (See

Appendix C.)

Milestones

----------

First Internet Service                                        already

A few hosts are TCP-capable and use TCP-based services.

First TCP-only Host                                           already

The first TCP-only host begins use of TCP-based services.

Telnet and FTP Relay Service                                  already

Special relay accounts are available to qualified users with a

demonstrated need for the Telnet or FTP relay service.

Ad Hoc Mail Relay Service                                     already

An ad hoc mail relay service using the prototype MTP (RFC 780) is

implemented and mail is relayed from the TCP-only hosts to

NCP-only hosts, but not vice versa.  This service will be replaced

by the SMTP service.

Last NCP Conversion Begins                                     Jan 82

The last NCP-only host begins conversion to TCP.


Mail Relay Service                                             Jan 82

The SMTP (RFC 788) mail service begins to operate and at least one

mail relay host is operational, and at least one special forwarder

is operational to provide NCP-only host to TCP-only host mail

connectivity.

Normal Internet Service                                        Jul 82

Most hosts are TCP-capable and use TCP-based services.

Last NCP Conversion Completed                                  Nov 82

The last NCP-only host completes conversion to TCP.

Full Internet Service                                          Jan 83

All hosts are TCP-capable and use TCP-based services.  NCP is

removed from service, relay services end, all services are

TCP-based.

Documents

---------

The following RFCs document the protocols to be implemented in the

new IP/TCP environment:

IP                                                         RFC 791

ICMP                                                       RFC 792

TCP                                                        RFC 793

Telnet                                                     RFC 764

FTP                                                        RFC 765

SMTP                                                       RFC 788

Name Server                                                IEN 116

Assigned Numbers                                           RFC 790

These and associated documents are to be published in a notebook, and

other information useful to implementers is to be gathered.  These

documents will be made available on the following schedule:

Internet Protocol Handbook                                  Jan 82

Implementers Hints                                          Jan 82

SDC IP/TCP Specifications                                   Jan 82

Expanded Host Table                                         Jan 82


Implementation Issues

---------------------

There are several implementation issues that need attention, and

there are some facilities associated with these protocols that are

not necessarily obvious.  Some of these may need to be upgraded or

redesigned to work with the new protocols.

Name Tables

Most hosts have a table for converting character string names of

hosts to numeric addresses.  There are two effects of this

transition that may impact a host’s table of host names: (1) there

will be many more names, and (2) there may be a need to note the

protocol capability of each host (SMTP/TCP, SMTP/NCP, FTP/NCP,

etc.).

Some hosts have kept this table in the operating system address

space to provide for fast translation using a system call.  This

may not be practical in the future.

There may be applications that could take alternate actions if

they could easily determine if a remote host supported a

particular protocol.  It might be useful to extend host name

tables to note which protocols are supported.

It might be necessary for the host name table to contain names of

hosts reachable only via relays if this name table is used to

verify the spelling of host names in application programs such as

mail composition programs.

It might be advantageous to do away with the host name table and

use a Name Server instead, or to keep a relatively small table as

a cache of recently used host names.

A format, distribution, and update procedure for the expanded host

table will be published soon.
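To make this concrete, here is a minimal sketch in Python of a host name table that records per-host protocol capability, with a small cache in front of a name-server lookup.  All host names, addresses, and capability labels below are invented for illustration; the actual expanded host table format was still to be published when this plan was written.

    # Hypothetical host table: name -> (address, supported protocols).
    HOST_TABLE = {
        "ALPHA": ("10.0.0.1", {"TCP", "NCP"}),   # a dual-protocol host
        "BETA":  ("10.0.0.2", {"NCP"}),          # an NCP-only host
    }

    cache = {}   # small cache of recently used names, per the text above

    def lookup(name):
        """Return (address, protocols), consulting the cache first."""
        if name in cache:
            return cache[name]
        entry = HOST_TABLE.get(name)   # in practice: query a name server
        if entry is None:
            raise KeyError("unknown host: " + name)
        cache[name] = entry
        return entry

    def supports(name, protocol):
        """Let an application choose an alternate action when a remote
        host lacks a protocol (e.g. SMTP/TCP versus SMTP/NCP)."""
        return protocol in lookup(name)[1]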

Mail Programs

It may be possible to move to the new SMTP mail procedures by

changing only the mailer-daemon and implementing the SMTP-server,

but in some hosts there may be a need to make some small changes

to some or all of the mail composition programs.

There may be a need to allow users to identify relay hosts for

messages they send.  This may require a new command or address

syntax not currently allowed.


IP/TCP

Continuing use of IP and TCP will lead to a better understanding

of the performance characteristics and parameters.  Implementers

should expect to make small changes from time to time to improve

performance.

Shortcuts

There are some very tempting shortcuts in the implementation of IP

and TCP.  DO NOT BE TEMPTED!  Others have been, and they have been

caught!  Some deficiencies with past implementations that must be

remedied and are not allowed in the future are the following:

IP problems:

Some IP implementations did not verify the IP header

checksum.  (A verification sketch follows these lists.)

Some IP implementations did not implement fragment

reassembly.

Some IP implementations used static and limited routing

information, and did not make use of the ICMP redirect

message information.

Some IP implementations did not process options.

Some IP implementations did not report errors they detected

in a useful way.

TCP problems:

Some TCP implementations did not verify the TCP checksum.

Some TCP implementations did not reorder segments.

Some TCP implementations did not protect against silly

window syndrome.

Some TCP implementations did not report errors they detected

in a useful way.

Some TCP implementations did not process options.

Host problems:

Some hosts had limited or static name tables.
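As one example of what "verify the IP header checksum" involves: the RFC 791 checksum is the one's complement of the one's-complement sum of the header's 16-bit words, so a received header whose words (checksum field included) sum to 0xFFFF is intact.  A short sketch in Python, offered as an illustration rather than as part of the plan:

    import struct

    def ones_complement_sum16(data):
        """One's-complement sum of big-endian 16-bit words."""
        if len(data) % 2:
            data += b"\x00"                     # pad odd-length input
        total = 0
        for (word,) in struct.iter_unpack("!H", data):
            total += word
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return total

    def ip_header_ok(header):
        """A received header is intact when the sum over the whole
        header, checksum field included, equals 0xFFFF."""
        return ones_complement_sum16(header) == 0xFFFF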


Relay Service

The provision of relay services has started.  There are two

concerns about the relay service: (1) reliability, and (2) load.

The reliability is a concern because relaying puts another host in

the chain of things that have to all work at the same time to get

the job done.  It is desirable to provide alternate relay hosts if

possible.  This seems quite feasible for mail, but it may be a bit

sticky for Telnet and FTP due to the need for access control of

the login accounts.

The load is a potential problem, since an overloaded relay host

will lead to unhappy users.  This is another reason to provide a

number of relay hosts, to divide the load and provide better

service.

A Digression on the Numbers

How bad could it be, this relay load?  Essentially any “dual

protocol” host takes itself out of the game (i.e., does not need

relay services). Let us postulate that the number of NCP-only

hosts times the number of TCP-only hosts is a measure of the relay

load.

Total Hosts   Dual Hosts   NCP-only   TCP-only   “Load”        Date

    200            20          178          2        356      Jan-82

    210            40          158         12       1896      Mar-82

    220            60          135         25       3375      May-82

    225            95           90         40       3600      Jul-82

    230           100           85         45       3825      Sep-82

    240           125           55         60       3300      Nov-82

    245           155           20         70       1400      Dec-82

    250           170            0         80          0   31-Dec-82

    250             0            0        250          0    1-Jan-83

This assumes that most NCP-only hosts (but not all) will become

dual protocol hosts, and that 50 new hosts will show up over the

course of the year, and all the new hosts are TCP-only.

If the initial 200 hosts immediately split into 100 NCP-only and

100 TCP-only then the “load” would be 10,000, so the fact that

most of the hosts will be dual protocol hosts helps considerably.

This load measure (NCP hosts times TCP hosts) may overstate the

load significantly.

Please note that this digression is rather speculative!
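The model is simple enough to replay mechanically.  The sketch below (Python) just recomputes the “load” column from the postulated host counts in the table above:

    # load = NCP-only hosts x TCP-only hosts; dual hosts need no relay.
    projection = [
        # (date,      dual, ncp_only, tcp_only)
        ("Jan-82",      20,  178,   2),
        ("Mar-82",      40,  158,  12),
        ("May-82",      60,  135,  25),
        ("Jul-82",      95,   90,  40),
        ("Sep-82",     100,   85,  45),
        ("Nov-82",     125,   55,  60),
        ("Dec-82",     155,   20,  70),
        ("31-Dec-82",  170,    0,  80),
        ("1-Jan-83",     0,    0, 250),
    ]
    for date, dual, ncp, tcp in projection:
        total = dual + ncp + tcp
        print(f"{date:>9}  total={total:3}  load={ncp * tcp:5}")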


Gateways

There must be continuing development of the internet gateways.

The following items need attention:

Congestion Control via ICMP

Gateways use connected networks intelligently

Gateways have adequate buffers

Gateways have fault isolation instrumentation

Note that the work in progress on the existing gateways will

provide the capability to deal with many of these issues early in

1982.  Work is also underway to provide improved-capability

gateways based on new hardware late in 1982.


APPENDIX A.  Telnet Relay Scenario

Suppose a user at a TCP-only host wishes to use the interactive

services of an NCP-only service host.

1)  Use the local user Telnet program to connect via Telnet/TCP to

the RELAY host.

2)  Login on the RELAY host using a special account for the relay

service.

3)  Use the user Telnet on the RELAY host to connect via

Telnet/NCP to the service host.  Since both Telnet/TCP and

Telnet/NCP are available on the RELAY host the user must

select which is to be used in this step.

4)  Login on the service host using the regular account.

+---------+          +---------+          +---------+

|         |  Telnet  |         |  Telnet  |         |

| Local   |<-------->|  Relay  |<-------->| Service |

|  Host   |   TCP    |   Host  |   NCP    |   Host  |

+---------+          +---------+          +---------+

Suppose a user at an NCP-only host wishes to use the interactive

services of a TCP-only service host.

1)  Use the local user Telnet program to connect via Telnet/NCP to

the RELAY host.

2)  Login on the RELAY host using a special account for the relay

service.

3)  Use the user Telnet on the RELAY host to connect via

Telnet/TCP to the service host.  Since both Telnet/TCP and

Telnet/NCP are available on the RELAY host the user must

select which is to be used in this step.

4)  Login on the service host using the regular account.

+---------+          +---------+          +---------+

|         |  Telnet  |         |  Telnet  |         |

| Local   |<-------->|  Relay  |<-------->| Service |

|  Host   |   NCP    |   Host  |   TCP    |   Host  |

+---------+          +---------+          +---------+
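As a purely illustrative walk-through of the first scenario, a relay session might read as follows.  Every command name, prompt, and account shown here is invented; the actual user interfaces varied from host to host.

    (local host)   @telnet RELAY         ; step 1: Telnet/TCP to the relay
    RELAY login:   relay-guest           ; step 2: the special relay account
    (RELAY host)   @ncp-telnet SERVICE   ; step 3: select the NCP user Telnet
    SERVICE login: smith                 ; step 4: ordinary login, as usual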


APPENDIX B.  FTP Relay Scenario

Suppose a user at a TCP-only host wishes to copy a file from an NCP-only

donor host.

Phase 1:

1)  Use the local user Telnet program to connect via Telnet/TCP

to the RELAY host.

2)  Login on the RELAY host using a special account for the

relay service.

3)  Use the user FTP on the RELAY host to connect via FTP/NCP

to the donor host.

4)  FTP login on the donor host using the regular account.

5)  Copy the file from the donor host to the RELAY host.

6)  End the FTP session, and disconnect from the donor host.

7)  Logout of the RELAY host, close the Telnet/TCP connection,

and quit Telnet on the local host.

+---------+          +---------+          +---------+

|         |  Telnet  |         |   FTP    |         |

| Local   |<-------->|  Relay  |<-------->| Service |

|  Host   |   TCP    |   Host  |   NCP    |   Host  |

+---------+          +---------+          +---------+


Phase 2:

1)  Use the local user FTP to connect via FTP/TCP to the RELAY

host.

2)  FTP login on the RELAY host using the special account for

the relay service.

3)  Copy the file from the RELAY host to the local host, and

delete the file from the RELAY host.

4)  End the FTP session, and disconnect from the RELAY host.

+---------+          +---------+

|         |   FTP    |         |

| Local   |<-------->|  Relay  |

|  Host   |   TCP    |   Host  |

+---------+          +---------+

Note that the relay host may have a policy of deleting files more

than a few hours or days old.


APPENDIX C.  Mail Relay Scenario

Suppose a user on a TCP-only host wishes to send a message to a user

on an NCP-only host which has implemented SMTP.

1)  Use the local mail composition program to prepare the message.

Address the message to the recipient at his or her host.  Tell

the composition program to queue the message.

2)  The background mailer-daemon finds the queued message.  It

checks the destination host name in a table to find the

internet address.  Instead it finds that the destination host

is an NCP-only host.  The mailer-daemon then checks a list of

mail RELAY hosts and selects one.  It sends the message to the

selected mail RELAY host using the SMTP procedure.

3)  The mail RELAY host accepts the message for relaying.  It

checks the destination host name and discovers that it is an

NCP-only host which has implemented SMTP.  The mail RELAY host

then sends the message to the destination using the SMTP/NCP

procedure.

+---------+          +---------+          +---------+

|         |   SMTP   |         |   SMTP   |         |

| Source  |<-------->|  Relay  |<-------->|  Dest.  |

|  Host   |   TCP    |   Host  |   NCP    |   Host  |

+---------+          +---------+          +---------+
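A minimal sketch (Python, with invented relay names) of the decision the mailer-daemon makes in step 2: deliver directly over SMTP/TCP when the destination speaks TCP, otherwise hand the message to a mail relay host.

    MAIL_RELAYS = ["RELAY-1", "RELAY-2"]     # hypothetical relay hosts

    def next_hop(dest_host, capabilities):
        """Return the host to open the SMTP session to."""
        if "TCP" in capabilities[dest_host]:
            return dest_host                 # ordinary direct SMTP/TCP
        return MAIL_RELAYS[0]                # simplest policy: first relay

    # Example: an NCP-only destination goes via the relay.
    caps = {"ABC-X": {"TCP"}, "OLD-1": {"NCP"}}
    assert next_hop("ABC-X", caps) == "ABC-X"
    assert next_hop("OLD-1", caps) == "RELAY-1"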


Suppose a user on a TCP-only host wishes to send a message to a user

on an NCP-only non-SMTP host.

1)  Use the local mail composition program to prepare the message.

Address the message to the recipient at his or her host.  Tell

the composition program to queue the message.

2)  The background mailer-daemon finds the queued message.  It

checks the destination host name in a table to find the

internet address.  Instead it finds that the destination host

is an NCP-only host.  The mailer-daemon then checks a list of

mail RELAY hosts and selects one.  It sends the message to the

selected mail RELAY host using the SMTP procedure.

3)  The mail RELAY host accepts the message for relaying.  It

checks the destination host name and discovers that it is an

NCP-only non-SMTP host.  The mail RELAY host then sends the

message to the destination using the old FTP/NCP mail

procedure.

+---------+          +---------+          +---------+

|         |   SMTP   |         |   FTP    |         |

| Source  |<-------->|  Relay  |<-------->|  Dest.  |

|  Host   |   TCP    |   Host  |   NCP    |   Host  |

+---------+          +---------+          +---------+


Suppose a user on an NCP-only non-SMTP host wishes to send a message

to a user on a TCP-only host.  Suppose the destination user is

“Smith” and the host is “ABC-X”.

1)  Use the local mail composition program to prepare the message.

Address the message to “Smith.ABC-X@FORWARDER”.  Tell the

composition program to queue the message.

2)  The background mailer-daemon finds the queued message.  It

sends the message to host FORWARDER using the old FTP/NCP mail

procedure.

3)  The special forwarder host converts the “user name” supplied

by the FTP/NCP mail procedure (in the MAIL or MLFL command) to

“Smith@ABC-X” (in the SMTP RCPT command) and queues the

message to be processed by the SMTP mailer-daemon program on

this same host.  No conversion of the mailbox addresses is

made in the message header or body.

4)  The SMTP mailer-daemon program on the forwarder host finds

this queued message and checks the destination host name in a

table to find the internet address.  It finds the destination

address and sends the mail using the SMTP procedure.

+---------+          +---------+          +---------+

|         |   FTP    |         |   SMTP   |         |

| Source  |<-------->|Forwarder|<-------->|  Dest.  |

|  Host   |   NCP    |   Host  |   TCP    |   Host  |

+---------+          +---------+          +---------+
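The forwarder's rewrite in step 3 is a small textual transformation.  This sketch (not the forwarder's actual code) treats the rightmost dot as the user/host separator, which is an assumption of the illustration; as the text notes, the message header and body pass through unchanged.

    def forwarder_rewrite(ncp_user_name):
        """'Smith.ABC-X' (FTP/NCP MAIL/MLFL user name) becomes
        'Smith@ABC-X' (SMTP RCPT address)."""
        user, sep, host = ncp_user_name.rpartition(".")
        if not sep:
            raise ValueError("no host part in: " + ncp_user_name)
        return user + "@" + host

    assert forwarder_rewrite("Smith.ABC-X") == "Smith@ABC-X"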


APPENDIX D.  IP/TCP Implementation Status

Please note that the information in this section may become quickly

dated.  Current information on the status of IP and TCP

implementations can be obtained from the file

<INTERNET-NOTEBOOK>TCP-IP-STATUS.TXT on ISIF.

BBN C70 UNIX

Date:  18 Nov 1981

From:  Rob Gurwitz <gurwitz at BBN-RSM>

The C/70 processor is a BBN-designed system with a native

instruction set oriented toward executing the C language.  It

supports UNIX Version 7 and provides for user processes with a

20-bit address space.  The TCP/IP implementation for the C/70 was

ported from the BBN VAX TCP/IP, and shares all of its features.

This version of TCP/IP is running experimentally at BBN, but is

still under development.  Performance tuning is underway, to make

it more compatible with the C/70’s memory management system.

BBN GATEWAYS

Date:  19 Nov 1981

From:  Alan Sheltzer <sheltzer at BBN-UNIX>

In an effort to provide improved service in the gateways

controlled by BBN, a new gateway implementation written in

macro-11 instead of BCPL is being developed.  The macro-11 gateway

will provide users with internet service that is functionally

equivalent to that provided by the current BCPL gateways with some

performance improvements.

ARPANET/SATNET gateway at BBN (10.3.0.40),

ARPANET/SATNET gateway at NDRE (10.3.0.41),

Comsat DCN Net/SATNET gateway at COMSAT (4.0.0.39),

SATNET/UCL Net/RSRE Net gateway at UCL (4.0.0.60),

PR Net/RCC Net gateway at BBN (3.0.0.62),

PR Net/ARPANET gateways at SRI (10.3.0.51, 10.1.0.51),

PR Net/ARPANET gateway at Ft. Bragg (10.0.0.38).


BBN H316 and C/30 TAC

Date:  18 November 1981

From:  Bob Hinden <Hinden@BBN-UNIX>

The Terminal Access Controller (TAC) is a user Telnet host that

supports the TCP/IP and NCP host-to-host protocols.  It runs in 32K

H-316 and 64K C/30 computers.  It supports up to 63 terminal

ports.  It connects to a network via an 1822 host interface.

For more information on the TAC’s design, see IEN-166.

BBN HP-3000

Date:  14 May 1981

From:  Jack Sax <sax@BBN-UNIX>

The HP3000 TCP code is in its final testing stages.  The code

runs under the MPE IV operating system as a special high

priority process.  It is not a part of the operating system kernel

because MPE IV has no kernel.  The protocol process includes TCP,

IP, 1822 and a new protocol called HDH which allows 1822 messages

to be sent over HDLC links.  The protocol process has about 8k

bytes of code and at least 20k bytes of data depending on the

number of buffers allocated.

In addition to the TCP the HP3000 has user and server TELNET as

well as user FTP.  A server FTP may be added later.

A complete description of the implementation software can be found

in IEN-167.

BBN PDP-11 UNIX

Date:  14 May 1981

From:  Jack Haverty <haverty@BBN-UNIX>

This TCP implementation was written in C.  It runs as a user

process in version 6 UNIX, with modifications added by BBN for

network access.  It supports user and server Telnet.

This implementation was done under contract to DCEC.  It is

installed currently on several PDP-11/70s and PDP-11/44s.  Contact

Ed Cain at DCEC <cain@EDN-UNIX> for details of further

development.


BBN TENEX & TOPS20

Date:  23 Nov 1981

From:  Charles Lynn <CLynn@BBNA>

TCP4 and IP4 are available for use with the TENEX operating system

running on a Digital KA10 processor with BBN pager.  TCP4 and IP4

are also available as part of TOPS20 Release 3A and Release 4 for

the Digital KL10 and KL20 processors.

Above the IP layer, there are two Internet protocols within the

monitor itself (TCP4 and GGP).  In addition up to eight (actually

a monitor assembly parameter) protocols may be implemented by

user-mode programs via the “Internet User Queue” interface. The

GGP or Gateway-Gateway Protocol is used to receive advice from

Internet Gateways in order to control message flow.  The GGP code

is in the process of being changed and the ICMP protocol is being

added.

TCP4 is the other monitor-supplied protocol and it has two types

of connections -- normal data connections and “TCP Virtual

Terminal” (TVT) connections.  The former are used for bulk data

transfers while the latter provide terminal access for remote

terminals.

Note that TVTs use the standard (“New”) TELNET protocol.  This is

identical to that used on the ARPANET with NCP and in fact, is

largely implemented by the same code.

Performance improvements, support for the new address formats, and

User and Server FTP processes above the TCP layer are under

development.

BBN VAX UNIX

Date:  18 Nov 1981

From:  Rob Gurwitz <gurwitz at BBN-RSM>

The VAX TCP/IP implementation is written in C for Berkeley 4.1BSD

UNIX, and runs in the UNIX kernel.  It has been run on VAX 11/780s

and 750s at several sites, and is due to be generally available in

early 1982.

The implementation conforms to the TCP and IP specifications (RFC

791, 793).  The implementation supports the new extended internet

address formats, and both GGP and ICMP.  It also supports multiple

network access protocols and device drivers.  Aside from ARPANET

1822 and the ACC LH/DH-11 driver, experimental drivers have also

been developed for ETHERNET.  There are user interfaces for


accessing the IP and local network access layers independent of

the TCP.

Higher level protocol services include user and server TELNET,

MTP, and FTP, implemented as user level programs.  There are also

tools available for monitoring and recording network traffic for

debugging purposes.

Continuing development includes performance enhancements.  The

implementation is described in IEN-168.

COMSAT

Date:  30 Apr 1980

From:  Dave Mills <Mills@ISIE>

The TCP/IP implementation here runs in an LSI-11 with a homegrown

operating system compatible in most respects to RT-11. Besides the

TCP/IP levels the system includes many of the common high-level

protocols used in the ARPANET community, such as TELNET, FTP and

XNET.

DCEC PDP-11 UNIX

Date:  23 Nov 1981

From:  Ed Cain <cain@EDN-UNIX>

This TCP/IP/ICMP implementation runs as a user process in version

6 UNIX, with modifications obtained from BBN for network access.

IP reassembles fragments into datagrams, but has no separate IP

user interface.  TCP supports user and server Telnet, echo,

discard, internet mail, and a file transfer service. ICMP

generates replies to Echo Requests, and sends Source-Quench when

reassembly buffers are full.

Hardware – PDP-11/70 and PDP-11/45 running UNIX version 6, with

BBN IPC additions.  Software – written in C, requiring 25K

instruction space, 20K data space.  Supports 10 connections.


DTI VAX

Date:  15 May 1981

From:  Gary Grossman <grg@DTI>

Digital Technology Incorporated (DTI) IP/TCP for VAX/VMS

The following describes the IP and TCP implementation that DTI

plans to begin marketing in 4th Quarter 1981 as part of its

VAX/VMS network software package.

Hardware:  VAX-11/780 or /750.  Operating System:  DEC standard

VAX/VMS Release 2.0 and above.  Implementation Language:   Mostly

C, with some MACRO.  Connections supported:  Maximum of 64.

User level protocols available:  TELNET, FTP, and MTP will be

available. (The NFE version uses AUTODIN II protocols.)

MIT MULTICS

Date:  13 May 1981

From:  Dave Clark <Clark@MIT-Multics>

Multics TCP/IP is implemented in PL/1 for the HISI 68/80. It has

been in experimental operation for about 18 months; it can be

distributed informally as soon as certain modifications to the

system are released by Honeywell.  The TCP and IP package are

currently being tuned for performance, especially high throughput

data transfer.

Higher level services include user and server telnet, and a full

function MTP mail forwarding package.

The TCP and IP contain good logging and debugging facilities,

which have proved useful in the checkout of other implementations.

Please contact us for further information.

SRI LSI-11

Date:  15 May 1981

From:  Jim Mathis <mathis.tscb@Sri-Unix>

The IP/TCP implementation for the Packet Radio terminal interface

unit is intended to run on an LSI-11 under the MOS real-time

operating system.  The TCP is written in MACRO-11 assembler

language.  The IP is currently written in assembler language, but

is being converted into C. There are no plans to convert the TCP

from assembler into C.


The TCP implements the full specification.  The TCP appears to be

functionally compatible with all other major implementations.  In

particular, it is used on a daily basis to provide communications

between users on the Ft. Bragg PRNET and ISID on the ARPANET.

The IP implementation is reasonably complete, providing

fragmentation and reassembly; routing to the first gateway; and a

complete host-side GGP process.

A measurement collection mechanism is currently under development

to collect TCP and IP statistics and deliver them to a measurement

host for data reduction.

UCLA IBM

Date:  13 May 1981

From:  Bob Braden <Braden@ISIA>

Hardware:  IBM 360 or 370, with a “Santa Barbara” interface to the

IMP.

Operating System:  OS/MVS with ACF/VTAM.  An OS/MVT version is

also available.  The UCLA NCP operates as a user job, with its own

internal multiprogramming and resource management mechanisms.

Implementation Language:  BAL (IBM’s macro assembly language)

User-Level Protocols Available:  User and Server Telnet

Internet Statistics
______________________________________________

From the time I started the ARPANET in 1969, I have been keeping track of the traffic. The traffic has grown by about a trillion to one over the intervening 40 years. For the first 20 years the Internet backbone remained based on the ARPANET Interface Message Processors (IMPs). Then, as the Internet went commercial in 1991, many new makes of routers came into use, although they have continued to use the same basic structure.
The community of users also grew rapidly, but they never had to move off the network. The underlying packet network is mostly invisible to the users, so even as the protocol changed from NCP to TCP/IP in 1983 and the equipment changed from time to time, the user community just continued to grow without interruption. Many of us have now been using the Internet without interruption for 40 years.
For the period from 1969 to 1990 the government, either DARPA or NSF, kept accurate records of the network traffic. Then, when many different commercial organizations interconnected to form the Internet backbone in the 1990s, there were no accurate records, since each company kept its own traffic confidential. Thus, from 2000 to 2001, I undertook to determine the total traffic by collecting traffic data under NDAs from the 20 major Internet backbone providers. This provided sufficient data so that the traffic in the intervening years could be interpolated. More recently, almost every country in the world has been reporting its traffic, so that the traffic in the in-between years could again be interpolated. This is how the data below was collected and put together. The volume data is in petabytes per month, where a petabyte per month is equal to 1,000 terabytes per month. Also listed is the speed data, which is in gigabits per second at the peak hour. The relationship between gigabits/second and petabytes/month varies over the history of the network, as the usage pattern spread out with people working at night and in many time zones.
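As a rough check on the units, the sketch below (Python, assuming an even 30-day month) converts a monthly volume into the sustained average rate it implies and compares that with the peak-hour figures in the table that follows. On the two sample rows it shows the peak-to-average ratio narrowing from roughly 4:1 in 1970 to about 1.2:1 by 2008, which is the flattening of the usage pattern described above.

    SECONDS_PER_MONTH = 30 * 24 * 3600        # 2,592,000 s in a 30-day month

    def average_gbps(pb_per_month):
        """Sustained average rate implied by a monthly volume."""
        bits = pb_per_month * 1e15 * 8         # petabytes -> bits
        return bits / SECONDS_PER_MONTH / 1e9  # -> gigabits per second

    # Two sample rows from the table below: (year, PB/month, peak Gbps).
    for year, pb, peak in [(1970, 2.569e-9, 3.303e-8), (2008, 7468, 28516)]:
        avg = average_gbps(pb)
        print(year, f"average {avg:.3g} Gbps, peak/average {peak / avg:.2f}")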

________________________________________________________________________________________________________________________________

INTERNET AND INDIA – HISTORY

The Internet in India is still 10 years behind – there is no Indian Internet.

INET conference in Kuala Lumpur – the ICANN formation days and the Domain Name System expansion.  Jon Postel and Vint Cerf, fathers of the Internet.  Dr. Vint Cerf and K.S. Raju – the true guru of me.

INDIA'S 1ST ISOC CHARTER – ANDHRA PRADESH, EARLY 2000

India's first charter dates to the 1990s.

Mr Joel Maloff (Internet veteran, VoIP expert, my early guru and mentor on VoIP),

Mr Chandra Babu Naidu (when he was CM – a visionary on Vision 2020 and Hitec City),

Mr K.S. Raju – Internet evangelist, technologist, next-generation Internet architect: the net will think and act; the inter-intelligent process net; H-commerce, a human-knowledge commerce network.

--------------------------------------------------------------------------------

Internet Traffic in PetaBytes/month and in peak Gigabits/second
Collected by L. Roberts
Copyright 2009 L.G. Roberts

Year      PB/month       Peak Gbps
1970      2.569E-09      3.303E-08
1971      4.952E-09      7.330E-08
1972      9.176E-09      1.336E-07
1973      1.457E-08      2.195E-07
1974      2.111E-08      3.235E-07
1975      2.911E-08      4.510E-07
1976      3.886E-08      6.062E-07
1977      5.066E-08      8.038E-07
1978      7.049E-08      1.090E-06
1979      9.466E-08      1.475E-06
1980      1.240E-07      1.942E-06
1981      1.657E-07      2.600E-06
1982      2.233E-07      3.959E-06
1983      6.527E-07      9.087E-06
1984      1.453E-06      2.171E-05
1985      3.401E-06      5.292E-05
1986      1.079E-05      1.687E-04
1987      4.916E-05      6.922E-04
1988      1.772E-04      2.573E-03
1989      6.150E-04      8.388E-03
1990      0.001479       0.02181
1991      0.003564       0.05190
1992      0.008162       0.1184
1993      0.01825        0.2645
1994      0.04201        0.6128
1995      0.1192         1.496
1996      0.3638         3.804
1997      1.177          9.779
1998      3.962          24.26
1999      13.86          63.69
2000      44.95          198.7
2001      139.4          526.8
2002      278.2          1,089
2003      536.0          2,108
2004      1,052          4,070
2005      2,023          7,725
2006      3,342          12,761
2007      4,957          18,928
2008      7,468          28,516
2009      10,598         40,468