Chapter 3

Technical and Economic Foundations of Hypercommunication Networks

"In network economics, more brings more." [Kelly, 1994]

Chapter 2 explained why hypercommunications exist economically by tracing the origins of the information economy and reviewing the new economic thinking that resulted. Chapter 3 explains why hypercommunications exist technically by tracing the origins of communication networks and reviewing the network economics that resulted. Chapter 2 showed that technical and economic components jointly shaped the economic foundation of hypercommunications. Similarly, Chapter 3 shows that technical and economic components jointly shape the technical foundation of hypercommunications.

The 1996 comments of Richard J. Shultz provide purpose and organization to Chapter 3, the second why chapter. Shultz details two stages of what he calls "engineering economics" that preceded the era of a "new economics" of telecommunications:

The first, which could be called "classical engineering economics", was an engineer's delight. This was an era of system-wide planning of telephone networks with an overriding emphasis on systemic integrity. The little differentiation that existed was in very broad categories, such as rural-urban, residential-business, and was used primarily for pricing rather than costing and engineering purposes.

The underlying premise was the concept of natural monopoly which was both an article of faith and, to continue the metaphor, was assumed to be the product of immaculate conception. This was the era of producer sovereignty, where the idea of a telecommunications market was somewhat of an oxymoron. As far as customers were concerned, they were not customers in any meaningful sense. Rather, they were "subscribers" who were not active participants but passive subjects who were serviced, provided for, by the telephone company which had a monopoly on what became an essential service.

The second stage of "engineering economics" covers approximately the last 25 years. In this period, telephone companies were challenged by political authorities--elected, departmental, and regulatory. But they did not question or challenge, for the most part, the fundamental precepts of classical engineering economics based on the idea of natural monopoly. Rather, they sought to supplant telco engineers as planners with social and political engineering. Such engineering was more akin to manipulation and logrolling. . . .

The "new economics" of Canadian telecommunications could not be more profoundly different from "engineering economics" in either form. To invoke a current cliché, the "new economics" represents a paradigm shift, not simply adjustments at the margin. In the first place, the concept of natural monopoly has lost virtually all meaning and relevance to contemporary telecommunications. . . .

Secondly the concept of a single, integrated telephone system has been "blown away", replaced by an abundance of market niches, dissolving boundaries, and a concentration on interoperability and interconnectivity.

The third major characteristic of the "new economics" is the downgrading of the status of the corporate telecommunication's engineer, at least in terms of the profession's classical domination of the sector. The marketing specialist has emerged as a driving force, not only in the development of individual services, but far more importantly in shaping and determining the very nature of the telecommunications firm.

All this reflects the most profound change, the collapse of both producer, and its erstwhile rival and successor, political sovereignty. . . . Once subject and serviced, the customer has been empowered and is now full citizen and in the driver's seat. For the first time, the "new economics" of telecommunications is the economics of consumer sovereignty. Telecommunications will, henceforth, be customer-driven and controlled. [Shultz, 1996, pp. 35-37]

This chapter traces the technical evolution of communication networks through both stages of classical engineering economics into the new telecommunications economics described by Shultz. After a short introduction, section 3.2 defines the term digital and provides a conceptual definition of digital signals and sources. Then, section 3.3 covers technical characteristics of the PSTN. Engineering economics finds additional expression as section 3.4 covers some technical characteristics of computer networks. In section 3.5, six economic generations of computer networks are described, while section 3.6 discusses the final stage of the engineering perspective by covering how Operations Research (OR) helps the hypercommunications network take shape. Network economics is based on these technical foundations. A literature review of the new economics of the network is given in 3.7. The chapter concludes with a short summary.

3.1 Introduction

Powerful network effects are one source of the dual myths of unlimited communications and an unlimited economic frontier. After introducing the economics of networks and considering the role played by network economics in establishing and denying the dual myths, section 3.1 will briefly consider the ongoing folklore of unlimited communications.

Network effects (often called externalities or synergies) are part of a new vocabulary of information economy terms accompanied by new economic thought. The new thought ranges from the almost unlimited "new economy" [Kelly, 1998] to the new, but limited "weightless economy" [Kwah, 1996, 1997; Coyle, 1997; Cameron, 1998] and, finally, to the limited, and well known "network economy" [Shapiro and Varian, 1998]. While there is a debate about what to call the information economy, there is general agreement that networks are playing an important role in creating a new economic model, or at least in revising the old one. Networks help speed information flow, promote innovation, and generate markets that behave differently from those in the industrial economy.

The debate about the modern economic vocabulary stems from differences of opinion concerning economic theory and network effects [DeLong, 1998, p. 3]. Authors such as Kelly [1998] argue that economists are baffled by a new economic order rooted in the distinctive economic logic of networks. However, microeconomists like Shapiro and Varian claim that a new economics is not needed to understand technological change since economic laws do not change [Shapiro and Varian, 1998]. From a macroeconomic viewpoint, weightlessness or dematerialization naturally occurs as the physical economy grows into a virtual, network economy as production and consumption shift away from atoms and molecules and towards bits and bytes [Kwah, 1996].

Networks are important to the evolving U.S. economy, whether it is called new, network, or weightless. Nortel Corporation (a major manufacturer of hypercommunication hardware) reported in 1996 on the enormous economic scope of networks:

Businesses are rushing to embrace this new paradigm. Every four minutes another network is added to the world. Every 4/10 ths of a second another user comes online. Businesses spent over $725 billion on information technologies in 1996. . . . Last year they spent nearly $29 billion on network hardware according to J.P. Morgan. . . . In three years, it estimates this may rise to $72 billion. [Nortel, 1996, p. 3]

If hypercommunication demand arises from characteristics of the information economy (as this "why" chapter argues), networks are responsible for many of those characteristics. The networked hypercommunications model provides users with communication possibilities that are exponentially different from the older interpersonal and mass communications models. Recent dramatic growth in business data and voice networks has introduced popular definitions of network that do not always match the meaning of the term in the economics literature.

3.2 Definitions

Before beginning a discussion of the analog-digital distinction, which is critical to defining rural hypercommunications infrastructure, the International System of Units (SI) must be introduced; its prefixes and units form the weights and measures of hypercommunications. According to the FCC in 1996, "The SI is constructed from seven base units for independent physical quantities." The SI is the part of the metric system that has been adopted by the United States and most other countries. Its seven base units, its set of magnitude prefixes, and a number of SI derived units are useful in hypercommunications.

Table 3-1 lists the prefixes commonly used to indicate magnitude. This table is important in two ways. First, technological change in computer speeds, for example, is pushing the bottom of the table (the fractional portion) further down, to ever-smaller units. In 1980, computer time units were counted in milliseconds and microseconds, but now nanoseconds and picoseconds are increasingly common. This exponential shrinkage in processing times at the bottom of the table is mirrored at the top by exponential growth in storage, bandwidth (capacity), and speed. For example, portable storage devices have gone from floppy disks that held 360 kilobytes (kB) to 1.2 megabyte floppies, on up to 8.5 gigabytes (GB) for dual-layer single-sided DVDs or 17 GB for dual-layer double-sided DVDs. The jump from 360 kB to 8.5 GB is a difference of roughly 23,611 times, a spectacular annualized growth rate.
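
The storage comparison can be checked directly. The short Python sketch below (illustrative only; the eighteen-year span is an assumption, since the text does not date the two media) reproduces the figure of roughly 23,611 and converts it into an annualized growth rate:

    # Rough check of the floppy-to-DVD comparison, using decimal (SI) units.
    floppy_bytes = 360e3              # 360 kB floppy
    dvd_bytes = 8.5e9                 # 8.5 GB dual-layer DVD
    ratio = dvd_bytes / floppy_bytes
    print(round(ratio))               # about 23,611 times more storage
    assumed_years = 18                # assumed elapsed period (not given in the text)
    annual_growth = ratio ** (1 / assumed_years) - 1
    print(f"{annual_growth:.0%} per year")   # roughly 75% per year under this assumption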

Table 3-1: SI common metric prefixes
Multiplication Factor  Scientific Notation  Prefix  Symbol and Capitalization  US Name
1,000,000,000,000,000,000  10^18  exa  E  quintillion
1,000,000,000,000,000  10^15  peta  P  quadrillion
1,000,000,000,000  10^12  tera  T  trillion
1,000,000,000  10^9  giga  G  billion
1,000,000  10^6  mega  M  million
1,000  10^3  kilo  k  thousand
100  10^2  hecto  h  hundred
10  10^1  deka  da  ten
0.1  10^-1  deci  d  tenth
0.01  10^-2  centi  c  hundredth
0.001  10^-3  milli  m  thousandth
0.000001  10^-6  micro  µ  millionth
0.000000001  10^-9  nano  n  billionth
0.000000000001  10^-12  pico  p  trillionth
0.000000000000001  10^-15  femto  f  quadrillionth
0.000000000000000001  10^-18  atto  a  quintillionth

[Sources: GSA, FED-STD-1037C, 1996, pp. I-12, 13; Chicago Manual of Style, 1982, p. 393; The Random House Encyclopedia, p. 1449]

Two of the seven SI base units, seconds (s) and amperes (A), are central to electronic hypercommunications. They are joined by several SI derived units given in Table 3-2.

Table 3-2: Electrical and other SI derived units
Item Unit name Unit symbol Expression in other SI units
Frequency hertz Hz s^-1
Electric capacitance farad F C/V
Electric charge, quantity of electricity coulomb C A·s
Electric conductance siemens S A/V
Electric inductance henry H Wb/A
Electric potential, potential difference, electromotive force volt V W/A
Electric resistance ohm Ω V/A
Power, radiant flux watt W J/s

The entries in the table are a few essential electrical and electronic SI derived units, formed from combinations of other SI units. The most important quantity in Table 3-2 for hypercommunications is frequency, measured in hertz.

Other than SI units, the most important units in hypercommunication are the digital units of bit (b) and byte (B). A bit is a single binary digit; a byte is a group of bits that, in principle, may be of any length, although eight bits is now standard. An older term, baud (Bd), was a measure of transmission over analog telephone lines. The baud rate measures the number of times per second a communication line changes state, based upon an encoding scheme developed by Baudot for the French telegraph system in 1877 [Sheldon, 1998, p. 93; Bezar, 1996, p. 317]. When one bit is encoded per line change, unibit encoding is said to exist; two bits per line change is dibit encoding; three bits is tribit encoding. Encoding is the conversion of code as seen by the user into code as transmitted over conduit such as copper wire.
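
The arithmetic behind these encoding terms is simple: the bit rate equals the baud rate multiplied by the number of bits encoded per line change. A minimal Python sketch (the 2,400-baud line speed is a hypothetical example, not a figure from the text):

    # Bit rate = baud rate (line changes per second) x bits encoded per change.
    def bit_rate(baud, bits_per_change):
        return baud * bits_per_change

    line_baud = 2400                  # hypothetical analog line speed
    for name, bits in [("unibit", 1), ("dibit", 2), ("tribit", 3)]:
        print(name, bit_rate(line_baud, bits), "bps")
    # unibit 2400 bps, dibit 4800 bps, tribit 7200 bps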

3.2.1 Digitization and Digital

Bits, also known as binary digits, are expressed within a septet (a seven-bit byte) or an octet (an eight-bit byte). The bit is the fundamental unit of digital communications and the source of the term digital. While a more complete explanation is found in 4.2.1 (as a preface to explaining what hypercommunications are), a simple definition of digital is data or information created (source domain) or transmitted (signal domain) in bits.

When the term digitization is used, it refers to converting analog information into bits in either the source domain or the signal domain. An example of converting an analog source (such as a photograph) into a digital source is scanning the photograph's continuously varying features into a digital file. The greater the need to match the original hues and tones of the photograph precisely, the greater the number of bits needed. The signal domain, by contrast, merely refers to whether the signal transmitted is analog (such as continuously varying sound waves) or digital (characterized by discrete pulses rather than waves).
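
The precision point can be made concrete: each additional bit doubles the number of discrete levels available to approximate a continuously varying hue or tone. A minimal Python sketch (the bit depths shown are illustrative assumptions):

    # Distinct levels available when an analog value is digitized with n bits.
    for bits in (1, 8, 16, 24):
        print(f"{bits:2d} bits -> {2 ** bits:,} levels")
    #  1 bit  ->  2 levels (black or white)
    # 24 bits -> 16,777,216 levels (enough to approximate a color photograph closely)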

There are several reasons digitization is important in both the source and signal domains. Often, a failure to understand the difference between analog and digital stems from confusing signal and source.

Digital sources are easily manipulated and can carry a greater volume of information in a smaller physical space at a lower per-unit production cost. This is clearly seen when phonograph records are compared to CDs or DVDs. Analog media evolved from primitive magnetic recording disks to Edison's single-sided phonograph technology and then into two-sided analog records. Each new analog medium was capable of storing more information. However, with the advent of digital forms such as the CD and DVD, not only could more information be carried, it was more easily manipulated. Exponential growth in storage and transmission capacity, along with many media choices (optical or magnetic storage, for example), is available with digitized sources. A single DVD can carry 4.7 to 8.5 GB of data, or up to fourteen times what a CD can [Sheldon, 1998, p. 919]. CDs, in turn, carry many times the information of a 33 RPM double-sided stereo album. Perhaps even more important are the ability of many independent kinds of devices to use digital sources, the simplicity of changing or reproducing bits, and the ability of computers to process digital sources.

However, in the signal domain, digitization makes accuracy and the absence of errors in transmission more important than ever.

Analog transmission is sufficient for most voice transmissions, because a small inaccuracy in the received signal will not be detected by a listener. But accurate transmission is absolutely essential to data transmission, where a single changed bit could completely destroy the meaning of the original signal. [Nortel, Telephony 101, p. 61]

The source of confusion about bandwidth (covered in more detail in 4.2) lies partly in the origin of communications in radio, television, and telephone, where both source and signal were analog. Originally, the telephone and telegraph were the only methods of communication beyond voice, the written word, the megaphone, and the proverbial pair of tin cans joined by string. Bandwidth was a topic that interested only communications engineers, who saw that numerous efficiencies could be obtained by transmitting digital signals over long distances, even if conversion from and reconversion to analog form was required so analog devices could communicate. At that time, bandwidth had to do with the difference in hertz (Hz), or cycles per second, between the highest and lowest audible frequencies of human speech carried by a circuit. Bandwidth measured quality or technical characteristics and was not equated with speed.

Over time, digital signal transmission developed, although digitization of sources was still uncommon. In 1937, Alec Reeves of ITT (International Telephone and Telegraph) developed PCM (Pulse Code Modulation), a theory by which voice signals could be carried digitally. Development of the transistor in 1947 at Bell Labs and of integrated circuits at TI (Texas Instruments) and Fairchild permitted commercial testing of PCM to begin in 1956. In 1962, Bell of Ohio deployed PCM to carry traffic, and by the late 1960s even some rural telephone co-ops began to use PCM for interoffice trunk (transport level) circuits. One reason digital transmission is important to rural subscribers is that PCM replaced analog circuits whose amplifiers boosted noise and distortion along with the signal, a problem that grows with distance. Instead of having to shout through static and crosstalk, telephone calls carried by PCM over even greater distances gained substantial quality while carrier unit costs fell [REA, 1751H-403, 1.1-1.5].
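
The standard telephony form of PCM illustrates the arithmetic involved: the roughly 4 kHz voice band is sampled 8,000 times per second and each sample is encoded with 8 bits, yielding a 64 kbps digital voice channel. These are standard textbook figures rather than numbers taken from the cited REA bulletin; a worked sketch in Python:

    # Classic PCM voice channel arithmetic.
    voice_band_hz = 4000               # approximate voice bandwidth
    sample_rate = 2 * voice_band_hz    # sampling at twice the bandwidth: 8,000 samples/s
    bits_per_sample = 8
    channel_bps = sample_rate * bits_per_sample
    print(channel_bps)                 # 64,000 bits per second per digitized voice channel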

The advent of computers made digitization within the source domain necessary. However, the benefits of combining the computer with communications were less evident forty years ago. While it was economical to convert parts of the telephone network (within the telephone company's transport level) to digital transmission, there was no need to do so for the local loop (the wire from telephone subscribers to the telephone company). Consequently, when computer technology advanced faster than local loop technology, a vast amount of information from digital sources had to be converted into analog form to travel over the telephone network for data communications to occur. The reliance on digitization of source and signal is an important way the hypercommunications model differs from the interpersonal and mass models of communication. Practical issues in signal and source conversion are returned to in 4.2.

3.2.2 Conceptions of Networks

Within the economics literature, there is considerable debate about what networks represent. For this reason, four conceptions of networks are presented to give a broad overview of what networks are before the details of the engineering, communication, and economic perspectives appear later in Chapter 3. The four conceptions are: networks as fuzzy paradigms, macro networks, micro networks, and the debate in economics concerning network externalities.

The first conception of networks (and the one in common use) is fuzzy, representing almost everything a business or a person does. Used as jargon in this first way, a network may be a noun, verb, or adjective describing a business paradigm, a physical communications network, a human community, or human interaction. Therefore, one problem facing economists who study networks is how to define them, because the fuzzy conception is so broad that it has no special economic significance. For example, according to the Nortel Corporation in 1996:

Behind the buzzwords is a fundamental shift in the once isolated, and now converging, worlds of computing, networking, and communications technology. Having supported business from the back-office, computers and communications technology are now part of the fabric of a new internetworked economy. They directly affect the business' ability to operate, compete, and reach full potential. The network is more than a computer or a call center; the network is the business. [Nortel, 1996, p. 3]

The second and third conceptions of network depend on whether the term is used in a more general context (though not nearly as general as in the first example), or in a more specific context such as hypercommunication network. One approach to the general versus specific in economics is described by Economides (1996) who explains the difference between macro and micro networks:

There are two approaches and two strands of the literature in the analysis of network externalities. The first approach assumes that network externalities exist, and attempts to model their consequences. I call this the 'macro' approach. Conceptually, this approach is easier, and it has produced strong results. It was the predominant approach during the 80s. The second approach attempts to find the root cause of the network externalities. I call this the 'micro' approach. In Industrial Organization, it started with the analysis of mix-and-match models and has evolved to the analysis of price dispersion models. The 'micro' approach is harder, and in many ways more constrained, as it has to rely on the underlying microstructure. However, the 'micro' approach has a very significant benefit in defining the market structure. [Economides, 1996, p. 8]

A macro network does not depend on the structure or makeup of the underlying industry, while the micro approach does. There are microeconomic and macroeconomic aspects to discussions of both micro and macro networks. With this in mind, a macro network may be defined as one of two general kinds:

either a physically interconnected ubiquitous distribution system or an integrated system of switches or nodes and routes or channels with usage restrictions and enforceable interconnection agreements. The local distribution arrangement of electric, gas, and water utilities are generally thought to be the classic examples of the first definition, whereas the public-switched telecommunication network, Peter Huber's vision of a geodesic telecommunications network, and various intermodal transportation systems are examples of the second. [Lawton, 1997, p. 137]

A macro network can be a physically integrated distribution system or an integrated system of switches, nodes, and routes. A micro network can take either of these two forms, but also depends on the market structure, network architecture, physical and logical topology, and underlying good or service the network helps to move.

Indeed, in another era, Jenny defined a system as "any complex or organic unit of functionally related units" [Jenny, 1960, p. 165] so that yesterday's system is today's macro network. A system (like a network) can be anything from a person to a political party to the entire world economy or the solar system, depending on the point-of-view of the analyst. Sub-systems form successive levels in a layer-cake of systems just as sub-networks make up levels of a network of networks (or internetwork).

Hypercommunications relies on a particular micro network or "a ubiquitous and economically efficient set of switched communications flows" [Lawton, 1997, p. 137]. Crawford (1997) notes the importance of source and signal domain in networks when he states that:

for an analysis of the incidence of transmission costs for senders and receivers of information, it is best to consider allocation of both bandwidth and rights to information. This formulation is called a market for communication. [Crawford, 1997, p. 399]

The hypercommunication network is a communication network. The economics of this network are based on two factors: form and function. First, the function of the hypercommunications network is to transport bytes of voice and data. Thus, some of the economic constraints will be rooted in technical micro network engineering and design aspects of the telephone and computer networks that are converging into a single hypercommunication network. Second, the hypercommunication network fosters interpersonal and mass communication across a variety of message types or forms. Thus, some of the economic constraints will be rooted in the form of the network as judged by its contents. The result is a mix of communications form and engineering function.

In common use, the word architecture might appear to describe the physical form or engineering structure of a network. In the special case of communications networks, architecture has even been used to describe the economics of communication function (message content and nature of service provided) [MacKie-Mason, Shenker, and Varian, 1996]. In network engineering, the term architecture covers the relationship among all network elements (hardware, software, protocols, etc.) while the word topology connotes a physical (or logical) arrangement. Therefore, when architecture is used, it conveys a broader meaning than topology alone to include the network's uses, size, device relationships, physical arrangement, and connections.

A fourth network-related concept concerns when to adopt terms such as "network externality", "network effects", and "network synergies" in modeling the economics of networks. One source of confusion in network economics has been the use of the term externality as a catchall word for all synergies and effects of networks in the economics literature.

Externalities are an important reason network economics has become a specialized field within economics. Externalities describe both external economies and external diseconomies as Samuelson points out in 1976:

By definition, such externalities involve good and bad economic effects upon others resulting from one's own behavior. Since in the search for individual gain and well-being one person takes into account only private money costs and benefits as seen by him, there will then be a divergence between social costs and pecuniary-private costs. [Samuelson, 1976, p. 479, italics his]

Importantly, externalities also refer to how the participation of others affects us.

However, network economics is not merely the study of network externalities. Liebowitz and Margolis are critical of what they call the "careless use" of the term externality in the network economics literature. They prefer the broader term network effects because "Network effects should not properly be called network externalities unless the participants in the market fail to internalize these effects" [Liebowitz and Margolis, 1998, p. 1]. The term synergy is also in wide use because of its interactive connotation.

Network economics examines how network effects arise in markets and the influence of those effects on supply, demand, and welfare. However, familiarity with the form and function of the hypercommunication network is a necessary foundation to the application of the economics literature to the hypercommunication problem. The discussion will return to a review of network economics in section 3.7 after more is understood about the hypercommunication network.

Consider also an ongoing theme: the sometimes opposite philosophies of the engineer and the economist. Engineers include relationships among the physical parts of any general network (points and the lines that connect the points) when they consider a network's functional architecture. The actions of human users at either end of a hypercommunication network may receive only trivial consideration in engineering design; network engineering fundamentals are composed of hardware and software, not people and communication. Furthermore, the engineering of a network depends on a changing state of technical knowledge. Therefore, the architecture of the hypercommunication network is the product of technological change combined with how various micro networks interact with human communicators who behave under economic constraints. The engineering view of a network may end with the terminal or node, but the communications and economics views consider the sender and receiver as network elements too.

Finally, conceptions of networks are rooted in concrete examples. Hypercommunication network architectures are hybrids of several working scientific conceptions. First, there are the technical characteristics of the PSTN (Public Switched Telephone Network) that came from the engineers of Bell Labs (now Lucent Technologies) and the Baby Bells' research arm (Bellcore, now Telcordia); these are covered in section 3.3. Next, some technical characteristics of computer networks, as developed by computer scientists and electrical engineers, are covered in 3.4. These characteristics led to the evolution of six economic generations of computer networks, developed by professionals from the data communications field and described in 3.5. Finally, the multidisciplinary operations research networking literature (3.6) has combined all three kinds of networks into the technical foundation of hypercommunication networking. These four sections acquaint the reader with some elementary technical properties of the hypercommunications network. Once these properties have been established, the network economics literature can be applied to the resulting micro network of hypercommunications in section 3.7.

3.3 Technical Characteristics of the PSTN

Work in telephone engineering pioneered communication network architecture. This discussion will cover elementary telephone network fundamentals, focusing on a simplified view of the PSTN (Public Switched Telephone Network) as it traditionally worked to provide POTS (Plain Old Telephone Service). Today's urban PSTN (which provides enhanced services far beyond traditional POTS) is more complex than the model presented here, primarily because telephone networks have adopted computer network designs and operations research algorithms. Importantly, in certain rural areas of the U.S., the POTS PSTN is still the only network. A more detailed discussion of how the PSTN works is found in 4.3.2 (telephone infrastructure), 4.6 (the traditional telephony market), and 4.7 (the enhanced telecommunications market).

Two main aspects of the POTS PSTN will be considered here. First, the basic engineering elements and levels are identified and defined. Second, three groups of technical problems governing the reliability, QOS (Quality of Service), and operational costs and benefits are considered.

To understand the fundamentals of the telephone network, it is first necessary to understand its place in a more general telephone connecting system. Figure 3-1 [adapted from Beneš, 1965] shows the gross structure of the connecting system: local and remote terminals (telephones), the connecting network (a hierarchical layer of switches and transport equipment), and a control unit. As depicted here, the connecting network is a single entity, though over time it has become an increasingly efficient hierarchy of sub-networks.

Figure 3-1: Gross structure of the telephone connecting system

The telephone connecting system is "a physical communication system consisting of a set of terminals, control units which process requests for connection, and a connecting network through which the connections are effected" [Beneš, 1965, p. 7]. Technically, the network and the system are two different entities because two terminals (telephone stations) can be connected without a network. The control units provide the intelligence necessary to allow many telephones to be efficiently connected to what would otherwise be a dumb network.

The separation between a terminal and the network is behind the advantage of having a hierarchical network within a connecting system: dramatic efficiency. Suppose there were n terminals that needed to be connected together. Without a network, a direct connection would be required between every pair of the n terminals, or n x (n-1)/2 lines, to allow a total of n x (n-1) possible directed calls. When the telephone system was first invented, this is exactly what was done. According to Oslin, "Early telephones were leased in pairs; there were no exchanges" [Oslin, 1992, p. 221]. This historical fact introduces a fundamental property of hypercommunication networks: direct physical connection between every pair of terminals is neither necessary nor efficient for communication between all users to occur. A telephone network has the property that all n terminals can be connected for a cost dramatically below the cost of stringing sets of wires between all pairs of telephones.

The reduction in connections is accomplished two ways, first through line consolidation and second through routing hierarchies. Line consolidation simply means the "servicing of many phones by fewer telephone lines or circuits" [Bezar, 1995, p. 46]. The number of telephones that are connected to the network (terminals) exceeds the number of connections from those terminals to the connecting network. For example, a farm may have five telephone extensions on one telephone line (access line) connected to the connecting network. The number of terminals connected to the PSTN exceeds the number of access lines because of line consolidation.

Routing hierarchies provide even greater reductions in the total number of connections. In the telephone network hierarchy, a local central office (serving a particular set of telephone exchanges) is connected to a local interexchange network or to a toll network. Rather than string a pair of wires between every pair of Florida's 8,025,917 access lines to yield 32,207,667,832,486 paired connections, the market has established over 1200 telephone exchanges (nodes) through which any single telephone subscriber may reach any other subscriber. Furthermore, additional line consolidation takes place for every level of the network.
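
The Florida figure can be reproduced directly from the access-line count given above. A short Python sketch computes the number of direct pairwise connections a full mesh would require, which is precisely the burden the exchange hierarchy avoids:

    # Direct connections needed to wire every pair of n terminals together (full mesh).
    def full_mesh_connections(n):
        return n * (n - 1) // 2

    florida_access_lines = 8_025_917
    print(full_mesh_connections(florida_access_lines))   # 32,207,667,832,486 paired connections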

Four chief activities take place within the PSTN hierarchy: access, signaling, transport, and switching as depicted in Figure 3-2. Assume that a call originates from the bottom of Figure 3-2 at a telephone (perhaps in Key West) to a remote telephone (in Pensacola), at the top of the figure.

Figure 3-2: PSTN routing hierarchy

The telephone set (or terminal) may connect locally in two ways: lineside or trunkside. Lineside telephones are directly wired via a single access line to the local central office (as on the lower left-hand side). Trunkside telephones are first connected to a PBX (Private Branch Exchange or private telephone network typically located in a business) and then connected via a trunk line (a multiple access line) to the local exchange. The first level of a call occurs at the local access level where a dialtone signal is heard and numbers are dialed. The local central office switches and transports the call to a local interexchange (perhaps in Miami). The call then travels to a remote interexchange (possibly Tallahassee). Finally, the call reaches the Pensacola exchange's central office and then a telephone connected to it. The trip the call takes from the Key West exchange to the Pensacola exchange occurs at the transport level because it passes through the multi-level connecting network. The route of the call within the local and remote access levels does not depend on the entire network. Signaling is used at the local access level to establish the call, helps route the call through the transport level, and causes the remote telephone to ring. As telephone technology progressed, more and more calls were converted from analog to digital format over the transport level and then re-converted to analog format at the remote access level. Furthermore, many models of telephones became available (from numerous manufacturers) and the system was engineered so they could each connect into the system.

Such a network hierarchy has costs and revenues at each level of operation. At the access level, the cost of building and maintaining wires to and from each telephone in the local area served by each central office is incurred by the ILEC (Incumbent Local Exchange Carrier). Typically, a fixed monthly local access charge per access line (regardless of usage level) is assessed to cover access level costs because they tend to be highly fixed. The cost of transport lines and out-of-area switching must be covered by transport level charges. These charges have a fixed component, but vary primarily by distance and time. Finally, there is a cost to receiving calls from out-of-area which is compensated in the form of termination charges on a per call basis.
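
The layered charges can be illustrated with a simple decomposition. All of the rates below are hypothetical placeholders (the text supplies no figures); only the structure, a fixed monthly access charge plus distance- and time-sensitive transport charges plus per-call termination charges, follows the discussion above:

    # Hypothetical recovery of costs at each PSTN level for one subscriber's monthly usage.
    access_charge = 20.00        # fixed monthly charge per access line (hypothetical rate)
    transport_rate = 0.10        # per toll minute, distance and time sensitive (hypothetical)
    termination_rate = 0.02      # per call terminated out-of-area (hypothetical)

    toll_minutes = 120
    terminated_calls = 40
    transport_revenue = transport_rate * toll_minutes
    termination_revenue = termination_rate * terminated_calls
    print(access_charge, transport_revenue, termination_revenue)   # 20.0 12.0 0.8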

The connecting network is an "arrangement of switches and transmission links allowing a certain set of terminals to be connected together in various combinations, usually by disjoint chains" (paths) [Beneš, 1965, p. 53]. Two distinct activities are performed: nodes switch and links transport. Figure 3-3 depicts a stylized telephone connecting network. Stripped of the individual access lines, trunks, and telephones of Figure 3-2, Figure 3-3 serves as a simplified representation of the PSTN POTS network. Figure 3-3 shows only transport-level elements once the call has been received by a central office [Beneš, 1965].

Figure 3-3: Simultaneous representation of structure and condition in a telephone network at any level of the switching-transport level hierarchy

Points A, B, C, and D are CO nodes (switches), each representing a point where a large set of terminals (telephones) accesses the PSTN together. AB, AC, AD, BD, and CD are generally called branches of the network. Branches such as AB, BD, and DC (or AD and DC) are instead called links when, for some reason, a call cannot travel directly through branch AC. The numbers above each line refer to the condition (or traffic load) of the network. In Figure 3-3, zero represents an open crosspoint and one represents a closed crosspoint. Thus, at any instant, a telephone network's structure and the traffic load combine to allow a fixed number of available paths (branches and/or links) for a call to travel. The number of paths available depends on total network capacity, paths occupied by ongoing calls (combinatorial complexity), and the probability that ongoing calls will hang up (or new calls will occur) at any moment (randomness). For example, assuming no new calls other than those shown in Figure 3-3, a call from A to C could be completed directly using branch AC, or by using the path ABDC (the combination of links AB, BD, and DC).
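
The combination of structure and condition can be sketched as a small graph search. In the Python sketch below, the branch set follows Figure 3-3, while the busy branch is an assumed condition; the search enumerates the paths still available from A to C over idle branches:

    # Branches of the stylized network in Figure 3-3 (undirected).
    branches = {("A", "B"), ("A", "C"), ("A", "D"), ("B", "D"), ("C", "D")}
    busy = {("A", "D")}     # assumed condition: branch AD is occupied by an ongoing call

    def neighbors(node):
        for x, y in branches - busy:
            if node == x:
                yield y
            elif node == y:
                yield x

    def paths(src, dst, visited=()):
        # Enumerate loop-free paths from src to dst over idle branches.
        if src == dst:
            yield visited + (dst,)
            return
        for nxt in neighbors(src):
            if nxt not in visited + (src,):
                yield from paths(nxt, dst, visited + (src,))

    print(sorted(paths("A", "C"), key=len))
    # [('A', 'C'), ('A', 'B', 'D', 'C')]: the direct branch AC or the path ABDC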

In a more general telephone network, some specialized nodes may be for transport only (used only to connect other nodes together and not connected to terminals), or may be spots where transport connections arise or are terminated (inlets or outlets). Network engineering becomes immediately more difficult as the number of inlets, outlets, specialized nodes, links and branches expands. As networks grow in size, controlling units must be able to calculate all open links and branches, all free nodes, and all unbusy inlets and outlets.

The purpose of the controlling unit introduces a second point about telephone network engineering. The telephone system has three important properties: combinatorial complexity, definite geometrical (or other) structure, and randomness of many of the events that happen in the system. These properties are particularly important because a telephone call (unlike a telegraph message) requires a continuous open circuit from origin to destination for communication to occur. Therefore, underlying the network's design are mathematical algorithms regarding three kinds of representative problems: combinatorial, probabilistic, and variational. Each problem results in specific costs to each level of the network.

The combinatorial problem concerns the "packing problem" or set of possible paths a call may take given the set of available paths, nodes, inlets, and outlets. A finite number of possible paths exist to take a call through the transport level from the Key West CO to the Pensacola CO. Not all of these are necessarily the "best" path for telephone network efficiency. Additionally, each area's access level has less transport capacity than the number of access lines. For instance, Key West subscribers are limited by the number of total outgoing transport lines, and Pensacola subscribers are limited by the number of incoming transport lines. Line consolidation and routing hierarchies occur throughout the connecting system, inside and outside the connecting network.

An important network design feature that establishes constraints for the packing problem concerns how line consolidation occurs in the network. One kind of switching network uses space division switching (where each conversation has a separate wire path) while another kind of switching network uses time division switching (where each conversation has a separate time slot on a shared wire path). Similarly, the transport networks are of two kinds. Space division switching's counterpart in transmission is FDM (Frequency Division Multiplexing) where each conversation occupies a different frequency slot. Time division switching's transmission counterpart is TDM (Time Division Multiplexing) [Hill Associates, 1998, p. 5.6]. By converting voice conversations into digital form, more conversations can be packed into a given switch using time division switching while similarly more conversations can be transported digitally using TDM.
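
Time division multiplexing's packing gain is easy to quantify for the standard North American T1 carrier, which interleaves 24 digitized voice channels on a single wire pair. The T1 example and its figures are standard ones, not drawn from the text:

    # TDM packing: the North American T1 (DS1) carrier.
    channel_bps = 64_000              # one PCM voice channel
    channels = 24                     # time slots interleaved per frame
    framing_bps = 8_000               # one framing bit per frame, 8,000 frames per second
    t1_bps = channels * channel_bps + framing_bps
    print(t1_bps)                     # 1,544,000 bps: 24 conversations on one wire pair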

The core probabilistic problem concerns traffic circulation (probabilities that requests for service, dialing, call completion, or hang-up will occur from terminals under instantaneously changing conditions). The most probable behavior of all users before, during, and immediately after a call has to be calculated for all possible links between nodes. Estimates of circuit availability within transport and access levels are obtained by using probability distributions of typical statistical traffic patterns given average and peak calling. Those estimates, in turn, are used as input for variational problems. Conversions from analog to digital and back again also use probability distributions and sampling to calibrate equipment for the optimal balance of quality and efficiency.
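
One classical tool for this kind of probabilistic problem is the Erlang B formula, which gives the probability that a call arriving at a trunk group finds every trunk busy. The formula is not named in the text but is standard in telephone traffic engineering; a minimal Python sketch with assumed traffic figures:

    # Erlang B blocking probability via the standard recurrence (blocked calls cleared).
    def erlang_b(offered_erlangs, trunks):
        b = 1.0
        for m in range(1, trunks + 1):
            b = offered_erlangs * b / (m + offered_erlangs * b)
        return b

    # Example: 10 erlangs of offered traffic on a 15-trunk interoffice group.
    print(f"{erlang_b(10, 15):.1%}")   # about 3.6% of call attempts blocked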

The core variational problem concerns optimal routing through the network, given the solution sets to the combinatorial and probabilistic problems. For example, a Key West to Pensacola call could possibly be routed through a variety of intermediate hops to reach International Falls, Minnesota, and then go to Tallahassee (instead of Orlando and then Tallahassee). However, because of the distance sensitivity and fixed cost of hopping the call from one interexchange node to another, that route would not be the cheapest. Variational problem parameters could exclude some of these possible routes while leaving others available (perhaps only at peak times) to minimize transport costs while simultaneously maximizing call capacity.
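
The variational problem of choosing a least-cost route over whatever links the combinatorial and probabilistic solutions leave available can be sketched as a shortest-path search. In the Python sketch below, the link costs and the simplified set of exchanges are hypothetical; only the idea of cost-minimizing routing comes from the text:

    import heapq

    def cheapest_route(links, src, dst):
        # Dijkstra's algorithm over per-hop transport costs.
        frontier = [(0.0, src, [src])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nxt, hop_cost in links.get(node, {}).items():
                if nxt not in visited:
                    heapq.heappush(frontier, (cost + hop_cost, nxt, path + [nxt]))
        return None

    # Hypothetical interexchange links with distance-sensitive per-hop costs.
    links = {
        "Key West": {"Miami": 1.0},
        "Miami": {"Orlando": 2.0, "Tallahassee": 4.5},
        "Orlando": {"Tallahassee": 2.0},
        "Tallahassee": {"Pensacola": 2.5},
    }
    print(cheapest_route(links, "Key West", "Pensacola"))
    # (7.5, ['Key West', 'Miami', 'Orlando', 'Tallahassee', 'Pensacola'])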

Beneš contrasted the connecting network and the controlling unit of the system:

The connecting network, in contrast to the control unit, determines which calls can be in progress, rather than how fast they can be put up. Its configuration determines what combinations of terminals can be connected simultaneously together. [Beneš, 1965, p. 15]

The network design determines the sets of combinatorial, probabilistic, and variational problems that the controlling unit will attempt to optimize with decision rules.

The network fundamentals presented so far introduce the third and final point about the traditional telephone network: economic issues. As the PSTN evolved, the economics of the network began to depend more on the historic structure and regulation of the telephone business than on the engineering of the network. Vogelsang and Woroch call the current setting "a complex dance of technology, regulation, and competition" [Vogelsang and Woroch, 1998, p. 1]. Historically, it was argued that the telephone network architecture created inescapable economic constraints, so that the telephone system was a natural monopoly.

Two interrelated issues are central to analyzing whether fundamental telephone network technology caused a particular market structure or vice versa. One issue is whether a system orientation or a network orientation is optimal. Another issue concerns whether telephone architecture inherently creates powerful economies of system, scope, and scale defining the traditional POTS PSTN as a natural monopoly. An important underlying concern is the impact of constantly changing technology on the equation. The issue of whether the telephone network is (or ever was) a natural monopoly is returned to in 5.4.

A few details of PSTN technical efficiency summarize this section. First, telephone networks are end-to-end open circuits. Once established, a wireline telephone call takes a dedicated circuit-switched path from caller to recipient. The call is subject to congestion only during call setup. Second, the engineering philosophy behind the traditional telephone network evolved from emphasizing a unified single system into a hierarchy of networks. Access lines that link both the calling and answering party to the network are known as the "local loop" because they connect individual users to the edge of the network at CO nodes. Third, calls are switched through nodes and transported over links that take advantage of line consolidation, digitization, routing hierarchies, and other engineering efforts to maximize call volume while minimizing costs.

3.4 Technical Characteristics of Computer Networks

A second influence on hypercommunication network design comes from work in computer network engineering and design. Computer engineers design networks based on the uses, components, scales, communications distance, network architectures (logical and physical relationships among elements), topologies (logical and physical arrangements), speed, and reliability of data networks. Not surprisingly, the broad and fast-changing field of computer networking makes a technical summary of the economic underpinnings of networking technologies an impossible task.

Nonetheless, this section will attempt that task. The specialized economics of hypercommunication networks differ in important ways from the broader field of network economics. To see this point (and to understand that some economists have misapplied macro network models to the hypercommunication case), some technical fundamentals of computer networking are needed. In this way, some idea of the economic benefits and costs (especially positive and negative externalities) of computer networks can be seen. In spite of anecdotal evidence to the contrary, computer networks are a production technology with limits and constraints. Due to the fast-changing and complicated nature of data communication, however, the economic ramifications of computer network technology are harder to grasp than those of the telephone network.

Several technical characteristics of computer networks are presented in section 3.4 before the introduction of six economic generations of computer networks in section 3.5. Along with a comparison to the POTS network, four components of computer networks are outlined in 3.4.1. Common uses of data communication and the different technical services performed by computer networks are the topics of 3.4.2. Four technical objectives (each with important economic ramifications stemming from underlying combinatorial, probabilistic, and variational constraints) that face computer network engineers are covered in 3.4.3. Finally, in 3.4.4, the hierarchical seven-layer OSI protocol stack (representing the sub-tasks of network communication) is sketched to unify the introductory points under one model.

3.4.1 Four Components Distinguish Computer Networks from the PSTN

Due to its complexity, the computer terminal has become a part of the network itself, in contrast to the traditional telephone. Today, individual PCs have more processing power, speed, storage, and memory than the largest computer of thirty years ago. In contrast, the telephone has not kept up with the computer's evolving complexity and remains a simple device. However, the telephone transport network is becoming increasingly similar to an advanced computer network because it, too, is composed of computers. Another fundamental difference between the computer and telephone network is that while all telephones function essentially equivalently, not all computers do. The many brands and models of computers are differentiated by changing technologies such as memory, processing speed, and operating system. However, with each new breakthrough in computer technology, the task of data communication becomes more complicated because new protocols are needed to enable different kinds of computers to communicate. In addition to dramatic technological changes in the terminals at the ends of a computer network, the other important difference from POTS lies in switching. The telephone transport network has traditionally relied on physical circuit switching, while computer networks rely on packet switching and virtual circuit switching.

Four components (hardware, software, protocols, and conduit) distinguish increasingly technologically advanced generations of computer networks from their telephone counterparts. Each distinguishing component has itself evolved into a complex structure as computers became increasingly sophisticated compared to the telephone. Hardware includes computers and peripheral equipment that collect, analyze, print, display, store, forward, and communicate data. While most hardware devices are outside the scope of this work, some hardware such as modems (4.2.2), enhanced telecommunications CPE (4.7.1), and private data network CPE (4.8.1) are covered later.

Software represents the operating systems, programs, and applications that ensure hardware will function alone and in concert with the network. A particular set of hardware and software configured to operate together is known as a platform. Protocols may be thought of as standards or rules governing hardware-software interaction. More precisely, protocols are a formal set of conventions that govern format and control of data communication interaction among differing platforms and hardware devices [GSA Federal Standard 1037C, 1996, p. P-25]. Software is typically proprietary and licensed for use by the software developing company. Protocols are typically non-proprietary but are used to constrain software development within standards governing the many tasks of data communication and computer operation. Conduit represents the transmission media that tie hardware to the network. Conduit includes guided media, known also as wireline conduit (such as wire and cable) along with unguided media, known also as wireless conduit (such as radio and microwave). The choices of conduit available to rural areas are often limited due to physical limitations in transmission distance, weather, and electromagnetic factors.

3.4.2 Computer Network Uses and Service Primitives

Computer networks may be classified according to their uses and service primitives. Common uses include electronic messaging (e-mail); sharing resources (CPUs, printers, local and long-haul conduit, databases, files, applications); and the transfer, reduction, and analysis of information (file transfer, automated reporting and controls). Businesses use networks for collaborative BackOffice functions (manufacturing, transportation, inventory, accounting, payroll, and administration) and for customized FrontOffice uses (sales, ordering, marketing, and direct interaction with customers and stakeholders). In production agriculture, networks of remote sensing equipment may be used to monitor field conditions and even to operate irrigation equipment. Agribusinesses use computers to monitor prices, weather, livestock herd statistics, records about individual trees, and a host of other variables.

The development of the Internet has fostered uses such as real-time interaction (broadcast text, chat, voice, video) and multimedia to support online entertainment, education, and shopping. From an economic standpoint, a particular computer network may be application-blind or application-aware depending on whether the network is designed according to its use or the use designed according to the network [MacKie-Mason, Shenker, and Varian, 1996, p. 2]. IBM coined the term e-business to describe a new business model that depends heavily on using the latest generations of computer networks. See 4.9.9 for further discussion of e-agribusiness and e-commerce.

From a technical or engineering standpoint, the application-aware uses are based on application-blind service primitives. The types of services (service primitives) available in a network are another important technical feature. The distinction among services has become especially important to recent network optimization. Primarily, the focus is on the difference between connection-oriented and connectionless services. Protocols supporting different kinds of services (techniques of communication) have evolved in the data link, network, and transport layers, which treat data differently as it flows through the protocol stack and over conduit. These differences are important to summarize because each can create different economic repercussions, especially with regard to pricing.

Each service type uses a different combination of protocols, stack layers, data units, and network optimization decision rules. Depending on the service type, combinatorial, probabilistic, and variational constraints in each of the four technical network objectives may tighten or relax. Furthermore, larger networks carry a blend of traffic from several service types, complicating network engineering. The service types are shown in tree form in Figure 3-4 [Jain, 1999, p. 1B-13].

Figure 3-4: Types of services or techniques used to move data from one destination to another through a computer network

A connection-oriented protocol sets up a logical end-to-end path (virtual circuit) to the remote host through the network before streaming data is sent. Some setup time is needed to establish the connection through the entire network. Congestion in the network can prevent the establishment of a path. Data are sent in packets (segments) that do not need to carry overhead (extra address bit information) through a virtual circuit from sender to receiver, allowing more data per packet than with a connectionless service.

Virtual circuits differ from telephone network circuits (voice pipes) because they are logical paths through packet switched networks rather than physical paths through circuit switched networks. Hence, multiple virtual circuits are able to share a single physical path. Virtual circuits can be permanent PVCs or switched SVCs. A SVC is a temporary logical path through the packet switched network while a PVC is a permanent logical path. ATM and frame relay are connection-oriented services used to connect many WANs, distributed networks, and inter-networks. The Internet's TCP protocol is a connection-oriented protocol as well. These topics are covered in more detail in 4.8 and 4.9.

Connectionless (datagram) traffic does not need to set up a path to the remote host in order to transmit bursty data. Instead, the sender creates packets that contain both the data to be sent and the address of the recipient. Hence, in relation to connection-oriented packets, connectionless packets contain "overhead" because the address data crowd out data that would otherwise have been transmitted in the same unit. The connectionless orientation frees network paths so everyone may continuously use them because each packet is considered independent of those before or after. The network sends each packet in a series of hops from one routing point (intermediate node) to another and on to the final destination based on network layer routing protocols.

Connection-oriented traffic is suited for real-time applications, while connectionless traffic is not. However, a more intelligent network is required to transmit connection-oriented traffic. Within both connection-oriented and connectionless services, reliability further distinguishes service traffic. Reliable packets are those that will be automatically retransmitted if lost. Typically, reliable connection-oriented traffic uses sequence numbers to prevent out-of-order or duplicated packets, but a byte stream method also can ensure reliability. Reliable connectionless traffic instead relies on the return of a positive acknowledgement and, lacking one, retransmits missing packets.
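
The overhead point can be quantified by holding the payload fixed. In the illustrative Python sketch below, a connectionless datagram carries a full 20-byte IPv4 address header on every packet, while a connection-oriented ATM cell carries only a 5-byte header holding a short virtual circuit identifier; the common 48-byte payload is an assumption made for the comparison:

    # Share of each transmitted unit that is user data rather than header overhead.
    def payload_fraction(payload_bytes, header_bytes):
        return payload_bytes / (payload_bytes + header_bytes)

    payload = 48    # assumed payload size, chosen to match the ATM cell payload
    print(f"connectionless datagram (20-byte header): {payload_fraction(payload, 20):.1%}")  # 70.6%
    print(f"connection-oriented cell (5-byte header):  {payload_fraction(payload, 5):.1%}")  # 90.6%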

3.4.3 Technical Network Objectives

Engineers who design computer communication networks face four dynamic and simultaneous technical objectives. The core engineering problems are to simultaneously and dynamically map each technical objective onto the four network components (hardware, software, protocol, and conduit) at the sending, intermediate, and receiving ends of computer communication. Each technical objective has the familiar set of combinatorial, variational, and probabilistic constraints introduced in the discussion of the telephone network.

The first technical objective is sending computer rate control. The sending computer tries to maximize the data rate (in bits per second) of the data it sends. The sending data rate itself is a function of the conduit's bandwidth (capacity), the conduit's signal-to-noise ratio, and the encoding process. Therefore, both the sending computer and the network set the sending computer's data rate using control parameters based on their own combinatorial, probabilistic, and variational constraints and on conduit capacity.

The second technical objective is signal modulation rate maximization. Conduit design and coding schemes (based on known physical laws such as the Nyquist Theorem and Shannon's Law) are used to maximize the rate at which the conduit transmits the data signal. Bits of data must be converted into pulses to travel over conduit. However, conduit can carry a limited amount of data sent as a signal at a maximum modulation rate (baud rate). To transmit data as high and low voltage electric pulses that can be carried by the conduit, bits are encoded by the sending computer into a signal sent at a particular number of cycles per second, or baud. More precisely, the sending computer (or other hardware) converts data or text frames into bits and then into signals (electrical pulses sent at a certain number of cycles per second).

The capacity of a particular type of conduit, its signal-noise ratio, and the distance it can carry a signal are within the domain of electrical engineering. Constraints on speed such as attenuation, capacitance, delay distortion, and noise depend on the length, shielding, and type of conduit. The physical electromagnetic limitations of the conduit have changed with new wiring (and wireless) technologies so that they are less restrictive than they once were. However, even today, this second technical objective is often the most binding constraint of the network communications problem, especially in rural areas.

The third technical objective (actually a group of objectives) is network optimization. Except in the direct point-to-point case (where a sender is linked to a receiver over one uninterrupted link), computer communication requires intermediate computers, hardware, and conduit paths and links between the two computers that are shared with other users. Network optimization is the simultaneous balance of two primary performance objectives: minimization of delay and maximization of throughput rate. This balance is pursued from the perspective of the overall network and of each individual connection (or message). Every network path (data pipeline) between two computers (along with intermediate hardware devices) is subject to delay and throughput constraints. On an intuitive level, Comer and Droms (1999) suggest that delay depends on the data pipeline's length, while throughput depends on the pipeline's width. Both depend on the number of intermediate nodes or hops.

Delay, the time it takes a bit to cross the network, is measured in seconds or milliseconds (ms). Not counting operator delay (delay due to human behavior), delay may be of three types: propagation, switching, and queuing. Propagation and switching delays are fixed (do not depend on the level of use), while queuing delay is related to throughput. When the network is idle, queuing delay is zero. However, queuing delay rises as the network load (ratio of throughput to capacity) rises.
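
The qualitative relationship between load and queuing delay can be illustrated with the standard single-server (M/M/1) queuing formula. The chapter does not specify a queuing model, so the formula below is a textbook assumption used only to show how delay grows sharply as load approaches capacity.

    def queuing_delay(load, service_time=1.0):
        # M/M/1-style queuing delay (an illustrative assumption):
        # delay grows as service_time * load / (1 - load) as load (throughput/capacity) nears 1.
        if not 0 <= load < 1:
            raise ValueError("load must be at least 0 and strictly less than 1")
        return service_time * load / (1 - load)

    for load in (0.0, 0.5, 0.8, 0.95):
        print(f"load {load:.2f} -> queuing delay {queuing_delay(load):.2f} service times")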

Throughput, a system capacity constraint, often popularly synonymous with bandwidth, is measured in bits per second (bps). The physical carrying capacity of conduit establishes a ceiling rate, an overall throughput that cannot be exceeded even under the best circumstances. Effective throughput recognizes there are physical capacities to intermediate hardware devices (hubs, routers, switches, and gateways) which produce a second, lower ceiling for best case transmissions between two particular points. The effective throughput actually achieved in a particular transmission depends on the physical layout of the network, data coding (data rate) and network rate control algorithms in addition to physical link protocols and conduit capacity.

Utilization may be thought of as the product of delay and throughput, or the total amount of data in transit. A highly utilized network is said to be congested, while a node with a high queuing delay is known as a bottleneck. Network optimization tries to lower fixed delay and increase capacity through design. Given a particular design, network optimization uses network rate control (traffic shaping) to monitor incoming traffic, dropping or rejecting packets that exceed effective throughput. Traffic shaping's goal is to use estimates of combinatorial, probabilistic, and variational properties in utilization patterns to maximize efficiency (the number of successful messages) and speed, while minimizing costs and avoiding congestion and bottlenecks. Additionally, network optimization (through the simultaneous interaction of design and traffic shaping) seeks to lower utilization to avoid global congestion and local congestion at bottlenecks.
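
One widely used traffic-shaping mechanism is the token bucket, sketched below in Python. The chapter does not name a particular shaping algorithm, so the token bucket is offered only as a representative example; the sustained rate, burst size, and packet sizes are hypothetical.

    class TokenBucket:
        # Token-bucket traffic shaper: admit packets while tokens last, drop or queue the excess.
        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps          # sustained (average) rate the shaper enforces
            self.capacity = burst_bits    # largest burst admitted at once
            self.tokens = burst_bits
            self.last_time = 0.0

        def admit(self, packet_bits, now):
            # Refill tokens in proportion to elapsed time, up to the burst capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last_time) * self.rate)
            self.last_time = now
            if packet_bits <= self.tokens:
                self.tokens -= packet_bits
                return True               # packet conforms: forward it
            return False                  # packet exceeds effective throughput: drop or queue it

    shaper = TokenBucket(rate_bps=64_000, burst_bits=16_000)
    for t, size in [(0.0, 12_000), (0.05, 12_000), (0.5, 12_000)]:
        print(t, shaper.admit(size, t))   # the middle packet arrives too soon and is rejected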

In computer network optimization, routing, packing, and the range of combinatorial, probabilistic, and variational problems similar to those of the telephone network are encountered. Intermediate hardware devices are used to enforce solutions to network optimization. Furthermore, computer network design protocols enable intermediate devices and computers to communicate across platforms, software, and continents. Hence, in addition to optimizing the physical or logical topology of the network, computer network engineers are concerned with the compatible interaction of hardware and software.

The fourth technical objective is flow control or the receiving computer's need to prevent the incoming pulses from overwhelming its ability to decode those pulses into bits. This can be because the sending computer's send rate exceeds the receiving computer's receive rate or because the sending application is faster than the receiving application. The receiving computer's objective is to avoid becoming overwhelmed by too fast or large a flow. In addition to such flow control activities, the receiving computer may need to acknowledge receipt or request error correction.
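
A credit-based (receiver window) scheme is one common way to meet this objective. The Python sketch below is a simplified illustration rather than a description of any particular protocol; the window size, receive rate, and payload names are arbitrary.

    def send_with_window(payloads, receive_rate, window):
        # Credit-based flow control sketch: the sender may have at most 'window'
        # unprocessed units outstanding; the receiver grants new credit as it catches up.
        outstanding, delivered = [], []
        for item in payloads:
            while len(outstanding) >= window:                  # window full: sender must pause
                delivered.extend(outstanding[:receive_rate])   # receiver drains a few units...
                outstanding = outstanding[receive_rate:]       # ...and advertises fresh credit
            outstanding.append(item)                           # within the window: safe to transmit
        delivered.extend(outstanding)
        return delivered

    print(send_with_window([f"block {i}" for i in range(8)], receive_rate=2, window=3))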

These four objectives are more difficult to achieve because there are many manufacturers of hardware and conduit. Additionally, software companies are typically not in the hardware or conduit business. Therefore, except in their earliest history, computer networks have not been designed as part of a uniform, centrally controlled universal system such as the original PSTN Bell system. Much original research on computer networks did come from institutions such as AT&T's Bell Labs, the U.S. Defense Department, IBM, Hewlett-Packard, and Intel where the general systems approach was emphasized. However, as networks and computing power grew, data communication evolved from a centrally planned systems approach into a sub-task approach, based on an innovative marketplace comprised of a mix of small and large vendors.

3.4.4 OSI Model of Hierarchical Networking Sub-tasks

Another important technical characteristic of computer network design is the OSI (Open Systems Interconnection) model, around which standards and protocols have been developed to foster compatible data communications among software and hardware products offered by competing vendors. To reduce the inherent complexity of studying data communication across a network, the International Organization for Standardization (ISO) divided data communication into seven layers (each representing a distinct sub-task), known as the OSI reference model. Scientists, vendors, and users formed committees to propose and establish protocols and standards for each layer. For each layer, more than one standard exists because different networking needs could best be achieved by an "open system" rather than a closed proprietary system.

Standardization helped to prevent market failure that would have resulted from an otherwise inevitable delay as software and hardware vendors each tried to develop a single uniform way of networking. According to Comer and Droms (1999), a protocol is an agreement about communication that specifies message format, message semantics, rules for exchange, and procedures for handling problems. Without protocols (and even with them), messages can become lost, corrupted, destroyed, duplicated, or delivered out of order. Furthermore, Comer and Droms argue that protocols help the network distinguish between multiple computers on a network, multiple applications on a computer, and multiple copies of a single application on a computer. More detail about protocols is given in 4.5.2 and 4.9.6.

The OSI reference model has been criticized as a dated, theoretical model that took various standards bodies a decade to develop. Some associated standards are theoretical in that they are not yet implemented or never will be. Markets seem capable of implementing the most useful protocols, while discarding others. The seven layered OSI protocol stack is meant as a reference model of the sub-tasks involved in networked computer communications, and not as a description of reality or a definitive taxonomy.

Figure 3-5 shows the OSI model's hierarchical protocol stack. Data from local computer software applications are handed off down the layers until they are transmitted to the remote computer. At the remote end, the process is reversed sequentially so the remote user's software receives the data [Jain, 1999; Covill, 1998; Sheldon, 1998; Socolofsky and Kale, 1991].

Figure 3-5: OSI model of networking layers

The process of data communication begins in the OSI model when a software user at a local machine sends a communication to a user (of the same or different software) at a remote machine. The local software application communicates downward to the application layer (layer seven) of the protocol stack in units of data called messages. Each layer communicates with the layer directly below it, creating a new data unit to be passed on. Therefore, the message is converted, in turn, into a segment, cell, packet, or frame. At layer one, frames are turned into bits. Bits are then encoded into electrical pulses and sent as signals over conduit (bottom of Figure 3-5).

Upon receipt at the remote end, the pulses are decoded into bits, frames, and packets as they are passed from layer one on up to layer seven. At that stage, the application layer (layer seven) sends a message to the software application used by the remote user and data communication has occurred. Importantly, the receipt of the message by the remote device is the technical engineering standard for successful data communication (machine-machine communication). The economic importance of that communication may or may not depend on whether a human operator at the remote end can process the data into useful information. In some cases, the remote computer processes and reduces the incoming data into useful information before a human operator sees it. In other cases, the efficiency and effectiveness of human-human communication is at risk if the sender causes the local application to send too much or too little data for human-human communication to occur.

At each layer in Figure 3-5, the technical sub-tasks accomplished in that layer are outlined, followed in brackets with commonly implemented protocols that operate in that layer. The transport-session layer boundary differentiates the upper layers (application layers) from the lower layers (the data transport layers) as shown on the left. On the sending end, each layer uses a different unit (envelope) to carry the original data (and the overhead it and other units add) to the next layer. On the receiving end, the overhead corresponding to each layer is sequentially stripped off to provide guidance on how that layer should handle the remaining data and overhead, until the data alone passes from the application layer to the software. In this sense, each layer in the receiving computer's stack gets the same data that was sent by the corresponding layer in the sending computer's stack.
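
The envelope idea can be sketched in a few lines of Python. The layer names follow Figure 3-5, but the header contents are placeholders, since real headers differ by protocol; the sketch only shows overhead being added on the way down and stripped on the way up.

    LAYERS = ["application", "presentation", "session", "transport", "network", "data link"]

    def send(data):
        # Going down the stack: each layer wraps the unit it receives in its own envelope.
        unit = data
        for layer in LAYERS:
            unit = {"header": f"{layer}-overhead", "payload": unit}
        return unit                      # the physical layer would now encode this into signals

    def receive(unit):
        # Going up the stack: each layer strips its own overhead and hands the rest upward.
        for layer in reversed(LAYERS):
            assert unit["header"] == f"{layer}-overhead"
            unit = unit["payload"]
        return unit                      # the same data the sending application handed down

    frame = send("purchase order #42")
    print(receive(frame))                # -> purchase order #42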

The lower layers (data link, network, and transport) help engineers with two technical objectives: maximizing the bits sent by the sending computer while preventing the incoming flow from swamping the receiving computer. It is noteworthy that the establishment of protocols helps to conserve engineering talent because an engineering study of each computer at every point on a network is unnecessary. Lower level (data transport) protocols accomplish flow control, error checking, and the grouping of bits into addressable envelopes (frames, packets).

The objective of overall network optimization does not map to any single OSI layer. This flexibility can bring important externalities to the network. For example, without any market exchange, an advance in a data compression protocol (accomplished in layer six) can enhance overall network functionality by effectively reducing the number of bits that need to be encoded for transmission in layer one. This in turn reduces the bits, frames, and packets transported downward by layers five through two at the local machine and upwards from layers one to six at the remote machine. The protocol stack thus creates economic complements at one layer whose benefits multiply through the network. This kind of interrelationship is one of the hallmarks of the more recent network generations.
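
A small Python example suggests the scale of the effect. Here zlib stands in for whatever presentation-layer compression a real stack might use, and the repetitive sample message is invented; the point is simply that fewer bits reach the lower layers.

    import zlib

    message = b"shipment status: on time; " * 40      # repetitive business data compresses well
    compressed = zlib.compress(message)

    print(len(message) * 8, "bits before compression")
    print(len(compressed) * 8, "bits handed down to the lower layers")
    assert zlib.decompress(compressed) == message     # the remote stack recovers identical data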

Each network generation uses the OSI model differently in attempting to simultaneously achieve the four technical networking objectives. Those differences will be briefly explored during the discussion of the generations beginning next in 3.5. Very generally, the layers map onto the four network engineering objectives as follows. The physical layer directly represents the objective of maximizing the rate at which the signal travels through the conduit, the data rate. Results from electrical engineering were used to establish protocols to code bits (data) into electrical pulses (signals) to be transmitted through the conduit. Based upon the conduit's physical capacity, coding and signal technologies work with physical layer protocols to help speed up the data rate. The data rate objective is a function of bandwidth (conduit carrying capacity), the signal-noise ratio, and encoding.

3.5 Six Economic Generations of Computer Networks

Six economic generations of computer networks reduce the crazy quilt of details from section 3.4 (four network components, four engineering objectives, seven OSI layers, service primitives, and the many uses of computer networks) into six eras. Each economic generation broadly summarizes the technologically evolving underpinnings of computer networking. It is important to note that the computer networks actually found in Florida agribusiness are hybrids of more than one generation, with countless inter- and intra-generational varieties. Readers who require a more technical (and engineering-oriented) treatment may consult Socolofsky and Kale (1991), Sheldon (1998), Comer and Droms (1999), or a variety of corporate sources such as Cisco (1999), Novell (1999), or Lucent-Ascend (1999).

Babbage and others are credited with early ideas that resulted in the first computer, the post-WWII ENIAC. Mainframes, especially those developed at IBM, dominated the computing world until 1971, when Marcian Hoff invented the microprocessor at Intel. In 1975, the same year Sony launched the Betamax video recording standard, Bill Gates showed that the BASIC programming language could operate on a microprocessor. By 1980, Intel was able to place 30,000 transistors on a chip that ran far more rapidly than the original microprocessors did. Around the same time, IBM entered the PC market, hoping to capitalize on the market share it enjoyed in mainframes, and chose Intel and Microsoft as vendors. IBM made what many would call the worst business mistake of all time when it failed to obtain exclusive rights to Microsoft's software or Intel's hardware.

There are six inexact economic generations of computer networking: time-sharing with dumb terminals, centralized networks, early peer-to-peer LANs and later client-server LANs, client-server WANs, distributed networks, and inter-networks. Each generation represents a simplified model of a complicated technical network. Additionally, while each generation optimizes combinatorial, probabilistic, and variational problems, these classes of network optimization problems are modeled differently than in the telephone network. For example, computer networks can choose "store-and-forward" message or packet switching algorithms instead of the real-time, always connected circuit-based switching algorithms of the telephone network. Data, rather than conversations, are transmitted over a computer network. Therefore, technical constraints on a given sized computer network are inherently more flexible than on a similarly sized telephone network. The constraints and optimization objectives each vary according to the size and generation of computer network under discussion, its users, and the specific type of data transmitted.

A variety of network scales, architectures, topologies and communications distances traversed by computer networks are necessarily included in a single economic generation. Within each generation, a variety of software, hardware, and conduit have been implemented to loosen network constraints. The discussion will go from simplest to most complex and earliest to most recent. Two points should be noted. First, there can be considerable variation within generations due to technological innovation. Second, variation between generations can be subtler than portrayed.

Figure 3-6 makes use of a product life cycle to frame the rough historical era of each generation. Unlike animal generations, however, the gestation period is shortening at an increasing rate as a function of technological change. All six generations share three characteristics. First, each generation does not die, but becomes part of its successor. Thus, while dumb-terminal, centralized, time-share networks are ancient history (relative to computer history at least), a popular modern form of inter-network uses outsourced application servers in a thin client star network.

However, this introduces the second point, that the quality and characteristics of networking goods are not necessarily comparable through time. Today's thin client may not have disk space or run applications locally, but is capable of displaying high-level graphics that would have swamped the CRT screens and 300 baud modems of three decades ago. Third, the s-curves in Figure 3-6 depend on successive generations, each with new physical limits [Afuah, 1998, pp. 120-125]. New physical limits ushered in by a specific generation are often backward compatible with preceding generations.

Figure 3-6: Six economic generations of computer networks plotted as life cycles in time

3.5.1 Time-Sharing Networks and Dumb Terminals

Time-sharing networks are the first economic generation of computer network. These networks are throwbacks to the time when computers were expensive, large machines that required their own controlled atmospheres and experts. Time-sharing network hardware consisted of a single mainframe computer and a number of individual links to dumb terminals. Dumb terminals had no memory or storage capabilities so they were slower at sending and receiving data than the single mainframe they were linked to. Printing of text and graphics could be done only at the mainframe. Data flowed from a dumb terminal's keyboard over primitive conduit to the mainframe where computations were performed and print jobs were executed.

The terminal's screen could display text responses, but special keyboard control characters had to be used to prevent the display from scrolling faster than the terminal operator could read it. When receiving input from remote terminals, transmission was asynchronous so that the mainframe on the other end received characters one at a time as they were typed, serving to hamper the maximum data rate. Most data entry was in the form of punched cards that had to be taken to the mainframe to be read by special card readers. Asynchronous transmission protocols were developed to include a start bit, stop bit, and optionally, a parity bit for each character adding to the amount to be transmitted.
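
The framing overhead is easy to quantify. The short calculation below assumes eight data bits with one start, one stop, and one parity bit per character, a typical but not universal asynchronous configuration, and one bit per signal change at 300 baud.

    def async_overhead(data_bits=8, start_bits=1, stop_bits=1, parity_bits=1):
        # Share of each asynchronous character that is framing rather than data.
        total = data_bits + start_bits + stop_bits + parity_bits
        return total, (total - data_bits) / total

    total, overhead = async_overhead()
    print(f"{total} bits on the wire per 8-bit character, {overhead:.0%} of them overhead")
    # At 300 baud with one bit per signal change, effective data throughput is only:
    print(round(300 * 8 / total), "data bits per second")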

Computer engineers of the time concentrated on developing faster central processors and more programming flexibility instead of on improving data communication. Data communication was performed via punched cards and magnetic media instead of over physical networks. The early lack of emphasis on data communication was partly because early networks that needed long connections (outside the building where the mainframe was located) relied on noisy analog telephone local loops and analog connecting networks. Another reason was that data rates were 300 bps or slower.

Time-sharing networks typically were not owned by the companies that used them. Instead, they were early examples of outsourcing, where access to the network, connection time, and processor time were each billable items. Current examples most similar to time-sharing networks include POS (point-of-sale) terminals at gas stations and stores or ATMs that perform specific tasks (such as credit card verification or cash withdrawal) using extremely simple terminals that share a central processing system.

3.5.2 Centralized Networks

Centralized networks are the second economic generation of computer networks, as shown in Figure 3-7. While similar to time-sharing networks, centralized networks differed in several important ways. The first difference was that the central computer (or minicomputer host) was typically owned by one company instead of shared. As time passed from the 1960's into the late 1970's, scaled-down mainframes (minicomputers) became more affordable for large and then medium-sized firms. Second, centralization differed from the previous time-share network in that up to 100 concurrent users could share overall computing capacity through remote batch processing and limited local processing.

Figure 3-7: Generic example of a centralized computer network

The main processor (or host computer) was centrally located (at headquarters, HQ) with remote and local terminals (A through I in the figure) connected via direct links. The term centralized means that databases and the processor were kept at the host location. While later centralized networks featured remote microcomputers linked to a mainframe or minicomputer, the bulk of capacity still rested in the central host computer.

Communication became more synchronous and more bi-directional, moving from the simplex time-sharing regime toward half-duplex and eventually full-duplex operation. The centralized configuration permitted some terminal-to-terminal message traffic and local printing of jobs. Centralized computing also allowed for some limited storage and computing on some of the remote microcomputers attached to the network. Centralized computing replaced time-sharing and dumb terminals when mainframe prices fell and 8086 and 8088 PCs first became available, so that an organization could own (rather than rent) its computer network. Each terminal had a limited ability to share resources of the host.

Centralized connections were accomplished locally via coaxial cable or via low-speed dial-up (dedicated or on-demand) for long connections. Conduit originally used in these systems could transmit over telephone lines at 300-1200 bps to remote terminals, while early coaxial cable reached local machines at a faster data rate. The original centralized networks were stand-alone networks that restricted benefits and ownership of the network to a single organization at one or more locations.

Under centralized computing, flow control and error detection were especially important because dial-up connections and local conduit were subject to interference and noise. A single interruption due to noise or interference in a large and lengthy transmission could require that the entire contents be retransmitted, often with no better results. File and application sharing began to heighten the need for data communication.

Centralized networks were never phased out of many organizations. Often, as an organization upgraded computers, a minicomputer-based centralized network remained operational as a secure, separate "legacy" network. Newer PC-based network designs relied on more user-friendly software to accomplish simpler tasks. The cost of reprogramming existing specialized centralized networks to function on less-powerful microcomputer networks was high.

Examples of centralized computers include large retailer POS cash registers, airline reservation systems, and teller and loan officer systems in banks. To enable the host to be shared among all users, message switching was used instead of circuit switching. Under early message switching schemes, the host was able to "store and forward" instructions from terminals and messages from terminal to terminal, so as to avoid congestion upon transmission (such as "busy signals") in the telephone network.

3.5.3 Early LANs: Peer-to-Peer Networks

Early LANs (Local Area Networks) are the third economic generation of computer network and would evolve into the first of two client-server generations. LANs are used to connect computers to a network within a single office, floor, building or small area. The scientific basis for LANs is the locality principle that states that computers are most likely to need to communicate with nearby computers rather than distant ones [Comer and Droms, 1999].

LANs began to be used in the early to mid 1970s to allow groups of computers to share a single connection in a larger centralized network. LANs are owned and readily managed by a single organization. They are useful to an organization seeking to share hardware, applications, data, and communications within a local area. Many LANs were peer-to-peer networks where most stations on the network had equal capacities for processing and display.

However, the sharing of a single link among multiple computers required a way to prevent two or more computers from transmitting at the same time (collision). Several arbitration mechanisms called MACs (Medium Access Controls) were developed within the OSI data link layer to avoid interruption of one computer's transmission by another's. Early LANs used ALOHA and slotted ALOHA, low-efficiency protocols with collision rates of at least 26% [Jain, 1999]. The way collisions are handled differs by protocol, and protocols are implemented in topologies. The IEEE and ANSI (American National Standards Institute) created standard 802.3 (Ethernet), which became, and remains, the most popular LAN standard.
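
Ethernet's own arbitration mechanism, CSMA/CD, resolves collisions with binary exponential backoff. The Python sketch below shows the backoff rule in simplified form; the 51.2 microsecond slot time corresponds to 10 Mbps Ethernet, the cap of ten doublings follows the 802.3 convention, and the random seed is included only so the example is reproducible.

    import random

    def backoff_microseconds(collision_count, slot_time_us=51.2):
        # Binary exponential backoff: after the k-th collision, wait a random number of
        # slot times drawn from 0 .. 2**min(k, 10) - 1 (a simplified 802.3-style rule).
        k = min(collision_count, 10)
        slots = random.randint(0, 2 ** k - 1)
        return slots * slot_time_us

    random.seed(7)
    for collisions in (1, 2, 3, 5):
        print(collisions, "collisions ->", backoff_microseconds(collisions), "microseconds of backoff")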

LANs are connectionless services so that once a computer gains access to the network, it puts packets on the network, but has no assurance that the distant computer gets them. LANs allow unicast, multicast, and broadcast messages so that a single transmission may be sent to a single network user, a subset of users, or all users simultaneously. LANs connect stations to the network including computer workstations, printers, and other hardware. Users could then share resources within the LAN. LAN protocols use the physical and data link layers of the OSI protocol stack.

LAN topologies define how network devices are organized logically. Figure 3-8 shows three early LAN topologies: local bus, ring, and star. All three are connectionless services. Bus topologies (which work with early Ethernet standards) use a short dedicated connection (AUI cable) to a single shared conduit. Original Ethernet wiring was thicknet, a heavy coaxial cable (10Base-5) with a maximum segment length of 500 meters as measured by the length of conduit, not by the direct distance.

Later, Ethernet cabling was thinnet, a thinner coaxial cable (10Base-2), routed directly to a BNC (T-shaped) connector on each station. Thinnet carried the same 10 Mbps data rate as thicknet but was cheaper and easier to install, with a maximum segment length of about 185 meters (607 feet). 10Base-2 has very specific limitations on the total number of stations on a network. A maximum of 30 stations per segment is allowed, with trunks up to five segments long (two of which must be unpopulated link segments used only to extend distance), for a total trunk length of 925 meters (3,035 feet). Long trunks needed repeaters (devices that amplified the signal). Thinnet's lower cost relative to thicknet added to its popularity.

Figure 3-8: LAN topologies

Thinnet and thicknet were expensive, and the larger the network, the greater the likelihood of collisions due to delay. Repeaters helped extend maximum segment length by boosting signals; bridges helped filter traffic to avoid congestion and collision. Switches further increased throughput and design performance. However, these extra devices were expensive until the 1990's. That expense, combined with relatively high cable costs, is why LANs are local.

Of the three LAN topologies, Ethernet's bus topology has the advantages that fixed delay is almost zero, the protocol is simple, and stations can be added without shutting the network off. However, it allows no traffic priorities and (while better than ALOHA and slotted ALOHA) has a high possibility of collisions, which seriously hurts throughput at high utilization.

Under the early star topology, each computer had a separate connection to a hub or switch that receives messages from the sending computer and then sends them to the receiving computer. Hubs are physical layer devices used to connect multiple workstations, each with a dedicated cable. Ring topologies had no central component. Instead, connections go from one computer to another as point-to-point links. Bits flow in a single direction around the ring so that if there is a break in the ring, all communication ceases. Twenty years ago, the token ring protocol using the ring topology was more popular than the Ethernet bus. Unlike Ethernet, the token ring protocol allows for priority levels of traffic and handles high utilization well without collision. Stations cannot transmit until they seize a signal token that rotates around the network. Then, only the station with the token has the right to transmit thus avoiding collisions.

LANs came to be associated with client-server computing in the 1980's as early LANs became networks of more sophisticated PCs. The discussion focuses first on the earliest LANs, then on LAN topologies and on how new OSI protocols were developed to help LANs evolve further as networked PCs ran increasingly communication-intensive applications, and then on the early LAN client-server model. Later generations of LAN technologies are presented along with the second generation of client-server networks, WAN-LAN client-server networks.

As the microcomputer gained prominence, many organizations began to implement the client-server model into their LANs. Client-server networks evolved from the centralized network model with servers replacing hosts and client PCs replacing less powerful terminals. However, many LANs were implemented with UNIX workstations or other minicomputer and mainframe terminals. A server computer at headquarters could be either a mainframe or a specialized server (micro or mini computer). Servers were connected (directly and indirectly) to groups of client computers (typically microcomputers) at numerous locations.

Specialized servers such as file servers, print servers, and database servers came to be used as part of LANs. File servers allowed users to share access to network files on a single, powerful machine with ample hard disk storage. Printer servers ran special spooling software to establish printer queues, removing background printing burdens from local PCs. Computer servers were used so users could share a high memory computer dedicated to performing complex computations. The use of specialized servers helped lower overall costs of hardware because most network users could have simpler PCs (or workstations) on their desktops and use high-powered machines only when needed.

3.5.4 Later Client-Server Networks: LANs and WANs

As a local network (LAN) within an organization's building, the client-server group offered a reduction in cabling requirements. Client-server networks also pioneered the shared use of individual server computers (such as F in Figure 3-9) that performed a specialized task for all users. As a wide area (WAN) or campus network, the client-server model permitted fewer "long" connections while keeping all users on the network.

Figure 3-9: Early example of a client-server computer network

For WANs, the client-server design allowed local transmission of messages within a group ({A, B}; {C, D, E}; {G, H, I}) without using distant connections to HQ. Network hardware and software were typically owned by the organization, along with local cabling. Client-server computing took advantage of the so-called 80-20 rule (80 percent of traffic was local and only 20 percent long) to save on connection costs and to exploit the greater bandwidth (capacity) available through local connections than via long connections.

Early client-server networks were also stand-alone networks, owned entirely by a single organization to allow internal data communication. Most client-server network software allowed some local e-mail, security, and backup functions using user-friendly PC packages such as Novell NetWare instead of requiring programmers to write more complicated and costly high-level program code. Client computers can share the resources of the server, including the processing power of specialized servers. In the client-server model, several computers are grouped together to share transmission links and local resources.

WAN and later LAN client-server networks are the fourth economic generation of computer network. WANs have two important advantages over LANs. First is the ability to span longer distances: where LANs are capable of transmitting only a few thousand feet, WANs are capable of spanning thousands of miles. The second advantage is scalability. With a LAN, the addition of a new station or group of stations can be a difficult, time-consuming, and even impossible task. It is typically easier to connect another LAN to an existing WAN than to add a station to an existing LAN.

Later generations of LANs saw the locality principle (where computers are assumed only to need to communicate with nearby computers) evolve into an 80/20 rule, which stated that eighty percent of traffic was local and the rest distant. Later LAN wiring saw the elimination of expensive thicknet and the decline of thinnet in favor of twisted-pair Ethernet. This wiring, called 10Base-T, resembles a telephone line cord. 10Base-T plugs into a network card slot on each computer much like a residential telephone line cord plugs into a telephone jack. Each 10Base-T connection goes from a single workstation to a hub or switch. 10Base-T is capable of transmitting up to 10 Mbps over the copper twisted-pair line. While original Ethernet used a bus topology, later LAN Ethernet employs a star-configured bus topology (see Figure 3-8). This topology is physically a star topology, but logically a bus topology.

10Base-T specifications require the use of category 3, 4, or 5 UTP (Unshielded Twisted Pair) cable. Due to signal loss, the distance from station to hub cannot exceed 100 meters (328 feet). Up to twelve hubs can be attached to a main hub or to coaxial or fiber backbones, so that the maximum number of stations on a LAN can reach 1,024 without using bridges.

A more recent development was the introduction of Fast Ethernet. Fast Ethernet utilizes the 100Base-T specification and is capable of transmitting 100 Mbps over enhanced Category 5 cable (and other cable types) over still greater distances. Over fiber optic cable backbones, up to 10 Mbps may be transmitted up to 4 kilometers [Sheldon, 1998]. Because less expensive cabling than fiber optic varies by type, Fast Ethernet's distance limitations tend to be less than 4 kilometers, on the order of 10Base-T's. To ensure proper timing, the hub-to-station distance cannot exceed 100 meters with 100Base-T, while the total distance between any two points cannot exceed 250 meters under twisted-pair limitations.

New category 6 (200 MHz bandwidth) UTP wiring is becoming available in the market that should increase the capacity and distance of transmission to handle faster Ethernet networks. Category 6 (Class E) will require high performance RJ45 jacks, special training, and new standards to become a reality. Also in the pipeline is Category 7 (600 MHz bandwidth) twisted pair. However, Category 7 is a shielded twisted pair (STP) cable that cannot use existing RJ45 connectors and does not have an established standard. Category 7 STP may end up becoming surpassed by fiber optic cable to the desktop [RW Data, 1998, p. 6].

Client-server WANs can use the frame relay protocol, SMDS, and ATM. All three of these connection-oriented protocols created a need for businesses to lease point-to-point or network-level connections to support these higher OSI-layer protocols. The private data networking market was born. It is covered in more detail in 4.8.

Connection-oriented networks are better for real-time applications or those that cannot handle packet resequencing, and they offer the ability to reserve bandwidth (capacity) along with hierarchical (network layer) addressing. However, connection-oriented networks use static path selection. Hence, a failure at any point along the static path can cause transmission failure. Bandwidth is often inefficiently allocated in a connection-oriented network because reserved circuits can claim more capacity than they actually use.

3.5.5 Distributed Client-Server Networks

Distributed client-server networks are the fifth economic generation of computer network. These networks began to be seen in the early 1990's, when they were often appended to existing client-server networks. As more users and more traffic put pressure on the shared resources of the client-server design, a new network design was needed to reduce congestion. By distributing both processors and databases at different locations, a single network could serve more users simultaneously.

Figure 3-10 depicts an example of a distributed network. In a typical configuration, the organization owns all client computers and owns or leases the servers. Multiple platforms are tied together using full-time connections over leased lines. Software automatically adjusts according to traffic patterns by routing traffic over the cheapest connection. All users share all resources, though typically databases and other resources are duplicated (distributed) through the system.

Distributed networks offer several advantages over client-server networks. Among those advantages are redundancy and fault tolerance. If the sole server fails (or is overloaded) in a client-server network, users may be unable to exchange data. With the distributed configuration, data that is on one server may be "mirrored" by an exact real-time copy on one or more other servers. That way, should a single server (or even two or three) fail, there is enough redundancy to achieve necessary performance. Additionally, distributed networks offer a greater number of paths for communication, so the network can weather connection failures and traffic can be routed around congestion. Furthermore, specialized servers (such as server C in Figure 3-10) take advantage of the benefits of specialization on a larger scale than was possible with specialized client computers in the client-server model. Real-time mirrored servers can meet the need for redundancy and fault tolerance in so-called "mission critical" tasks. Since global redundancy and fault tolerance throughout the network can be expensive to achieve, portions of the network can remain as they were.

Figure 3-10: Example of a distributed network with three client-server sub-nets and one specialized server

Users share resources in several ways. Each user in a group shares transmission lines to distant servers and hence is able to share distant server resources simultaneously. Users in a local group are able to share local resources. All users are able to share specialized servers over the WAN as well. Under the message-switching concept, network engineering allowed for transportation of unequally sized large blocks of data from point-to-point in the network. Special switching algorithms and devices called routers came on the scene that were used to regulate traffic and prevent large blocks from a single point from dominating the network to the exclusion of other users. Under message switching, routers had to buffer long blocks on disk. The conduit itself was often full duplex, capable of transmitting in both directions at once. Full-duplex circuits reduce the line turnaround time delay inherent in half-duplex lines.

Packet switching began to supplant message switching as networks became more interactive and more complicated. Packet switched networks broke up ill-defined blocks into discrete equally sized packets. The first packet can be forwarded before the second packet arrives, cutting delay and enhancing throughput. New routers were designed to buffer packets in memory rather than only on disk.
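
A simple worked comparison shows why this matters for delay. The Python sketch below assumes equal-sized packets, identical link rates on every hop, and no propagation or processing delay; the message size, packet size, link rate, and hop count are illustrative numbers only.

    def message_switching_delay(message_bits, rate_bps, hops):
        # Each hop must receive the entire message before forwarding it.
        return hops * (message_bits / rate_bps)

    def packet_switching_delay(message_bits, packet_bits, rate_bps, hops):
        # Pipelining: later hops forward early packets while later packets are still arriving.
        packets = -(-message_bits // packet_bits)   # ceiling division
        packet_time = packet_bits / rate_bps
        return (packets + hops - 1) * packet_time

    # A 1-megabit transfer over 4 store-and-forward hops on 64 kbps links, with 8-kilobit packets:
    print(message_switching_delay(1_000_000, 64_000, 4), "seconds with message switching")
    print(packet_switching_delay(1_000_000, 8_000, 64_000, 4), "seconds with packet switching")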

3.5.6 Inter-Networks

The sixth stylized economic generation of networks, the inter-network, (root of the term Internet) was born from the distributed network. Figure 3-11 shows an inter-network of three separate computer networks. In its most general form, an inter-network is simply a network of networks. As shown in the figure, two distributed networks (networks 1 and 3) are linked with a third centralized network (network 2). The important ideas here are scalability, compatibility, interconnection, and addressability. Until now, networks have been considered as standalone configurations providing direct benefits only to those connected. An inter-network permits the adding of previously constituted networks to existing ones so that the scale of the network is expandable. Scalability had depended on the compatibility of software and hardware among the networks to be added. However, an important part of the inter-networking configuration rests on being able to combine networks that were previously incompatible by using cross-platform protocols and gateway computers, such as (1) and (3) in the diagram, to permit communication between different systems.

Figure 3-11: Example of an inter-network with two distributed client-server sub-nets and one centralized network

Inter-networks can carry a mix of connection-oriented and connectionless traffic. This is especially important because both dynamic and static path selection can be made available to users, which allows connection costs to be lowered dramatically. Instead of using dedicated point-to-point connections, virtual circuits and packet switched networks cut down on the number of connections to be paid for. Hence, agribusinesses with LANs at several locations (nationally or even internationally) could afford to connect them in a single WAN inter-network.

Users of inter-networks share resources of their own group, own network, and all interconnected networks. Inter-network design allowed dynamic bandwidth allocation and packet routing. A single data transmission could be split up into packets with individual packets taking different paths through the network if necessary. Figure 3-11 also illustrates an important kind of computer called a gateway which "acts as a translator between two systems that do not use the same communication protocols, data formatting structures, languages, and/or architectures" [Sheldon, 1998, p. 432]. Individual addresses such as 3.B.2 and 1.D.1 in Figure 3-11 helped in administration and security of inter-networks. However, the old 80/20 rule was now reversed. The majority of network traffic became non-local in nature, requiring greater bandwidth (capacity) and throughput.

Today particular kinds of inter-networks are often distinguished by the terms intranet, Internet, and extranet. An intranet is an inter-network that connects computers within a firm and is used for secure internal communications. The Internet is a public inter-network familiar to users of e-mail and the World Wide Web (WWW). The Internet uses the TCP/IP set of protocols. An extranet is an inter-network that connects an organization with suppliers, customers, or other stakeholders.

3.6 Operations Research and Hypercommunication Network Form

Hypercommunication network engineering is based on three sources. One hundred years of experience with telephone network design and fifty years of computer network design are the first two sources. Additionally, a macro network literature developed during the 1970s and 1980s in operations research based on algorithmic breakthroughs that came from overlaps between telephone and computer network engineering research to yield the foundation of hypercommunication networks [Hillier and Lieberman, 1990, p. 333].

This third source of engineering and architectural fundamentals for hypercommunication networks is the operations research (OR) literature. OR's multi-disciplinary scientific framework has been used to further understanding of problems from telephone networks, computer networks, and macro networks. The rich OR literature has expanded the telephone and computer network literature into a more general literature capable of designing, operating, and optimizing the interconnected mesh of data and voice networks that makes up a converging hypercommunication network. Indeed, the technical breakthroughs in this area are so vast and proceeding at such a rapid pace, that it is only possible here to discuss a few general trends.

It is noteworthy that OR's general perspective results in a positive synergy between telephone and computer network design. The more cost-efficient hypercommunication network design provides a greater volume of communication and choice of message types than would be obtained by simply combining computer and telephone networks. The hypercommunication network has resulted from new technologies that have lowered communication costs while providing dramatic increases in capacity. However, while the hypercommunication network has pushed back binding technical constraints, it has also brought forward new issues that are solved through OR algorithms.

OR models have facilitated the convergence of telephone and computer networks by providing a general networking framework. Such a generalized view is important in merging those networks (along with related networks such as wireless, paging, etc.) into the unified hypercommunication network. Much of the terminology came from graph theory in discrete mathematics [Skvarcius and Robinson, 1986]. OR also uses electrical engineering, computer science, and economics as source disciplines.

The OR orientation defines a network as being comprised of nodes (also sometimes called points or vertices) and arcs (or edges or branches). A network with n nodes could have as many as n(n-1)/2 arcs [Arsham, 1999]. If the flow through an arc from one node to another goes only in one direction, it is said to be a directed arc; if the flow may go in either direction, it is an undirected arc, better known as a link [Hillier and Lieberman, 1990, p. 336]. There could be as many as twice as many possible directed arcs (n(n-1)) as undirected. A path is a sequence of arcs that connects nodes with no intermediate nodes repeated. Paths are called directed if and only if all arcs in the sequence are directed from the origination node towards the destination node. Undirected paths are a sequence of connecting links where the flow is from origin to destination, but not all connecting arcs need point towards the destination. A path that starts and ends with the same node is called a cycle. The entire network is connected if every pair of nodes is connected by a path.
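
These counts are easy to verify with a few lines of Python; the four node labels are arbitrary.

    from itertools import combinations, permutations

    def possible_arcs(nodes):
        # Enumerate every possible undirected arc (link) and directed arc among the nodes.
        undirected = list(combinations(nodes, 2))    # at most n(n-1)/2 of these
        directed = list(permutations(nodes, 2))      # twice as many: n(n-1)
        return undirected, directed

    nodes = ["A", "B", "C", "D"]
    links, arcs = possible_arcs(nodes)
    print(len(links), "possible undirected arcs (links):", links)
    print(len(arcs), "possible directed arcs")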

The OSI network hierarchy is more easily traversed by OR-inspired hypercommunication networks than by telephone and computer predecessor routing hierarchies. Recall that the telephone network and computer networks each featured combinatorial, probabilistic, and variational kinds of problems. By merging voice and data traffic into a common hypercommunications network, the dimensions of the three technical problems are changed. The resulting network is capable of greater technical efficiency. Just as the hypercommunication model resulted from the combination of the interpersonal and mass communication models, so too does the hypercommunication network result from the technical interplay of the telephone and computer networks.

As OR studied increasingly complex networks, new combinatorial algorithms were discovered that permitted network designers to understand how to pack more communication traffic onto the available paths of a complex network. Euler's answer to the Koenigsberg bridge problem was an early example of how to approach the traversability of a network: finding a single route that would connect each node while crossing each arc exactly once [Miller and Heeren, 1978, p. 327-328]. Such traversability problems were a special case of combinatorial problems in network design that allow efficient message distribution from a single sender to mass recipients.

The advent of computers allowed mathematicians to produce reducible configurations of complex arrangements that could not otherwise be categorized, as in the case of the Appel-Haken four color map theorem [Appel and Haken, 1977]. In that case, only with a computer were mathematicians able to prove an 1879 conjecture that any possible map of bounded areas requires a maximum of four colors. Graph-theoretical rules reduced the problem to 1,936 configurations, which the computer then checked using billions of logical calculations. Statistical and mathematical work in OR was used to program computers in a similar way to reduce network design combinatorial problems (such as the packing problem) into their simplest form.

OR is useful with new generations of probabilistic problems of network design and optimization that came from larger, more sophisticated voice and data networks. Traffic over computer networks that previously could not have been managed due to the large numbers of nodes, links, paths, cycles, and message types could now be modeled and estimated using computer analysis of known multivariate probability distributions.

More specifically, Table 3-3 summarizes how OR has been used to combine the three classes of problems in telephone transport networks with the four computer network engineering design objectives into a set of technical algorithms for a converged hypercommunications network. In reality, many of the algorithms listed cannot be compartmentalized as neatly as shown in the table. The purpose of the table is only to show the range of OR algorithms and how they fit together. No attempt will be made to explain individual algorithms here since far better treatments can be found elsewhere [Corman, Leiserson, and Rivest, 1994; Lawrence and Pasternack, 1998; Arsham, 1999].

Combinatorial class problems involve the geometric structure and topology of a network; in telephony, they originated in finding the full set of possible paths a telephone call could take over the transport network. The packing problem becomes exponentially more complicated for packet-switched and cell-switched networks where individual calls or data transmissions are split into parts. Further adding to complexity, separate calls or data transmissions share physical connections via statistical multiplexing (where a single physical link is shared in space or time with other users). Multiplexing comes up again in 4.2 when QOS and transmission are covered.

Table 3-3: OR algorithms and convergence of telephone and computer network optimization
Columns: three problem classes of telephone networks. Rows: four computer networking design objectives.

1. Sending rate control (each node): Combinatorial (packing): interior point algorithms; Probabilistic (traffic circulation): burstiness models
2. Receiving flow control (each node): (no algorithms listed)
3. Signal modulation rate optimization (each arc): Combinatorial (packing): non-statistical multiplexing algorithms
4. Network optimization (all arcs & nodes):
   a. Minimize average delay of entire network: Combinatorial: multi-path routing; Probabilistic: queuing theory
   b. Minimize largest delay in any network segment: Combinatorial: multi-path routing; Probabilistic: distance-vector (Bellman-Ford algorithm); Variational (routing): link-state, Floyd-Warshall (shortest path algorithms)
   c. Maximize traffic over capacity: Combinatorial: non-statistical multiplexing algorithms; Probabilistic: arc capacity perturbation analysis; Variational: tolerance analysis
   d. Minimize NTS (non-traffic-sensitive) costs: Combinatorial: drop algorithm; Probabilistic: statistical multiplexing & time-to-arrive; Variational: minimum cost net flow models
   e. Minimize TS (traffic-sensitive) costs: Combinatorial: greedy drop algorithm; Probabilistic: statistical cost perturbation analysis; Variational: activity-duration

Sources: Corman, Leiserson, and Rivest, 1994; Lawrence and Pasternack, 1998; Arsham, 1999.

The probabilistic problem of traffic circulation is similarly made more difficult when computer networks are combined with voice networks. Burstiness models are used to take the ratio of peak to average traffic between points so that sending, transmission, and receiving rates will be adjusted appropriately for the message primitive and traffic type. Many kinds of statistical and probabilistic algorithms are used to minimize TS (Traffic Sensitive) and NTS (Non Traffic Sensitive) costs, decrease delay, and maximize traffic efficiency over partial and entire networks.

The variational class of problems focuses on how switching structure and message type interact dynamically. While each of the three classes is important, the variational class is particularly important in a variety of network optimization tasks such as managing queuing delay, propagation delay, throughput, and congestion avoidance in the converged hypercommunications network. Variational classes of problems usually involve algorithms with nested combinatorial and probabilistic algorithms.
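
Among the routing tools listed in Table 3-3 are shortest-path methods such as the Bellman-Ford algorithm. The Python sketch below applies Bellman-Ford to a small, entirely hypothetical five-node network whose arc weights stand in for per-arc delays in milliseconds.

    def bellman_ford(nodes, arcs, source):
        # Bellman-Ford shortest paths: relax every directed arc up to n-1 times.
        # 'arcs' is a list of (from_node, to_node, delay) triples.
        delay = {node: float("inf") for node in nodes}
        delay[source] = 0.0
        for _ in range(len(nodes) - 1):
            for u, v, w in arcs:
                if delay[u] + w < delay[v]:
                    delay[v] = delay[u] + w
        return delay

    nodes = ["HQ", "A", "B", "C", "D"]
    arcs = [("HQ", "A", 4), ("HQ", "B", 1), ("B", "A", 2), ("A", "C", 5),
            ("B", "C", 8), ("C", "D", 3), ("B", "D", 12)]
    print(bellman_ford(nodes, arcs, "HQ"))   # lowest-delay route from HQ to every other node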

Table 3-4 summarizes the main technical characteristics of the networks covered in Chapter 3.

Table 3-4: Comparison of technical network characteristics
Network | Hardware | Levels | Conduit | Function
Telephone connecting system | Individual phones, switches | ≤ 5 | Copper & fiber | Analog telephone calls (4 kHz)
Telephone transport | CO equipment | ≤ 4 | Fiber | Digital telephone transmission (56-64 kbps channels)
Time-share network | Mainframe, dumb terminals (leased) | 2 | Copper | Share cost of computing among many firms
Centralized computer | Mainframe, dumb terminals (owned) | 2 | Copper | Data rate to 14.4 kbps
Client-server LAN | Server & PCs or workstations | 2 | Short-range coax, STP, or wireless | Local data communication & peripheral sharing
Peer-to-peer LAN | PCs (thick clients) | 1 | Short-range coax, STP, or wireless | Local data communication, peripheral sharing, collaboration
Client-server WAN | Server and PCs or workstations | ≥ 2 | Leased lines or dedicated connection | Data communication over distances, collaboration
Distributed network | Servers, PCs, workstations, some multiple platforms, thin clients | ≥ 3 | Leased lines or dedicated connection | Local & distant data communication, peripheral sharing, collaboration
Inter-network | Multiple platforms: servers, workstations, PCs, thin clients & devices | n | All, virtual circuits | All

The technical network characteristics shown in Table 3-4 play a further role in 4.8 where they are used to separate private data networking and the Internet from the PSTN and enhanced telecommunications. As will be seen then, the advanced features of the most recent generations make hypercommunications possible.

3.7 Network Economics

There are many sides to the network economics literature. Many of these have already been discussed in Chapter 2 and earlier in Chapter 3 including weightless economics, new economics, and the path dependence school. Also on the list are Internet economics (McKnight and Bailey, 1997), new evolutionary school views of dynamic market externalities (Keibach and Posch, 1998), and a new view of market structure in the network age (Varian, 1999).

Yet, while network economics is a broad and exciting area, researchers have produced an incredible variety of literature, much of it within the past five years. For example, one online bibliography on network economics is over sixty-six single-spaced pages [http://www.stern.nyu.edu/networks/biblio.html downloaded 9/2/99]. Nowhere in economics is the issue of time compression so important as in the network economics field. While possible future "seminal papers" wait for academic review and journal publication, they often are already being tapped by academia, industry, and government.

The hypercommunications network (especially the Internet) has produced new forms of academic communication that are faster, cheaper, more interactive, and have greater freedom of entry and fewer gatekeepers than the standard technical or economic journal. Varian argues that now the "unit of scholarly communication is the thread. The thread allows interaction, shortness with detail behind it, etc." [Varian, 1996, p. 51]. However, the thread is not without problems.

Unlike traditional academic literature, the new Internet "literature" demands careful scrutiny for several reasons. Citations can be troublesome. In every case a URL (Uniform Resource Locator, or web address) can be used, together with date, time, file type (such as HTML), and title. However, new versions can appear overnight and pagination (unless an Adobe PDF or other standard is used) varies by monitor, printer, web browser, and version.

Most departments and many faculty members offer online working papers, and numerous public repositories exist, such as the Washington University economics working paper web site, http://econwpa.wustl.edu/wpawelcome.html. However, because of interactive websites, web address changes, hosting equipment and software changes, and QOS (Quality of Service) issues, today's web citation is tomorrow's "404 file not found".

Another reason for scrutinizing Internet literature and scholarly communication threads is source credibility. Because the cost of participation is lower, access is anonymous, and much of the posted information is probably not read, no imprimatur of quality or academic approval has been stamped on the online network economics literature.

With an introductory warning out of the way, a brief introduction to network economics can be developed. In 3.7.1, several economic fundamentals about networks are given. Next, the topic of positive network synergies or effects (also called externalities) is considered in 3.7.2. Negative network synergies and effects are the subject of 3.7.3. Sources of externalities are explored in 3.7.4. Finally, sub-section 3.7.5 examines the implications of network economics for firms such as agribusinesses.

3.7.1 Economic Fundamentals of Generic or Macro Networks

The network literature typically distinguishes the properties of specific (micro) networks from the properties common to all networks (generic or macro networks). Generic networks include a variety of structures, from oil distribution, electricity, and commodity flow networks to hypercommunication networks. Table 3-5 summarizes six sets of properties to yield eighteen economic fundamentals shared by all networks.

Table 3-5: Economic properties of generic and hypercommunication networks
Author and property Generic network economic implication Hypercommunications network implication
Fundamental technological characteristics, David (1992)
Capacity indivisibilities Increasing returns (decreasing costs) Deregulation (Ch. 5), falling prices (6.4)
Benefits to single users depend on accessibility of other users Demand externalities High per subscriber valuations fall with time (6.3), Incentives for early subscribers (6.4), Installed base
Compatibility Supply externalities Path dependencies (3.2.2, 3.7.3), standards battles (4.5.2)
Factors common to all networks, Crawford (1997)
Capacity to produce cannot be stored Network must be designed for peak load Dedicated vs. shared circuits (4.3, 4.7.3), packet or cell switched vs. circuit switched (4.7.4, 4.8.2, 4.8.3), Line consolidation (3.3)
Net flow vs. total flow Node to node transfers can be perfect substitutes or perfect complements Bandwidth arbitrage argument
Frictional or line loss Not everything sent from one node to another is received In-band vs. out-of-band signaling. T-1 vs. ISDN-PRI (4.3.2, 4.7.3, 4.7.4), attenuation (4.3.2, 4.4.1)
Self-powering Costs reduced, possible self-sufficiency DC-powered devices (certain telephones) work in power failures
Multiple units & capacity measures Confusion in pricing Confusion in pricing: TS, NTS, QOS, and bandwidth (4.2)
Economic ownership, Liebowitz and Margolis (1994)
Ownable Costly to configure, has physical parts Physical infrastructure (4.3.2, 4.3.3, 4.4, 5.2.2)
Metaphorical or virtual Inexpensive to reconfigure, few physical parts Virtual circuits (3.4.2, 4.8.2), architecture (3.2.2)
Transparency, MacKie-Mason, Shenker, and Varian (1996)
End-to-end non-transparency Service provider provides link among nodes, not aware of what user receives Application-blind telephony, data network carriers, ISPs (4.6-4.9)
End-to-end transparency Service provider knows the content or quality of flows among nodes Application-aware, value added services (4.9.3-4.9.10)
Network developmental stage, Noam (1992)
Cost-sharing Private Private data networking (4.8)
Redistributory Mixed, regulated Traditional telephony (4.6)
Pluralistic Mixed, un-regulated Internet (4.9)
Layered hierarchical structuring, Gong and Srinagesh (1997)
Intellectual (high level) Intelligent networks Content & value-added networks
Physical (lower levels) Unbundled bearer service, common carriage Access level, wireline (4.3), wireless (4.4) transmission
Network layers Virtual network hierarchies Virtual layers, OSI except physical level (3.4.4)

The third column of Table 3-5 points out several implications specific to hypercommunication networks and points to locations in the text where these are discussed.

The first of the six sets of economic fundamentals is from David (1992), who mentions three fundamental technological characteristics of "generic" networks from the economics literature. The first of these is that production and distribution facilities have significant capacity indivisibilities so that "over some range of operation there are increasing returns (decreasing costs)" [David, 1992, p. 103]. Second, benefits available to single users depend upon how accessible other users are, "implying the existence of externalities on the demand side of the market" [David, 1992, p. 103]. Third, there are issues such as interconnection and standards resulting from technical characteristics of networks. In this way, according to David, economics reacts to technical developments (rather than the reverse):

Technical (and hence the economic) performance of the network involves interconnectedness, which, in turn, requires that some minimal level of compatibility (and inter-operability) be assured among the system's components either by "standards", or "gateways" connecting otherwise isolated systems. [David, 1992, p. 104]

The next set of economic fundamentals of generic networks includes five economic properties mentioned by Crawford (1997). One property of networks is that "their capacity to produce cannot be stored, so capacity unused today cannot be saved for use tomorrow. Note that the storage of a network's capacity to transmit is distinct from the storage of objects transported over the network" [Crawford, 1997, p. 393]. Since "the good transmitted on the network itself may be storable" or have a store-and-forward capability, there are several implications for hypercommunication networks. These include the requirement to design hypercommunication networks for peak capacities and the rationale for shared versus dedicated circuits. Other technologies such as packet switching and multiplexing serve to limit peak loads, often at insignificant unit adoption costs.
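As an illustration of why capacity that cannot be stored must be provisioned for the peak rather than the average load, the sketch below uses the standard Erlang B blocking formula to compare the number of shared circuits needed to carry a given busy-hour load against a fully dedicated arrangement of one circuit per user. The traffic figures and the one-percent blocking target are hypothetical, chosen only to make the arithmetic concrete.

```python
def erlang_b(traffic_erlangs: float, circuits: int) -> float:
    """Blocking probability when `circuits` trunks are offered `traffic_erlangs`
    of load, computed with the usual recursive form of the Erlang B formula."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

def circuits_needed(traffic_erlangs: float, target_blocking: float) -> int:
    """Smallest number of shared circuits whose blocking probability stays
    at or below the target grade of service."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_blocking:
        n += 1
    return n

# Hypothetical example: 100 users, each offering 0.1 erlang in the busy hour.
users, per_user_load = 100, 0.1
offered_load = users * per_user_load            # 10 erlangs in total
shared = circuits_needed(offered_load, 0.01)    # 1% blocking target
print(f"Dedicated circuits needed: {users}")
print(f"Shared circuits needed at 1% blocking: {shared}")
```

Under these assumed figures, far fewer shared circuits than dedicated ones carry the same busy-hour load, which is the economic logic behind line consolidation and shared or switched circuits.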

Crawford's second generic network property is that there can be important differences between net and total flows. In "commodity" networks, one unit transferred from node to node is a perfect substitute for another. However, in communications, "receiving mail (unless there is cash in the envelope) or phone calls intended for another party is typically useless for both the sender and the recipient" [Crawford, 1997, p. 393]. Yet, in the case of broadcasting, watching the President speak on one channel as opposed to another is almost a perfect substitute. This property is important in the pricing of converged networks, where bandwidth arbitrage could exist. However, while digitization does make one bit seem to be the perfect substitute of another, the message primitive, sender, receiver, and time urgency prevent bit or bandwidth arbitrage from being a global condition given today's technology.

Crawford's third property is frictional or "line" loss. In data networks, "We may think of the bandwidth used to carry header data as a frictional loss" [Crawford, 1997, p. 396]. Indeed, noise in general can be thought of as a frictional loss due to conduit properties. As sections 4.3 and 4.4 will show, line loss in the form of attenuation is a common barrier to wireline and wireless voice and data communication. Attenuation is more of a problem with digital communication because of the error sensitivity involved. Attenuation prevents both wireline and wireless signals from traveling farther than certain ranges, which is particularly important in rural hypercommunication networks.
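A worked example of line loss helps fix ideas. Attenuation is conventionally quoted in decibels per unit distance, losses in decibels add along a run, and so the surviving signal power falls off exponentially with length. The loss figure and distances below are illustrative assumptions rather than specifications of any particular conduit.

```python
def remaining_power_fraction(loss_db_per_km: float, distance_km: float) -> float:
    """Fraction of transmitted power that survives a run of the given length,
    given a per-kilometre attenuation figure in decibels."""
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

# Assumed attenuation of 10 dB per km at the signal frequency (illustrative only).
for km in (1, 3, 5):
    frac = remaining_power_fraction(10.0, km)
    print(f"{km} km: {frac:.6f} of transmitted power remains")
```

Because each added kilometre multiplies the surviving power by the same fraction, a long rural loop quickly falls below the level a receiver (especially a digital one) can use without repeaters or amplifiers.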

Crawford's fourth property is whether the network is self-powering. "If the material used to overcome frictional losses is the same as the matter transported on the network, the network is self-powering" [Crawford, 1997, p. 396]. While hypercommunication networks are not normally self-powered, there are important QOS (4.2.4) exceptions. For example, most modern business telephone systems do not work if the business loses power, though DC-powered telephones receive DC power through the line and can continue to function.

Crawford's last property (multiple capacity measures) is especially important to hypercommunications. In hypercommunications, pricing methods abound because there are multiple units of measure (bits, minutes, distance) and multiple capacity-related measures (storage, bandwidth, throughput, data rate). Section 4.2 covers bandwidth and QOS to demonstrate the complexities involved.
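The pricing confusion created by multiple units and capacity measures can be seen in a simple conversion exercise: the same transfer can be quoted in bits or bytes, in line rate or effective throughput, and in time or volume. The file size, line rate, and protocol efficiency below are hypothetical.

```python
# Hypothetical transfer: a 5-megabyte file over a 56 kbps line.
file_megabytes = 5.0
line_rate_kbps = 56.0          # nominal line rate, in kilobits per second
protocol_efficiency = 0.8      # assumed share of the line rate left after overhead

file_bits = file_megabytes * 1_000_000 * 8            # bytes -> bits (decimal megabytes)
throughput_bps = line_rate_kbps * 1_000 * protocol_efficiency
transfer_seconds = file_bits / throughput_bps

print(f"File size: {file_bits:,.0f} bits")
print(f"Effective throughput: {throughput_bps:,.0f} bits per second")
print(f"Transfer time: {transfer_seconds / 60:.1f} minutes")
```

A carrier quoting the line rate, a vendor quoting megabytes, and a user experiencing minutes are all describing the same service, which is part of the complexity Section 4.2 takes up.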

The next set of economic properties of networks concerns economic ownership. A network may be either ownable or metaphorical [Liebowitz and Margolis, 1994]. An ownable network is costly to configure partly because it has many physical parts. Ownable networks (such as the PSTN and CATV networks) can easily exclude non-paying customers through disconnection. Because of the physical parts, there are substantial capital investments and well-defined property rights. Metaphorical networks such as all English speakers or Chevrolet owners are based on direct interaction, but not physical connections. Ownership is difficult, because it is hard to exclude non-paying customers.

Virtual networks lie between ownable and metaphorical networks. They are like ownable networks in that exclusion is possible, but because there is often no real physical component, pricing is below that of the analogous service offered on a true ownable network. If a business has a single Internet e-mail account, it may establish numerous virtual e-mail addresses. Similarly, with a single telephone line, many DID numbers (virtual telephone numbers) may be established. Calls and e-mails to the virtual numbers or addresses are routed to the physical connection at a minuscule cost compared with adding physical connections for each number or address. When a business hosts its own domain or has its own PBX telephone system, the cost of adding an additional extension or address is negligible. The virtual network gives the appearance of a physical network, but at a low cost. Virtual circuits are one example of how businesses use virtual network relationships to slash their bills compared with a dedicated, fully-owned network.

MacKie-Mason, Shenker, and Varian (1996) point out another property of networks, transparency. The transparency of a network has to do with whether the firm that provides network access is aware of what individual users receive from being connected to the network. In some architectural configurations, the service provider simply acts as the link between network nodes without having any control over what the network transmits. In other cases, the service provider knows (generally or precisely) the content or quality of the transmitted matter.

A communication network can be application-blind, as is the case with the Internet or other "common carrier" networks, or application-aware, as with cable TV or online service networks [MacKie-Mason, Shenker, and Varian, 1996, p. 2]. Among application-aware networks (especially, though not exclusively, information networks), some are content-aware and others content-blind.

Another set of economic network characteristics is given by Eli Noam, who names three stages of telecommunications network development: cost-sharing networks, redistributory networks, and pluralistic networks. In cost-sharing networks, users share the full cost of the private network in proportion to their access and/or use. Private data networks (used by firms for WANs) are examples of cost-sharing networks.

Redistributory networks have certain users paying parts of other users' costs. This occurs in heavily regulated public networks such as the PSTN where urban users subsidize rural users and businesses subsidize residential service. Redistributory networks also occur when one group of users wants network access so much that it is willing to subsidize other users' access. New users impose a burden on network resources. However, power users may know that prices will fall and their benefits will rise by adding the subsidized user group. This is the principle behind the federal e-rate universal access charge assessed by telcos for telephone service. Pluralistic networks have a mixture of public and private carriage, with the Internet cited as an example. However, "the very success of network expansion bears the seed of its own demise", creating a "tragedy of the common network" [Noam, 1992, p. 124].

The last set of properties is related to network layers and levels. Intellectual or high-level networks are typically intelligent networks that provide customizable service to users. In hypercommunications this often means content such as subscription websites, video on demand, and customized information. Low-level networks are common carrier or physical level links that are used to gain access to higher network levels. Network layers such as the OSI model are often virtual structures that allow higher layers to sit atop lower layers. The net result of layers and levels is "a recursive relationship in which the cost structure of services provided in any layer is determined by prices charged by providers one layer below" [Gong and Srinagesh, 1997, p. 68].
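The recursive cost relationship Gong and Srinagesh describe can be sketched numerically: each layer's cost base includes the price charged by the layer below plus its own costs, marked up in turn. The layer names, per-subscriber costs, and markup below are hypothetical and serve only to show the recursion.

```python
from typing import Dict

def layered_prices(own_costs: Dict[str, float], markup: float) -> Dict[str, float]:
    """Price at each layer = (price of the layer below + own cost) * (1 + markup).
    Layers must be listed from the physical layer upward."""
    prices, below = {}, 0.0
    for layer, cost in own_costs.items():
        price = (below + cost) * (1 + markup)
        prices[layer] = price
        below = price
    return prices

# Hypothetical monthly per-subscriber costs by layer, physical layer first.
costs = {
    "physical bearer (access loop)": 10.0,
    "transport / common carriage": 6.0,
    "Internet access (ISP)": 4.0,
    "content / value-added service": 3.0,
}
for layer, price in layered_prices(costs, markup=0.20).items():
    print(f"{layer}: ${price:.2f}")
```

Because each layer's price becomes part of the next layer's cost, a change in the price of low-level bearer service propagates all the way up to content and value-added services.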

As particular network externalities are discussed, several points need to be remembered. First, network effects are counted in economics in two ways: as social costs and benefits or as pecuniary costs and benefits. Second, discrete choices (the inframarginal case) and continuous choices (the more familiar marginal case) are two different ways of analyzing network effects. Finally, network effects can occur in production (supply), in consumption (demand), or in both.

3.7.2 Positive Network Effects

The economic properties of networks combine to create much of what is called new economics. An important ingredient is the network externality or, more properly according to Liebowitz and Margolis, the "network effect". Liebowitz and Margolis reserve the term network externality

for a specific kind of network effect in which the equilibrium exhibits unexploited gains from trade regarding network participation. The advantage of this definition over other possible definitions is that it corresponds with the common understanding of externality as an instance of market failure. [Liebowitz and Margolis, 1994, p. 2]

Whether known as network effects or network externalities, this phenomenon is said to make supply curves slope down (in the case of network production externalities) and to cause the demand curve to slope up.

Thus, in the case of the paradigmatic network industry the market demand schedule slopes upwards (due to demand externalities) and the market supply schedule slopes downwards (due to indivisibilities and supply externalities), with the consequence that their point of intersection defines a 'critical mass' or 'threshold' scale for the activity's economic viability, rather than a stable equilibrium level of production. [David, 1992, p. 104]

However, it turns out that such externalities (if positive) do not invalidate the supply and demand curves of conventional economics. Instead, production and consumption externalities are easily analyzed within the conventional framework.

Networks exhibit positive consumption and production externalities. A positive consumption externality (or network externality) signifies the fact that the value of a unit of the good increases with the number of units sold. To economists, this fact seems quite counterintuitive, since they all know that, except for potatoes in Irish famines, market demand slopes downwards. Thus, the earlier statement, 'the value of a unit of a good increases with the number of units sold,' should be interpreted as 'the value of a unit of the good increases with the expected number of units to be sold.' Thus, the demand slopes downward but shifts upward with increases in the number of units expected to be sold. [Economides, 1996, p. 675]
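A small numerical sketch makes the critical-mass point in the David and Economides passages concrete. The setup below is an illustrative, textbook-style model (not a reproduction of either author's formal analysis): consumer i in [0, 1] is willing to pay (1 - i) times n for access when she expects a fraction n of consumers to join, so a fulfilled-expectations network size at price p solves n = 1 - p/n.

```python
import math

def equilibrium_network_sizes(price: float) -> list:
    """Fulfilled-expectations equilibria of a stylized network-effects demand.
    Consumer i in [0, 1] is willing to pay (1 - i) * n when expecting size n,
    so an equilibrium fraction n at price p solves n = 1 - p / n,
    i.e. n**2 - n + price = 0."""
    disc = 1.0 - 4.0 * price
    if disc < 0:
        return []                         # price too high: no viable network size
    low = (1.0 - math.sqrt(disc)) / 2.0   # unstable 'critical mass' threshold
    high = (1.0 + math.sqrt(disc)) / 2.0  # stable large-network equilibrium
    return [low, high]

for p in (0.05, 0.20, 0.30):
    print(p, [round(n, 3) for n in equilibrium_network_sizes(p)])
```

The smaller root is the 'critical mass' David describes: below it, expectations unravel toward an empty network; above it, they feed on themselves toward the larger, stable equilibrium. Raising the price pushes the two roots together until, past p = 0.25 in this illustration, no viable network size remains.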

Thus, the analysis of network effects depends critically upon expectations of producers and consumers. However, the unit of analysis can be hard to identify when network effects are considered, as Gong and Srinagesh mention.

Call externalities arise because every communication involves at least two parties, the originator(s), and the receiver(s). Benefits (possibly negative) are obtained by all participants in a call, but usually only one of the participants is billed for the call. A decision by one person to call another can generate an uncompensated benefit for the called party, creating a call externality. Network externalities arise because the private benefit to any one individual of joining a network, as measured by the value he places on communicating with others, is less than the social benefits of his joining the network, which would include the benefits to all other subscribers. Again, the subscription decision creates benefits that are not compensated through the market mechanism. [Gong and Srinagesh, 1997, p. 65]

The impact of positive externalities can be seen when considering that an agribusiness with an Internet account may save time by B2B cyber shopping, finding new customers, recruiting new employees, and experiencing many other benefits. Those benefits due to joining the network (network effects) may dramatically exceed all economic costs associated with becoming an Internet business and are further heightened as new customers and suppliers join the network. However, there may not be evidence of market failure, so such network effects are not properly called externalities.

There is plenty of evidence supporting the existence of positive externalities in hypercommunications. For example, consumption externalities abound in the telecommunications literature [Squire, 1973; Rohlfs, 1974; Artle and Averous, 1975; Littlechild, 1975; Oren and Smith, 1981]. Generally, the positive network effect of joining a communications network depends on whether the net social or monetary benefit (or the net present value of the stream over time) exceeds the full economic cost of joining. However, network effects can also be negative, either individually or as a net effect.

3.7.3 Negative Network Effects

As Liebowitz and Margolis state, there is no

reason that a network externality should necessarily be limited to positive effects, although positive effects have been the main focus of the literature. If, for example, a telephone or computer network becomes overloaded, the effect on an individual subscriber will be negative. [Liebowitz and Margolis, 1994, p. 1]

In fact, many kinds of negative network effects are possible. Chief among these in hypercommunications is congestion. There are also call externalities, social-managerial effects, and the danger that a single defect can destroy or damage one or multiple connections.

Congestion can include connection establishment delay and connection establishment failure for connection-oriented services such as dial-up Internet access. Congestion can cause negative network effects in other ways such as noise on wireless telephone calls, slow-loading web pages, e-commerce sites that cannot handle order loads, etc.

Negative call externalities (negative call effects) result when being on a network creates undesired costs related to individual calls or messages. Telephones connected to the PSTN face the chance of receiving prank calls, undesired sales calls, or conversations that interrupt work or pleasure. E-mail subscribers get negative externalities when they receive spam or such a large volume of relatively unimportant messages that time is lost wading through mailboxes or important messages are missed altogether.

Social-managerial effects were covered (along with other negative externalities) in the discussion in 2.5 about "unlimited" communication. Essentially, these effects include economic costs that result from retraining workers, failing to predict which technology a firm should invest in, and other path dependency issues. For rural areas, there has been a great deal of discussion regarding the worsening gap between information haves and have-nots as technology advances. In spite of the promises of communications networks, a digital divide has been established between those who have the infrastructure and training to access them and those who do not. Some research suggests this "digital divide" is widening rapidly [NTIA, 1999, p. xiii].

The dangers of belonging to a network have already been discussed under security topics in 4.2.4 and 4.9.8. Negative hypercommunication network effects that can result include cyber crime, viruses, crackers, identity theft, eavesdropping, and other threats to the wealth or well-being of businesses and individuals. Many costs of remedying these problems by purchasing security equipment and software, hiring experts, and taking other protective measures are direct, but non-accounting costs of security are also involved. All these are deducted from positive network effects.

Finally, there is an opportunity loss due to non-customization:

Information services are presently being offered as broadcast media, a misuse of their potential comparative advantage as shared network systems. The advantage of shared network systems is best utilized through developing services that are user-specific. [Steinmueller, 1992, p. 192]

If information obtained over hypercommunication networks is delivered in ways that fail to use those networks to achieve the maximum positive benefit, then there has been a welfare loss. The argument here does not concern benefits that are lost, but benefits that would have been greater to users if certain actions had been taken in content writing, design, and programming. Specifically, this is a direct reference to using the mass communication model instead of the hypercommunication model (see 2.2.2 and 2.2.3).

3.7.4 Direct and Indirect Sources of Network Externalities

Indirect network effects refer to the degree to which a technology's value depends on the set of complementary goods it has. Many argue that DVDs or CDs are examples. Indirect externalities are often classified as pecuniary or technological. Pecuniary externalities are ones where "one individual's or firm's actions affect another only through effects on prices" while with technological externalities "the action of one individual or firm directly affects the utility or profit of another" [Greenwald and Stiglitz, 1986, p. 229].

Direct network effects are inframarginal, internalized through ownership, or internalized through transactions. Direct effects refer to the degree to which a technology's value derives from users' ability to interact with other users of the network. Importantly, as Liebowitz and Margolis (1994) point out:

The literature of conventional externality is largely concerned about the level of externality-bearing activities--too much pollution or congestion, too few Good Samaritans. The network-externality literature, on the other hand, is rarely concerned with determining optimal network size, but often concerned with the choice among possible networks, i.e. discrete choices. [Liebowitz and Margolis, 1994, p. 8]

However, the "representative network externality problem" consists of cases where solutions require individual and group actions and dependencies.

Some action would be socially wealth-increasing if enough people joined in, but each agent finds independent action is unattractive. The familiar tax-and-subsidy solution to externality problems (a solution based on altering marginal magnitudes), although suited to changing the scale of externality-generating activities, is not in general appropriate for discrete choices (inframarginal problems). Instead, the network effects diagnosed in this literature pose problems of transition, a problem of coordinating movement from one equilibrium to another. [Liebowitz and Margolis, 1994, p. 8]

However, the importance of discrete and group choices may be overstated in some analyses, especially in the context of the hypercommunications policy environment in Chapter 5, where the conventional marginal orientation remains important.

Often overlooked is the distinction between inframarginal and marginal analyses of network effects, which is especially important with lumpy path dependency problems. In such cases, solutions may differ depending on how the problem is phrased. In the most easily solved case, the objective is to find the optimum equilibrium given that a particular path has been chosen. In a frequently encountered case, the objective is to find the optimal path. A commonly used example is whether the QWERTY typewriter keyboard was in fact the most efficient way to arrange the keys. At this stage, even if QWERTY were found to be far less efficient than other keyboard configurations, it is chosen simply because of its gigantic installed base. There are simply so many keyboards and typists trained in the QWERTY method that adjustment costs swamp the benefits of switching methods. Path dependencies include operating systems such as Windows, applications such as the MS Internet Explorer browser, and DCE and DTE. Potentially, poor software or hardware development paths are the largest negative network effects.
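The installed-base logic can be put into rough arithmetic: switching standards pays only if the discounted stream of efficiency gains exceeds the one-time cost of converting the installed base. The dollar figures, horizon, and discount rate below are hypothetical and exist only to show how easily conversion costs swamp modest per-user gains.

```python
def switching_worthwhile(users, annual_gain_per_user, conversion_cost_per_user,
                         discount_rate, years):
    """True if the present value of per-user efficiency gains over the horizon
    exceeds the up-front cost of converting the whole installed base."""
    pv_gain = sum(annual_gain_per_user / (1 + discount_rate) ** t
                  for t in range(1, years + 1)) * users
    total_conversion_cost = conversion_cost_per_user * users
    return pv_gain > total_conversion_cost

# Hypothetical: a $40-per-year efficiency gain versus $600 of retraining and
# replacement per user, evaluated over ten years at an 8% discount rate.
print(switching_worthwhile(users=1_000_000, annual_gain_per_user=40.0,
                           conversion_cost_per_user=600.0,
                           discount_rate=0.08, years=10))
```

In practice the installed base also adds the coordination problem the Liebowitz and Margolis passage emphasizes: even when switching would pay collectively, no individual user gains from moving alone.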

3.7.5 Implications for Agribusiness from Network Economics

There are several implications from the network economics literature for agribusiness.

The first concerns equity and competitiveness. Specifically, small firms could be disadvantaged as Macdonald mentions:

Denied access to the major information networks in their industries, small firms are forced to incur the expense and confusion of tapping into a profusion of specialist networks, to be content with the service provided by the public network, or to continue to rely on older forms of information transfer. None of these courses is likely to give them greater competitive advantage. [Macdonald, 1992, p. 65]

Larger agribusinesses may be in a better position to benefit from a networked economy than their smaller competitors. For certain industries where numerous smaller firms predominate (such as production agriculture), the networked economy could have profound implications for structure because new, larger organizational forms could predominate, forcing lower technology operations out of business.

However, such new organizational forms may have positive, negative, or mixed effects on small agribusinesses or producers. New technologies and better communication are spawning many joint ventures, coalitions, and licensing arrangements so that allied firms may cooperate in new ways both horizontally and vertically. Baarda's idea of the transgenic firm is one such example [Baarda, 1999]. The idea of the transgenic firm arose in biotechnology, where developers of new transgenic plants and animals sought a new vertical legal arrangement to protect their intellectual property from horizontal spread to non-paying customers. Previously, seed manufacturers or breeding operations may have been willing to sell farmers or ranchers seed or semen without restriction. Now, they offer restricted use of patented life forms, which the buyer does not own, for a single planting or breeding cycle. The owner of the patent required a new legal form of control to prevent unauthorized replication of patented material.

In the same way, information providers try to protect their intellectual property from unauthorized duplication and distribution to parties who have not paid for it. Obviously, information cannot be bred to produce future generations of information the way that a genetically engineered plant or animal can. However, producers of weather reports, farm advice, and marketing research have a similar desire to avoid unauthorized replication of their intellectual property in the very short run, rather than the short run or long run. Ciborra argues this is no surprise in either case since coalitions "will be more frequent during periods of rapid and significant structural change in an industry." [Ciborra, 1992, p. 93] Indeed, the Internet, e-commerce, and hypercommunications in general have led to the establishment of new kinds of business arrangements such as affiliates, strategic partnerships, and other co-operative forms of governance that fall far short of mergers and acquisitions.

In general, agribusinesses will use hypercommunications networks when it is worthwhile to do so. Lawton (1997) provides six guidelines as to when a business will adopt or join a network. The first guideline is: "A network will only be used by a firm if it is the least costly alternative for the delivery of a particular service or set of telecommunications services" [Lawton, 1997, p. 139]. The truth of this statement seems tautological until it is realized that it applies to a full-information case in a competitive market.

It can be easily asserted that, like other non-information sector firms, most agribusinesses (especially firms with fewer than fifty employees) may not know many technical characteristics regarding network services and technologies. It may be true that an IP CTI VPN (an advanced converged network discussed in 4.5.4, 4.7.2, and 4.9.7) could replace the office telephone system for a much lower cost with tremendous positive network effects. However, the technical details necessary to understand all that this entails (and especially how reliable the new network will be) require enormous effort to absorb.

The second guideline to economically predict the adoption of communication networking technology is: "A firm will build, rent, or otherwise obtain its own facilities-based network when to do so is less costly than the use of existing commercially available networks" [Lawton, 1997, p. 139]. Again, the agribusiness may be left at a disadvantage because of knowledge. However, some large and small agribusinesses have already created their own hybrid networks such as on-premises unlicensed spectrum wireless or other informal solutions. Most firms have neither the financial nor technical resources needed to create their own networks, even though this is technologically possible in a virtual sense.

The third guideline states that "A network is the least cost alternative for the delivery of certain telecommunications alternatives" [Lawton, 1997, p. 139]. This guideline summarizes one of the central lessons of Chapter 3. Beginning with line consolidation and hierarchical routing and leading up to packet-switching, multiplexing, and virtual circuits, the chapter has traced how networking technology has been able to cram more information onto less wiring over greater distances at lower costs. On the surface, the idea seems counter-intuitive for agribusinesses as well as economists. However, the more educated an agribusiness becomes, the greater the chance that it will realize that if some networking is good, more networking can be better.
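The "more information onto less wiring" claim behind this guideline can be illustrated with a statistical-multiplexing sketch: when many bursty sources share one link, the capacity needed to keep the chance of overload small grows far more slowly than the sum of the sources' peak rates. The number of sources, their activity probability, and the overflow target below are assumptions for illustration.

```python
import math

def capacity_for_overflow_target(sources, peak_rate_kbps, activity, overflow_prob):
    """Smallest shared capacity (kbps) such that the probability that more than
    capacity / peak_rate sources transmit at once stays below the target.
    Sources are modeled as independent on/off senders (a binomial tail)."""
    def prob_more_than(k):
        return sum(math.comb(sources, j) * activity ** j * (1 - activity) ** (sources - j)
                   for j in range(k + 1, sources + 1))
    k = 0
    while prob_more_than(k) > overflow_prob:
        k += 1
    return k * peak_rate_kbps

n, peak, p = 50, 64.0, 0.1   # 50 bursty sources, 64 kbps peak, active 10% of the time
shared = capacity_for_overflow_target(n, peak, p, overflow_prob=0.001)
print(f"Sum of peak rates: {n * peak:.0f} kbps")
print(f"Shared capacity for a 0.1% overflow chance: {shared:.0f} kbps")
```

Under these assumed figures the shared link needs only a fraction of the aggregate peak capacity, which is the same economic logic, in packet form, that line consolidation applied to voice trunks.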

The fourth guideline reads "A point-to-point network or sub-network is the least cost alternative for the non-ubiquitous delivery of certain telecommunications services." [Lawton, 1997, p. 140] A business may be able to save money and gain security by shrinking the connections made to some machines, databases, or stations. Some kinds of sensitive, highly confidential traffic for businesses such as payroll data, customer data, and orders do not gain anything by being transported over inexpensive multi-level inter-networks. While connection charges can be lowered, there are cases where the most expensive connection will be the least expensive in terms of overall economic costs when risk is considered.

The fifth guideline is "All services use the network in order to obtain the network surplus" [Lawton, 1997, p. 141]. The theoretical truth of this statement is masked by lack of information, unequal infrastructure development, and the fact that, in the marketplace (not counting certain rural areas), multiple carriers peddle multiple services on multiple networks.

The full information argument and competitive marketplace could also be analyzed from the hypercommunication vendor's point of view. It is to the carrier's benefit to see that customers are confused enough by the details of communications networks that they hire the carrier as their agent and not just as a service provider. That way, separate bills for separate networks for separate services will continue to be the rule. Even as carriers see that it is cheaper for them to merge some networks, bills, and services, many carrier personnel are confused by the array of technical details and rare situations that arise in telephony, data networking, and the Internet.

The sixth and last guideline that explains the economic conditions favoring business network adoption is that "A network is integrated and indivisible" [Lawton, 1997, p. 143]. This guideline should give the agribusiness pause. While many of the other guidelines can be interpreted almost as positive commands for agribusiness managers to join new kinds of networks or more powerful networks, beneath the positive integration and indivisibility of this guideline lurks genuine danger. Specifically, there are concerns about security, reliability, and dependency. These are addressed throughout Chapter 4.

3.8 Summary

Chapter 3 has discussed why hypercommunications have originated by tracing the technical properties of communication networks. As the PSTN and private data networks developed, the engineer's view of networks as an efficient service delivery mechanism serving passive subscribers was emphasized. However, the combination of OR's technical capacity to optimize converged networks and the reasons why hypercommunications originated from the information economy (covered in Chapter 2) created a new network economics. Instead of replacing conventional economics or repudiating engineering economics, the new economics that was born from the technical realities of networks synthesizes past work in technical and economic efficiency with network effects.

While there are many conceptions of networks, the converged hypercommunication network of the near future will rely on interconnected digital transmission of data and information over an inter-network of voice and data sub-networks. Economic implications of how hypercommunications work rely on larger conceptions including fuzzy inter-personal networks and broader macro networks.

Table 3-6 summarizes the results of Chapter 3 by comparing the economic characteristics of several kinds of networks. The telephone access level is not actually part of a network as much as it is a connecting system. Network access remains an important concept when Chapter 4 begins the process of identifying what hypercommunication services and technologies are. As network development progressed, the access loop and even the devices used to communicate over networks became part of the network. However in most cases, access level connections are still paid for separately from other network services. Access connections remain major barriers to high-speed networking, especially in rural areas.

Table 3-6: Comparison of economic network characteristics
Network System ownership Network ownership Connecting nodes
Telephone access NA ILEC owns access loop. Loops are wholesaled to ALECs or IXCs. Retailed to firms Subscriber CPE connects to ILEC, ALEC, or IXC CO
Telephone transport Formerly AT&T, now multiple carriers Inter-network of ILECs, IXCs, ALECs Switches and conduit owned or leased by telco
Centralized computer NA Firm unless time-sharing NA
LAN NA Firm Hubs, other DCE owned by firm
Client-server WAN Telco or ISP Firm owns local portion, leases dedicated access from telco Routers or other CPE DCE
Distributed network Telco or ISP Firm owns local portion, some remote nodes, leases access loop and connection circuit separately Servers, routers, switches, other DCE
Inter-network computer System of telcos, NSPs & ISPs Only local portion owned by firm, access and connection leased separately Servers, switches, gateways, other CPE

The telephone transport network has evolved from a hierarchical system developed by AT&T into interlocking cost-sharing and redistributory networks of separate long-distance and local carriers. As computer networks, digitization, and de-regulation occurred, the transport network became parallel networks of different local and long-distance carriers. Now with the advent of IP transport, the telephone transport network is becoming a pluralistic inter-network that is shared, but not entirely owned by any party. Many carriers do not even have transport facilities.

Several economic generations of computer networks were covered in Chapter 3. These moved from ownable centralized computer and LAN systems to virtual inter-networks that used pluralistic peering arrangements to move traffic over long distances for insignificant unit costs. As Chapter 4 will show, hypercommunication services and technologies rely more on the distributed or inter-network forms than their predecessors. However, ownable CPE networks (such as LANs and telephone systems) are able to interconnect and gain full network benefits.