Services and Technologies
Historically, the telephone network carried public voice communications, private corporate networks delivered data, and broadcast networks delivered video. Each of these services was coupled with a specific form of infrastructure, such as copper pairs for telephone or coaxial for cable TV. Digitalization of voice, data, and video information . . . has allowed traditional boundaries to be crossed relative to services being provided. In other words, a single facility carries voice, data, and video. . . . With the digitization of information, the network needed for transport requires only digital transmission capabilities. Legacy networks designed for specific technologies are in the process of transformation to allow provision of all types of services. [Weinhaus, Stevens, Makeeff et al., 1998, p. 26]
The objective of Chapter 4 is to show what hypercommunication technologies and services are. However, distinguishing hypercommunication services from hypercommunication technologies can be a difficult task. As Weinhaus, Stevens, Makeeff et al. noted above, separate legacy (existing) communications networks are evolving into unified digital networks capable of transmitting all kinds of hypercommunication services, a process known as convergence. Convergence complicates the task of differentiating hypercommunication services from the network and delivery technologies that provide them.
Charles Sirois explains three trends that are altering traditional distinctions between services and technologies:
First, there is a deregulation trend. . . . The second trend is the feasibility of splitting telecommunications infrastructure from services. Where this occurs, it means that for every dollar spent on telecommunications services, the infrastructure portion will be smaller. In the past, the infrastructure was the service when you heard the dial tone after picking up the telephone. What . . . telcos provided was mainly a pair of copper wires and a number of switches. The infrastructure was the service. The telcos were only in the carriage business. They could not be in the content or message business.
Today it is possible to be a telecommunications service provider without owning one inch of fibre optic cable or even a switch. The necessary hardware can be leased from facilities-based operators like the telcos. So more and more you can have a split between infrastructure and services.
These two trends lead to a third one: fragmentation of the offerings to the end-user. We are not talking here of oligopoly. Rather, in the future there will be hundreds or even thousands of providers of telecommunications services (i.e. content). It will be a world of specialists, focusing on hundreds of niche markets. This is the world of narrow-casting . . . and the many service providers will have a choice of methods of transmission to reach their niches. [Sirois, 1996, pp. 198-199]
These three trends (deregulation, the service-technology distinction, and fragmentation) will become increasingly important to how agribusinesses buy hypercommunications. In this Chapter, the service-technology distinction and the fragmentation of offerings to business customers are covered. Regulatory issues (especially important to agriculture and rural areas) will be considered in Chapter 5.
Agribusinesses buy hypercommunication services from hypercommunication suppliers. However, if Sirois is right, in spite of convergence, "fragmented" services and prices (which vary by network technologies and content) will confuse agribusinesses. Furthermore, carriers use many different hypercommunication technologies to reach the locations agribusinesses need for communication. These technologies include transmission technologies, infrastructure technologies, interconnection-facilitating technologies, and voice-data consolidation technologies. Each technology, in turn, may be composed of hardware, software, conduit, and protocols.
Specific technologies may seem unimportant to agribusiness managers when they need no technical knowledge of the underlying network in order to communicate. However, the strengths and weaknesses of particular technologies become important if there are frequent interruptions (downtime), if costs skyrocket, or if rural areas will not be served. Additionally, agribusinesses can choose technologies tailor-made to specific strategies. Finally, agribusinesses also rely on capital purchases of hypercommunication CPE (Customer Premises Equipment). CPE technologies include computers, faxes, telephone systems, and other devices. CPE must be compatible with the existing equipment and software of both the agribusiness and its hypercommunication vendors. Additionally, an agribusiness may need hypercommunication services such as programming, web design, Internet promotion, repair, technical support, or network planning.
Basic knowledge of what hypercommunication services and technologies are could help an agribusiness manager simultaneously save money, improve communications with existing customers, plan future needs, and reach new customers. This Chapter provides descriptions of the major hypercommunication services and technologies that are available (or shortly will be) to Florida agribusinesses.
Organization of Chapter 4 is straightforward. Section 4.1 covers ways that hypercommunication services and technologies are converging into a single market. Section 4.2 discusses QOS (Quality of Service) and the many popular definitions of bandwidth. Bandwidth, data rate, throughput, and delay are often lumped together under a popular definition of bandwidth, even though each is a distinct QOS metric that objectively appraises a separate aspect of hypercommunications quality and quantity. The next sections differentiate hypercommunication transmission technologies between wireline (4.3) and wireless (4.4). Section 4.5 touches on hypercommunication support services, facilitation, and consolidation (convergence-enabling) technologies. Then, hypercommunication services and technologies related to specific services are divided into four sub-markets: traditional telephony (4.6), enhanced telecommunications (4.7), private data networking (4.8), and Internet (4.9). The Chapter concludes with a summary comparing the usefulness of transmission technologies and service groups to agribusiness (4.10).
Chapter 4 uses technical sources (articles, texts, white papers, and standards) to provide overviews of the three global technological areas and four specific service sub-markets. Further details came from trade publications and informal discussions with industry sources. Where possible, deployment of services was examined firsthand in businesses around Florida. History provides the basis for constructing the three global technology sections and four specific service sub-markets. Now, each sub-market represents a share of the converging hypercommunications market.
There is a danger in approaching hypercommunications through historically defined sub-markets because that view emphasizes regulated monopolies such as telephone ILECs and cable TV providers. Both federal and state regulatory environments have changed dramatically in the past five years. Thinking about separate services and technologies (using either the interpersonal or mass communication model alone) does not apply now as it did when the 1934 Communications Act (the organic legislation to the 1996 TCA) became law. In the year 2000, the hypercommunications infrastructure consists of the PSTN, the Internet backbone, and private data networks, along with "dark fiber" and other landline infrastructure (such as cable television systems). Additionally, terrestrial and satellite wireless networks form part of the hypercommunications infrastructure.
4.1 Hypercommunications Convergence
Although the hypercommunication model is replacing the formerly separate interpersonal and mass communication models with a myriad of interconnected choices, convergence is not yet a reality. Hence, while there is some danger in using sub-markets to characterize what hypercommunication services and technologies are, Chapter 4 is best organized around the current structure of hypercommunications. Convergence is the process that will modify today's sub-markets into tomorrow's converged marketplace. It is therefore appropriate to begin by naming dimensions of convergence that are affecting the technologies and sub-markets that form the basis for the rest of Chapter 4.
Virtually every segment of the world's economy (including agriculture) is affected by the convergence of basic telephony, enhanced telecommunications, the Internet, and private networking into hypercommunications. While the inevitability of convergence is taken for granted, the rate of convergence and the form it will take cannot be. Convergence would happen automatically except for institutional and attitudinal barriers including regulation, competition, speed of diffusion, and competing standards. Converge means "to move, turn, or be directed toward each other or toward the same place" [Webster's New World Dictionary, 1960, p. 323].
In a rapidly expanding marketplace with frequent introductions of new services and technologies, convergence means that further horizontal and vertical integration of the hypercommunication market will occur, with profound implications for individual agribusinesses. Convergence has already been described as a process where two separate communications models (interpersonal and mass) and their separate telecommunications networks evolve into a single hypercommunications model that uses a mesh of networks. This view matches Alan Stone's definition of "boundary problems" due to the clash of old and new technologies. According to Stone, the first boundary problem was telephone-radio, followed by telephone-computer, TV-radio, etc. [Stone, 1997]. Instead of entirely replacing the old technology, such clashes can lead to new uses for it. For example, radio was expected to die because of television, and movie theatres were expected to disappear because of VCRs. In both cases, the market was able to absorb the new technology without eliminating the old.
However, because of the diversity of hypercommunication services and technologies it is difficult to speak of boundaries in the same breath as convergence. Instead of discrete boundaries, convergence has analog dimensions. For agribusiness, hypercommunications convergence has five similar dimensions, each based on a clash between old and new. Each dimension is important to agribusiness because the result of the clash will determine capital costs and variable expenses.
The first dimension of convergence is device-device convergence, the convergence of previously differentiable electronic hardware devices into multi-purpose units. The telephone, fax, television, radio, and computer are beginning to converge together into DTE (Data Terminal Equipment), or multi-functional user devices. As one newspaper article put it:
Convergence is the coming together of computer, broadcast, telecommunication and entertainment technologies. It fulfills the ultimate promise of the information revolution--the ability to receive [and process, store, manipulate] any information anywhere with a single [intelligent] terminal. [The Globe and Mail (Toronto, Ontario), June 26, 1994, p. B4]
Already, employees of an agribusiness may use several separate communications devices (telephone set, fax machine, desktop computer) or sit at a CTI (Computer Telephone Integration) station with fax, e-mail, computing, Internet, video, and voice capabilities. George Gilder argues that convergence is being driven by "the onrush of computer technology invading and conquering" the traditionally distinct domains of television, films, consumer electronics, telecommunications, publishing, and games [Globerman, Janisch, and Stanbury, 1996, p. 212].
The main question is whether the computer will completely replace the office telephone, fax machine, and copier. Device-device convergence is becoming increasingly important as agribusinesses equip offices and train employees. While there can be substantial savings to capital budgets from purchasing one device to do the job several previously did, a dangerous dependency on a single technology or connection can occur unless there are redundancies. For example, unlike a computer terminal, an individual telephone set does not become useless every time a spreadsheet crashes, nor does it suffer downtime during every LAN outage.
A second dimension of convergence is content-carrier convergence. Content-carrier convergence is also called confluence:
What is widely called convergence (but should more properly be called confluence) has resulted in the blurring of the traditional distinctions between telecommunications and broadcasting, and between content and carriage. [Globerman, Janisch, and Stanbury, 1996, pp. 212-213]
MS-NBC is one example. A broadcast carrier (NBC) and an ISP-software giant (MSN, Microsoft WebTV) combined through a "strategic alliance" to create editorial content, advertising tie-ins, and common e-commerce opportunities designed to meld formerly diverse content services into commonly controlled programming and distribution. The Time Warner-Turner Broadcasting-AOL-Netscape mega-merger is another example. When a hypercommunications carrier creates content under content-carrier convergence, it can prevent or outmaneuver other firms from offering content or access. One example of how this works is through web portals. A web browser can be programmed to go to a particular home page (portal) upon session initialization. Wireless devices are factory programmed to exclusively access portals where the carrier has control over content. If carriers obtain enough market power to lock out competing content providers, consumers have fewer choices and agribusinesses can become dependent on a single source for news, information, and communications access. For some rural areas, the situation is already familiar. In many local areas of Florida, AOL is the only possible way to obtain Internet access, although this is less true than it once was.
The third dimension of convergence, carrier-device convergence, occurs when the hypercommunications carrier and the communications device used to communicate become the same. An extreme example of carrier-device convergence was used by pre-breakup AT&T. Telephone customers had to lease telephones from AT&T that were manufactured by AT&T's Western Electric subsidiary. A recent consumer action against AOL is another example. The plaintiffs charge that once Internet Explorer for AOL was installed, the computer could no longer be used to access a competing OSP or ISP if the consumer switched providers. The inter-relatedness of hypercommunications technologies can mean that a service provider will be unwilling or unable to provide services if an agribusiness already owns a certain make of equipment or is located in a particular area. Substantial switching costs to change providers can occur in this way.
In the fourth dimension, regulatory convergence, as separately regulated industries come together, so will taxes, regulations, and governmental policy. Quoting George Gilder again, Globerman, Janisch, and Stanbury state what is behind regulatory convergence:
Also, 'convergence assaults century-old regulatory rules that have kept telecommunication and broadcasting in separate legal solitudes. The old distinctions were based on the types of wire, of radio signals, of information and of companies. These barriers no longer make sense.' In fact, convergence would not be a public policy issue if it did not cause conflict among previously separate regulatory regimes, telecommunications, and broadcasting. [Globerman, Janisch, and Stanbury, 1996, p. 213]
Because of delays introduced by the lobbying, legislative, regulatory, and legal processes, market adjustments typically occur more quickly than regulatory action, which can render such action superfluous. However, regulation may inhibit convergence from occurring or prevent service providers from entering rural areas until regulatory issues are resolved. Other areas of regulatory convergence such as taxation are covered in Chapter 5.
The regulatory task becomes more difficult as new services and technologies (especially the unregulated Internet) create new regulatory territory.
Convergence also includes the ability to combine several technologies to produce new modes of communication. For example, the Internet or (network of networks) combines computers (largely PCs), modems, specialized software, existing telephone lines, packet switching, and a universal transfer protocol (TCP/IP). The result has been a very rapidly growing mode of communications, which in the past two or three years began to evolve from being text-based to sound (including a crude form of telephony) and video. When broadband capacity cable or wireless replaces the twisted pair of copper wires over the last mile, the Internet may be the epitome of convergence. [Globerman, Janisch, and Stanbury, 1996, p. 212]
The fifth dimension of convergence is competitive convergence (market convergence). This dimension is defined through market structure, conduct, and performance. When firms merge or technologies converge, the number of firms is reduced. Hence, convergence
refers to any break-down of previous technological barriers among computer software, telephone, cable and entertainment industries. Those barriers arose from the limitations inherent in pre-computerized analog signaling. Convergence is a technological phenomenon, but its wider consequences are being felt by individual businesses, by regulators, and by consumers. [Globerman, Janisch, and Stanbury, 1996, p. 212]
There is an inherent tension between the rates of growth of pro-competitive influences and anti-competitive influences. Pro-competitive influences (deregulation, elimination of regulatory monopolies, uniform taxation across sub-industries, IPOs, spin-offs, interconnection, etc.) stand opposed to anti-competitive influences (technical convergence, mergers, re-regulation, and non-uniform taxation across sub-industries).
The changes convergence is bringing promise to be truly revolutionary according to Gilder:
'The computer industry is converging with the television industry in the same sense that the automobile converged with the horse, the TV converged with the nickelodeon, the word processing program converged with the typewriter, . . . and digital desktop publishing converged with the linotype machine and the letterpress.' [Quoted by Globerman, Janisch, and Stanbury, 1996, p. 212]
However, there is a difference of opinion about how soon the convergence revolution will occur, as shown by two Internet Week headlines: "The One Pipe Approach Gains Momentum" and "Convergence Reality Check: Voice Data Unity Still a Pipe Dream". In the 1999 convergence reality check article, an Internet Week survey found that only eleven percent of IT managers surveyed already had converged networks, while sixteen percent planned to unify their voice-data networks within one year. However, fifty-one percent had no convergence plans or were planning to wait at least three years.
An important barrier to convergence is the lack of SLAs (Service Level Agreements) or signed contracts between buyer and seller setting out QOS guarantees. "The realization of a multiservice utopia will not happen until service guarantees become a fundamental part of the convergence landscape" [Morency, 1998, p. 23]. The increasing importance of hypercommunication SLAs means agribusinesses need to be intimately familiar with bandwidth and other QOS metrics that establish SLA terms. Bandwidth and QOS metrics are the weights and measures of hypercommunications, but are hardly as standardized as those agribusinesses are used to in other markets.
4.2 Bandwidth and QOS (Quality of Service)
No hypercommunications term is subject to more confusion or used more frequently than bandwidth. While an understanding of bandwidth is important to agribusinesses hoping to use hypercommunications services and technologies, bandwidth is not a complete description of the speed, overall quality, or value of a particular service or technology. Bandwidth-related measures are most often used to define and price hypercommunications services and technologies. However, excess reliance on one term obscures a more comprehensive set of characteristics called QOS (Quality of Service) metrics that are more important to agribusiness hypercommunication strategies. This section is designed to be a practical guide for agribusinesses hoping to understand bandwidth and other QOS so as to become more informed hypercommunication buyers.
This section and the following three (4.3 through 4.5) apply generally across the specific service and technology sub-markets discussed in the last half of the Chapter, from 4.6 through 4.9. To provide the right technical foundation, large amounts of supporting technical detail are presented here in section 4.2. Readers who are already familiar with signal conversion, how modems work, and the subtleties of bandwidth, throughput, and data rate may go directly to section 4.2.3. There, a general QOS model is introduced to provide an analytical framework for agribusiness hypercommunication networks. The reference model integrates QOS metrics and bandwidth with material from Chapter 3, such as the three core engineering problems of communication networks and the four technical objectives of network managers.
Other readers may need to take advantage of all four sub-sections in 4.2, being aware that tables 4-1 through 4-4 and figures 4-1 through 4-22 trace the main points, with the text available for additional support. In the first sub-section (4.2.1), bandwidth and five other QOS characteristics most often confused with bandwidth are discussed. The critical distinction between signals and messages and the difference between operational speed and capacity are explained. Then, a practical example of computer modem communication (4.2.2) underscores bandwidth's separateness from other QOS metrics. The last section (4.2.4) presents agribusinesses with a thumbnail sketch of the QOS dimensions used in buying and selling hypercommunication services and technologies.
4.2.1 The Relationship between Bandwidth and Speed
For several reasons, bandwidth has taken on an imprecise meaning that goes beyond its specific technical definition as a capacity measure (as introduced in Chapter 3). One reason for this is that the technical nature of hypercommunications tends to be confusing. Technical terms such as bandwidth become confused with closely related technical terms. A second reason for the overuse of bandwidth is that bandwidth is the unit most often used to price hypercommunications services and to describe infrastructure, networks, and individual connections. This multiplicity of uses causes instant misunderstanding since bandwidth is variously used as a stock measure, a flow measure, an accounting cost, and a capacity constraint. In short, bandwidth is used popularly to compare the speed, capacity, reliability, and quality of hypercommunication services, carriers, and technologies.
Furthermore, there is a lack of uniformity in "expert" opinion about what bandwidth is. As more sources are consulted, more variations in definition are encountered. For example, a well-received book aimed at business MIS and telecommunications managers defines bandwidth as "the speed with which data travels, measured in bits per second (bps)" [Bezar, 1995, p. 75]. However, in Bezar's glossary bandwidth becomes:
A term defining the information carrying capacity of a channel--its throughput. In analog systems, it is the difference between the highest frequency that a channel can carry minus the lowest, measured in hertz. In digital systems, the unit of measure of bandwidth is bits per second (bps). The bandwidth determines the rate at which information can be sent through a channel--the greater the bandwidth, the more information that can be sent in a given amount of time. [Bezar, 1995, p. 421]
The distinction between digital and analog discussed in 3.2.1 helps explain the multiple meanings of bandwidth.
However, the meaning of bandwidth also depends on the relationship among bandwidth, bits, and speed, a complex recipe with three main ingredients. First, a hypercommunications message (voice, data, or video) and the signal that carries it are different entities. Signals can be analog or digital while message content also can be analog or digital. Second, signal transmission may use broadband technology, carrierband technology, or baseband technology. Third, there can be a difference between the upstream and downstream rate and capacity (directional asymmetries) for a variety of scientific and technical reasons. These ingredients need to be examined before QOS can be understood.
The first ingredient is the distinction between signal domain and message content. The signal-message distinction is rooted in Shannon's mathematical theory of communication, shown in Figure 4-1 [Shannon, 1948; Shannon and Weaver, 1949].
The information content of a message and the domain of the signal are two distinct entities. Messages may have either analog or digital information content (source domain) depending on the equipment (DTE) at the information source and destination. Information sources "represented by a physical quantity that is considered to be continuously variable and has a magnitude directly proportional to the data" are analog [GSA, FED-STD-1037C, 1996, p. A-14]. Analog message content refers mainly to voice telephone calls, though it can include audio, graphics, video, and readings from scientific sensors (such as pressure, temperature, and position). Digital information sources are "represented by discrete values or conditions (or) are discrete representations of quantized values of variables, e.g., the representation of numbers by digits perhaps with special characters and the 'space' character" [GSA, FED-STD-1037C, 1996, p. D-18]. Digital message content includes almost all data communications and Internet traffic. The advent of digital cameras, digital image scanners, CDs, etc. has created digital replacements for previously analog sources.
The signal is an electric current or electromagnetic wave used to carry an encoded representation of the message from the transmitter to the receiver. Signals may be digital or analog depending on the transmitting and the receiving equipment DCE (Data Communications Equipment) on each end as shown in Figure 4-1. According to Hill Associates, "Signaling is analog if the signal transmitted can take on any value in a continuum . . . Signaling is digital if the signal transmitted can take on only discrete states" [Hill Associates, 1998, p. 303.1.3, italics theirs]. Switches, routers, and other kinds of intermediate DCE must be in the same domain as the signal or signal conversions must occur. If necessary, signals are decoded into the appropriate source domain before reaching their destination.
Digital signals offer many advantages over analog ones. For example, many separate digital signals may be interleaved and sent together to permit several separate conversations on a single line (multiplexing). Digital messages can be encrypted before their transmission as signals to prevent eavesdropping and to provide security. Often, severely degraded digital signals may be reconstructed, providing perfect copies of the original source. However, excess interference can prevent an entire digital signal from reaching its destination, while distorted analog signals would be garbled but received under similar conditions. Digital signals require more bandwidth than analog signals do, but they are still less costly to transmit [FitzGerald and Dennis, 1999].
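The multiplexing advantage can be illustrated with a short sketch (in Python, using hypothetical bit strings): bits from two separate digital conversations are interleaved onto one line and separated again at the far end, a simple form of time-division multiplexing.

```python
def multiplex(stream_a, stream_b):
    """Interleave two equal-length bit strings, alternating one bit from each."""
    return "".join(a + b for a, b in zip(stream_a, stream_b))

def demultiplex(line):
    """Recover the two original bit strings from the interleaved line."""
    return line[0::2], line[1::2]

# Two digitized conversations share a single line without mixing content.
voice_bits = "1011"
data_bits = "0100"
shared_line = multiplex(voice_bits, data_bits)   # "10011010"
recovered = demultiplex(shared_line)             # ("1011", "0100")
```

Real multiplexers interleave fixed-size frames rather than single bits, but the principle of sharing one physical line among several digital signals is the same.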
Table 4-1 shows the four combinations of digital or analog information content and analog or digital signals. Since each combination has different physical characteristics, each kind requires specialized hardware. These combinations also apply to signal-signal conversions and intermediate transformations that can occur in the transport level of a communication network, especially over long distances. Table 4-1 and Figure 4-1 show that Shannon's (1948) "information source" and "destination" are now known by the more general term, DTE. In a purely discrete system DTE are digital. In a purely continuous system DTE are analog. In a mixed system, DTE can be digital or analog.
Common digital DTE devices include computers and certain digital telephones. These devices send and receive digital messages whether text files, e-mails, voice mail, or graphics and video. Common analog DTE devices include most telephones, microphones, speakers, and certain scientific instruments.
The receivers, transmitters, and switches of Shannon's communications system also are generalized by the term DCE (Data Communications Equipment or Data Circuit-terminating Equipment). DCE are specific to the signal domain (analog or digital) while DTE depend on the message domain as well. The two need not be separate devices from the user's point-of-view.
DCE transmit and receive each end of a hypercommunication (and often in between) in the appropriate signal domains. A common DCE example is a computer modem. A modem is a transmitter that modulates digital content into an analog signal on one end and a receiver that demodulates analog signals back into digital form to reach the destination. Similar DCE devices exist for other combinations of signal domain and source domain. For example, codecs (coder-decoders) are circuits (or software) serving as built-in DCE used to make conversions within digital DTE.
Therefore, in some cases such as telephones (both analog and digital), DTE and DCE are in the same device. For example, an analog telephone contains a microphone to capture the analog voice source that is then converted by a transducer into electronic waves that are sent as analog signals.
Figure 4-2 (direction of communication read from left to right) depicts signal conversion or transformation for the four cases alluded to in Table 4-1. In a purely continuous system (upper right, Figure 4-2), analog messages are modulated onto analog signals (continuously varying representations of the source message) over a carrier wave as in the cases of POTS and traditional broadcast radio and TV. A carrier is a signal with known characteristics that is modulated to carry information. Using a carrier, the receiver can extract the message because it knows the characteristics of the carrier wave. However, noise or unintended changes to the carrier will be interpreted as part of the information. The bandwidth (transmission capacity) of channels over which analog signals pass is the difference in Hertz (cycles per second) between the minimum and maximum frequencies.
A purely discrete system (shown in the lower right of Figure 4-2) features a digital source carried by a digital signal (coded word representation of the source message), as in the case of Ethernet, ISDN, and DSL. With digital signal transmission, bandwidth is expressed as a bit rate (bits per second, bps); for analog-analog transformations, bandwidth is expressed in Hertz (Hz, cycles per second).
However, for the other two cases (mixed systems), in which conversion (rather than transformation) occurs, it is often unclear whether bandwidth is better expressed in bps or Hz. In DAC (Digital source to Analog signal Conversion), shown on the lower left of Figure 4-2, the digital domain of the message is mapped onto an analog signal domain in order to reach the destination. In ADC (Analog source to Digital signal Conversion), shown in the upper left of Figure 4-2, the analog domain of the message must be mapped onto a digital signal domain. In such mixed systems, both Hz and bps may be used to describe bandwidth, depending on the context.
In ADC, analog messages may be converted into digital signals through codecs (coder-decoders) to become signal pulses. Typically, these pulses are modulated over a carrier pulse as in the case of digital wireless mobile telephone. Computer modems and cable modems perform both ADC and DAC. A modem modulates digital data (messages) into analog signals when it sends and demodulates analog signals to digital data when it receives. There can be several such signal conversions at intermediate DCE between the sender and receiver.
Analog signal modulation refers to alterations made by DCE in the characteristics of analog carrier waves, impressed on the amplitude (signal strength), phase, and/or the base frequency of the wave. Analog signals (continuous waves) may be modulated (coded) using several methods, including AM (Amplitude Modulation), FM (Frequency Modulation), PM (Phase Modulation), and QAM (Quadrature Amplitude Modulation).
Digital signals (discrete pulses) are modulated (encoded) through various kinds of PCM (Pulse Code Modulation) such as PAM (Pulse Amplitude Modulation), PDM (Pulse Duration Modulation), and PPM (Pulse Position Modulation). Digital signals are aperiodic (non-repeating patterns), so a single frequency cannot be used to describe them. However, a digital signal can be approximated by an n-th order series of harmonic sine waves, each with its own amplitude, phase, and frequency. The minimum significant spectrum is the minimum frequency range (frequency spectrum) needed to represent the original signal through its n-th order harmonics. The bandwidth of a signal is the width of the frequency spectrum it occupies.
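The harmonic approximation just described can be made concrete with a short sketch. The Python fragment below is illustrative only; the function name, fundamental frequency, and sample point are hypothetical choices, not drawn from the text. It builds an n-th order Fourier approximation of an ideal square wave, the simplest digital pulse train:

```python
import math

def square_wave_approx(t, n_harmonics, f0=1.0):
    """Sum the first n odd harmonics of the Fourier series for a +/-1 square wave."""
    total = 0.0
    for k in range(1, n_harmonics + 1):
        n = 2 * k - 1  # only odd harmonics appear in a square wave's spectrum
        total += math.sin(2 * math.pi * n * f0 * t) / n
    return 4 / math.pi * total

# At t = 0.25 (the middle of a "high" pulse) the ideal value is +1.
# Adding harmonics tightens the approximation toward that level.
for n in (1, 3, 10, 100):
    print(n, round(square_wave_approx(0.25, n), 3))
```

Truncating the series at too low an order is the numerical analogue of a channel whose bandwidth is smaller than the minimum significant spectrum: the receiver sees a rounded, distorted version of the original pulses.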
When expressed in Hertz, the bandwidth of an analog medium for a digital signal provides a range of frequencies that can be transmitted. If this range is smaller than the minimum significant spectrum, more bandwidth is needed to allow the receiver to reproduce the original source signal.
When the end-to-end observed speed of communication between sender and destination DTE is considered, other characteristics beyond the bandwidth (capacity) of a single link or the bit rate of a given DCE are involved. These characteristics (summarized in Table 4-2) include throughput, data rate, delay, and jitter. Importantly, the focus of the first four characteristics in the table (each of which is often misleadingly called bandwidth) is primarily on digital bits.
When expressed in Hertz, bandwidth represents the raw capacity of a noiseless analog channel before digital data are imposed. When expressed in bps, bandwidth represents the maximum attainable bit rate in a single direction of a noiseless link with no other traffic. The bit rate associated with a given bandwidth in Hz depends on the baud rate and the coding rate. The sampling (baud) rate is directly proportional to the bandwidth. The coding rate (bits per symbol, the base-2 logarithm of the number of levels in the code) multiplied by the sampling (baud) rate equals the bit rate, the rate at which DCE are designed to operate. Environmental noise can decrease the coding rate, so that the difference between bit rate and bandwidth becomes larger. Noise and quantizing error can further increase errors, so that the gap between data rate and bit rate becomes larger.
The bit rate (operational speed of DCE in bits per second) that can be accommodated by any medium is proportional to bandwidth. Therefore, the greater the bit rate, the larger the bandwidth (capacity of a connection) needs to be. Thus, bandwidth typically exceeds the bit rate, which in turn exceeds the data rate.
The third characteristic, data rate (also known as the data signaling speed) measures "the aggregate rate at which data pass a point in the transmission path", typically expressed in bits per second (bps) [GSA, FED-STD-1037C, 1996, p. D-4]. Since the data rate is an error-free rate from DCE to DCE, it may be less than the bit rate and always below the bandwidth.
The next characteristic, throughput, can be expressed in two ways. Throughput is an end-to-end (DTE to DTE) measure that can be expressed as "pure throughput," a theoretical capacity (maximum attainable bps), or as an observed "throughput rate," an actual observation in bps at time t [Sheldon, 1998, p. 972]. Both pure throughput and the throughput rate (also called effective throughput) differ from bandwidth because they are end-to-end measures of the rate at which user information (not counting retransmission of errors or transmission of overhead) is processed and transmitted. Pure throughput is similar to bandwidth in that it is a capacity, but it is an end-to-end capacity rather than the capacity of a point-to-point access line or transport link. For example, the compression of voice or data files improves throughput but does not change bandwidth, bit rate, or data rate.
Throughput, because it is an end-to-end measure that includes compression but excludes error and overhead, cannot be compared directly to bandwidth, data rate, or bit rate. However, due to compression and in spite of overhead, an observed throughput rate (sometimes called information rate) often is greater than the data rate, bit rate, or even the bandwidth of a particular network segment between the sender and destination. Throughput is the speed measure experienced most by users.
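A small numeric sketch clarifies how compression lets an observed throughput rate exceed the bit rate of the line. The Python fragment below uses assumed values (a 2:1 compression ratio on a 1 MB text file; neither figure comes from the tests cited here):

```python
def effective_throughput(user_bytes, seconds):
    """End-to-end throughput in bps: user information delivered per unit time."""
    return user_bytes * 8 / seconds

line_rate = 28_800                     # bps the wire actually carries (bit rate)
compressed_bits = 4_000_000            # 1 MB of text after assumed 2:1 compression
seconds = compressed_bits / line_rate  # time the compressed bits occupy the wire
throughput = effective_throughput(1_000_000, seconds)
print(round(throughput))  # 57600 -- twice the line's bit rate
```

The wire never runs faster than 28.8 kbps, yet the user receives information at 57.6 kbps because each transmitted bit represents two bits of user data.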
Delay, another QOS characteristic often erroneously confused with bandwidth, is reflected in throughput. Delay (also called latency) is the actual length of time it takes a bit to travel across a transmission line [Sheldon, 1998, p. 256]. There are three kinds of delay: propagation delay, switching delay, and queuing delay. Propagation delay results from transmission-media variables. For example, copper wire has a higher propagation delay than fiber optic cable, while satellite transmissions have higher propagation delays than line-of-sight wireless transmissions do. Propagation and switching delay do not depend on usage levels. Queuing delay, however, is zero at low use and rises with congestion bottlenecks that result from high network loads (a systemwide ratio of effective throughput to capacity). Taken together, switching and queuing delay are often called throughput delay because they are variable: the number of hops a signal is switched over a network and the overall network load tend to vary from moment to moment.
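The three delay components can be sketched as a simple sum. The values in the Python fragment below are illustrative assumptions (a typical propagation speed in fiber and a nominal per-hop switching delay), not figures from the text:

```python
C_FIBER = 2.0e8  # assumed propagation speed in glass fiber, m/s (about 2/3 of c)

def one_way_delay(distance_m, hops, switch_s=0.0001, queue_s=0.0):
    """Propagation + switching + queuing delay in seconds (illustrative values)."""
    propagation = distance_m / C_FIBER        # fixed by the medium and distance
    switching = hops * switch_s               # fixed per hop, independent of load
    return propagation + switching + queue_s  # queuing grows with congestion

# 1000 km of fiber through 10 switches on a lightly loaded network:
print(round(one_way_delay(1_000_000, 10) * 1000, 2), "ms")  # 6.0 ms
```

Under load, only the queuing term grows; propagation and switching delay stay fixed, which is why the variable portion is singled out as throughput delay.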
Jitter, the variation in delay through time, is another way to indicate quality. Low jitter and low delay are critical to voice conversations and real-time broadcasting. The highest quality connections are those with a tightly distributed average delay of less than 100 ms. Poorer quality connections tend to experience longer delays and greater jitter; satellite transmissions, for example, often average near 350 ms, with a range from 150 ms to 600 ms. Pure throughput assumes no jitter and only propagation delay. Since effective throughput is an actual measure, it can reflect both switching and queuing delays. A series of effective throughput observations is needed to measure jitter.
Before proceeding to a computer modem example that clarifies the entries in Table 4-2, it is important to understand the second ingredient of the complex mixture of speed and QOS, the transmission (or signal) technology. First, the word broadband has to be defined. Broadband is used to refer to a range of capacity (or speed). Broadband also is used to describe a particular analog signal transmission technology. Egan characterizes the speed and capacity connotation as follows:
The term broadband is used to describe a high-speed (or high-frequency) transmission signal or channel. It is the functional opposite of narrowband, which connotes relatively low speed. While large-scale telephone network trunk lines always operate at broadband speeds, local phone lines connecting households and small businesses to the trunk line are limited to narrowband speeds. . . . The transmission speed of a broadband communications channel is usually measured in megabits per second (Mbps). . . . A low-speed narrowband channel, like today's basic phone line is usually measured in kilobits per second (kbps). . . . [Egan, 1996, p. 6]
The FCC offers an "elastic" definition of broadband that covers bandwidths of 200 kbps and over [FCC, October 1999, pp. 16-17]. The difference among broadband, narrowband, and wideband is subjective. Broadband networks most frequently are defined as "capable of multi-megabit speeds" [Sheldon, 1998, p. 112; Kumar, 1995, p. 185] or as those with capacities greater than 2.048 Mbps [Klessig and Tesink, 1995, p. 1]. However, broadband is also taken to mean "any data communications with a rate from 45 to 600 Mbps" [Peebles, Keifer, and Ramos, 1995]. Wideband typically includes capacities greater than the typical (narrowband) analog telephone line of 4 kHz but less than broadband capacity [GSA, FED-STD-1037C, 1996, pp. W-3-4].
Broadband also describes a specific signal technology where signals themselves (rather than bandwidth) are classified as baseband, broadband, and carrierband. This second use of broadband refers to a type of analog signal transmission technology (typically with a digital source domain) using shared lines. Two other signal transmission technologies are baseband and carrierband.
Broadband signal technology sends multiple analog signals over a range of frequencies (in channels, similar to radio frequencies) over shared conduit. Since noise tends to accumulate in such a scheme, amplifiers are used to regenerate attenuated signals. Because multiple channels are available, many messages may travel at once over a broadband transmission link without automatically exhausting available bandwidth (capacity). Broadband signal technologies distribute modulated data, audio, and video signals over coax, twisted pair, or fiber optic cable. Broadband is the easiest way to deliver a common signal to a large group of locations. Without the addition of electronics that limit eavesdropping, it is possible (with the right knowledge and equipment) to intercept broadband transmissions.
Broadband includes networks that multiplex multiple, independent networks onto channels in a single cable. This may be done through FDM (frequency division multiplexing) where two or more simultaneous and continuous channels are derived by assigning separate parts of the available frequency spectrum (bandwidth) to each channel [GSA, FED-STD-1037C, 1996, p. F-17]. Under FDM, signals that are coincident in time are separated in space so that a particular subscriber receives some of the total bandwidth of the connection that serves their location all of the time.
Broadband network technologies allow many networks or channels to coexist on a single cable. Broadband traffic from one network does not interfere with traffic from others because each network uses different radio frequencies, isolating signals by oscillating each at a different frequency as it moves over the conduit. Used in this sense, broadband is the opposite of baseband, which separates digital signals by sending them at timed intervals. A broadband subscriber receives analog signals over his own channel on a shared link, while a group of baseband subscribers takes turns receiving common digital signals over the full capacity of a shared link.
Baseband signals are digital (always having digital source domains as well), requiring all of the available bandwidth in a single, shared connection. Broadband signals are analog (having either analog or digital sources) and move over unshared channels on a shared connection [Hill and Associates, p. 303.1.5, 303.1.7]. Carrierband signals are a hybrid of the two.
Baseband signals are used to connect stations on Ethernet LANs and in localized point-to-point or dedicated circuit communication (such as ISDN and HDSL). Baseband signals are transmitted without modulation on a carrier wave, having been digitally imposed on a single base frequency [FitzGerald and Dennis, 1999]. Baseband transmissions use repeaters to regenerate bi-directional attenuated signals. However, even when using repeaters, baseband transmissions are limited in distance compared to broadband. For this reason, baseband signal transmission is frequently used by CPE networks such as LANs.
Any guided medium can be used for baseband signal transmission. The baseband of a broadband signal is the original frequency range before modulation into a more efficient, higher frequency range. Baseband network technologies use a single carrier frequency range (channel) and require all stations attached to the network to participate in every transmission. However, only one digital baseband signal (using the entire capacity of the channel) is permitted at a time. Time Division Multiplexing (TDM) allows users to share connections by taking turns, so that all of the bandwidth is used some of the time. Under TDM, signals that are coincident in space are separated in time.
Carrierband is a type of baseband technology in which the signal is modulated before transmission over a baseband connection. Standard baseband transmission is un-modulated but is multiplexed to allow multiple transmissions to occupy the path at once. Carrierband technology uses signals that are modulated but not multiplexed, so that the entire bandwidth of the connection is available (for separate uses) to a single subscriber such as an agribusiness. Individual subscribers send and receive a mix of digital and analog signals over a dedicated (un-shared) link that can carry more than one kind of traffic at once. For this reason, carrierband is sometimes called single-channel broadband and is used in HSLNs (High-Speed Local Networks) to link "mission critical" processors to each other or processors to peripherals [Maguire, 1997]. Carrierband technology is also used as a digital access level connection, such as in certain DSL services.
At the access level, cable TV broadband users share capacity for Internet access with dozens to hundreds of other subscribers over a broadband network, while telephone company digital lines such as DSL and ISDN serve individual subscribers only. Agribusinesses are affected differently by each of the three signal technologies. Baseband technologies are commonly part of Ethernet LANs at agribusiness offices. Broadband technologies support services that are offered by cable TV and wireless firms, typically to small offices and farm residences. Carrierband technologies are sold to agribusinesses by ILECs, ALECs, ISPs, and others but are subject to distance limitations that will be covered in Section 4.3.
The third and final ingredient in the complex mixture of capacity, speed, and bandwidth is directionality, or symmetry. For example, the relationship between bandwidth and upload or download bps for a digital source to analog signal conversion depends on both the Nyquist Theorem [Nyquist, 1924, 1928] and Shannon's Law [Shannon, 1948]. Furthermore, various sampling, symbolization, and encoding schemes are used by DCE to modulate the signal in order to impose or pack the message content into an electromagnetic pulse or onto a carrier wave. These concepts can introduce directional asymmetries that receive specific attention in the next example regarding computer modem communications.
4.2.2 Computer Modems: Bandwidth and QOS
An example concerning the ubiquitous computer modem will further differentiate bandwidth from other QOS metrics. Modems are a particular kind of DCE designed only to work with analog signals and digital sources. Indeed, some say that the modem is a DCE that has reached maturity and is not part of the converged future. However, since modems are broadly representative of DCE (while computers represent DTE) the following example gives a simple lesson in bandwidth, speed, and analog-digital conversion that has broader applicability.
The modem is still a dominant form of connection to the Internet and data networking for households and for many agribusinesses, as Figure 4-3 reveals. In 1999, Nielsen NetRatings found that fewer than six percent of U.S. households had Internet access faster than that offered by 56k modems [CyberAtlas, 2000].
Indeed, a majority of households (53%) own modems that are slower than 56 kbps. Additionally, modems that perform at relatively high data rates in urban settings may perform poorly or not at all in rural areas. Thus, the expected obsolescence of the modem will not occur for three to five years for many Florida agribusinesses because of infrastructure problems [FPSC, 1999]. Chapter 5 will provide more details.
The bandwidth of the analog voice channel (telephone line), as shown in Figure 4-4, is 3200 Hz wide (3500 Hz - 300 Hz). This bandwidth was engineered decades ago by AT&T's Bell Labs (when bandwidth was considered an extremely scarce resource) as the absolute minimum needed to adequately represent the human voice.
Natural speech is concentrated between 75Hz and 8000Hz, though the human ear has a range of recognition from 75Hz up to 20kHz [EAGLES, 1997]. When the nationwide Bell System was built, filters were placed to block frequencies above 3500Hz from being carried on local lines. A modem attempts to transmit at the highest operational speed (bit rate) that it can given this bandwidth limitation, the equipment it connects with, and the amount of noise present in the line and introduced in the conversion process.
Analog-digital conversion (ADC) produces line noise as a byproduct and is, therefore, more sensitive to bandwidth limitations than DAC. It is for this reason that the so-called 56k modems (now generally known as V.90 standard modems) have a higher download than upload speed. Before this asymmetry of speed is covered, it is important to understand the steps in each conversion process.
ADC takes a continuous (in time and amplitude) waveform and converts it to a time-discrete, but amplitude-continuous, series of pulses. There are three steps in ADC: sampling, quantizing, and coding. The first step, sampling, begins the process of converting the waveform into pulses by sampling amplitude values every T microseconds (µs). During sampling, the waveform is scaled into sampling intervals, where the sampling frequency 1/T is also known as the modem's symbol rate.
According to the Nyquist Theorem, the symbol rate (also called the baud rate) can be at most twice the bandwidth. So, for example, a 3200 Hz channel could be sampled at up to 6400 symbols per second. Thus T, the size of a single sampling interval (symbol), is 156.25 µs, the minimum constant interval at which the modem samples. In practice, 3200 baud is a commonly used rate because 6400 baud is unattainable. Each sample becomes a separate PAM (Pulse Amplitude Modulation) pulse.
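The arithmetic in this paragraph can be checked directly. The short Python sketch below reproduces the Nyquist ceiling and the sampling interval for the 3200 Hz voice channel (variable names are illustrative):

```python
BANDWIDTH_HZ = 3200  # engineered analog voice-channel bandwidth

max_baud = 2 * BANDWIDTH_HZ      # Nyquist ceiling on the symbol (baud) rate
T_us = 1 / max_baud * 1e6        # one sampling interval, in microseconds
print(max_baud, round(T_us, 2))  # 6400 156.25
```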
The Nyquist Theorem further specifies the levels to be used in the next step of ADC, quantization. Given bandwidth W, the highest signaling rate C in a noiseless channel is given by C = 2W log2 M, where M is the number of coding levels [Maguire, 1997, Module 5, p. 13a]. The number of bits per quantization level is log2 M. In quantizing, digits are assigned to the sampled signals by rounding off (quantizing) the PAM pulses using non-linear companding schemes. When 256 discrete amplitude levels are used to compand the original signal into a quantized sample, eight bits (log2 256) are necessary; a 512-level representation requires exactly nine bits, and so on.
The third step is coding. In coding, the amplitude levels are mapped into bits that can be understood by digital DTE or DCE. The standard coding rate is eight or nine bits. Given the 3200Hz bandwidth and an eight bit coding scheme (based on a 256 level quantization), the highest bps to be expected under the Nyquist Theorem from a noiseless telephone line would be 51.2 kbps.
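The 51.2 kbps ceiling follows directly from the Nyquist formula. A minimal Python check (the function name is a hypothetical convenience):

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """C = 2 W log2(M): max bps of a noiseless channel using M coding levels."""
    return 2 * bandwidth_hz * math.log2(levels)

# 3200 Hz bandwidth with 256-level (8-bit) quantization, as in the text:
print(nyquist_capacity(3200, 256))  # 51200.0 bps
```

With a nine-bit (512-level) scheme the same formula gives 57.6 kbps, but as the next paragraph explains, quantizing noise keeps such rates out of reach in practice.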
However, two forms of noise reduce the maximum rate in practice. The first of these, quantizing noise, results from the second step and increases with the number of levels. Hence, it is not possible to add quantizing levels ad infinitum, or the resulting line noise would cause data rates to fall. Once quantizing noise has been added to the line, it remains there (slowing the rate down) even after digital conversion has been accomplished. This is a particularly limiting case for sending (but not necessarily for receiving) data over a line. To limit added noise, only the most robust levels are used in coding, so that in practice 256 levels may translate to 128 [3Com Corp., 1999, p. 4]. Other forms of noise on lines (a particular problem in rural areas) affect speed in each direction.
Shannon's work on coding led to the development of Shannon's Law, which states that the maximum speed C equals W log2 (1 + (S/N)) bps, where S/N is the un-standardized signal to noise ratio. The signal to noise ratio (SNR) is commonly measured in dB (decibels), where the SNR equals 10 log10(S/N) dB. The 33.6 kbps upload bit rate (maximum operational speed) advertised for both V.34+ and V.90 modems (over 70 percent of those in use) would require an SNR of 31-32 dB for such a speed to be achieved. That SNR is far higher than that typically found, even on lines a short distance from the serving CO. Thus, noise and the engineered bandwidth in Hz are the chief limitations on the usable capacity of an analog line in bps.
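The SNR figure quoted above can be verified from Shannon's Law. The Python sketch below (function name hypothetical) converts decibels back to a raw ratio and evaluates the capacity of the 3200 Hz channel at nearby SNR values:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon's Law: C = W log2(1 + S/N), with S/N recovered from decibels."""
    s_over_n = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + s_over_n)

# 33.6 kbps on a 3200 Hz line falls between the 31 dB and 32 dB capacities,
# confirming that an SNR of roughly 31-32 dB is required.
for db in (30, 31, 32):
    print(db, round(shannon_capacity(3200, db)))
```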
To review, the baud rate multiplied by bits/baud (the coding rate) yields the bit rate (operational maximum) of the modem or other DCE. However, high line noise and the presence of noise from quantization itself limit the achievable data rate. When a 28.8 kbps modem is advertised, it usually is a 3200 baud unit with a signaling (coding) rate of 9 bits/baud. Higher speeds are obtained by increasing the baud rate, but baud rates above 3429 are rarely achieved in practice due to quantization and/or line noise. Rural telephone lines are notoriously noisy, partly because of the age of the copper lines and partly because noise is a function of the distance to the telephone CO.
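The review in the preceding paragraph reduces to one multiplication. A minimal Python sketch (hypothetical function name):

```python
def bit_rate(baud, bits_per_baud):
    """Operational DCE speed: symbols per second times bits per symbol."""
    return baud * bits_per_baud

# The advertised 28.8 kbps modem of the text: 3200 baud at 9 bits/baud.
print(bit_rate(3200, 9))  # 28800 bps
```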
DAC reverses the steps of ADC. First, the digital bits are modulated at a certain signal speed, governed as before by the Nyquist Theorem. Then, interpolation is used to reconstitute an analog profile of the digital signal pulses. Finally, the digital pulses are decoded into analog waveforms for analog transmission.
Importantly, physical laws preventing speeds greater than 33.6 kbps on upload are less constricting on the download side if certain conditions are met. An important reason is that quantizing noise is not present in DAC. To see this, consider how modems are used to connect with the Internet, a process that is illustrated in Figure 4-5.
Moving from left to right, a modem transmission (upload) follows a path from the customer premises (1) over the access level to the local serving central office (2) of the telephone company. While most calls to modems are local, they commonly are made to ISP numbers in a different telephone exchange with a different telephone CO (3), so the transmission flows over the telephone company's transport level to reach the ISP's CO. Then, the call flows over the ISP's telephone access level to reach a modem bank at the ISP premises (4).
An upload consists of no fewer than four signal conversions, labeled one through four. A download consists of no fewer than four conversions, labeled negative one through negative four. Of all eight conversions, only three (shown in bold as 2, -2, and -4) are ADC conversions that result in noise that would affect the signal in any direction. The last ADC (-4) occurs once the signals are no longer on the wire, so it does not create quantization noise on any line.
Another example, with a 56 kbps modem, shows the directional limitations of DAC and ADC. When the 56 kbps modem calls over a special digitally equipped telephone line and hooks up with compatible equipment, Shannon's Law and the Nyquist Theorem do not vanish. However, due to the absence of quantizing noise and the removal of ADC on the opposite end, download baud rates of up to 8000 may be negotiated with a seven bit per baud coding rate. The 56 kbps modem process is illustrated in Figure 4-6.
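The negotiated download figures mentioned above multiply out to the advertised speed. A one-line Python check (variable names illustrative):

```python
baud = 8000          # download symbol rate over the digitally equipped path
bits_per_baud = 7    # robust coding rate negotiated for the downstream side
print(baud * bits_per_baud)  # 56000 bps -> the advertised "56k"
```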
Figure 4-6 differs from the V.34+ (Figure 4-5) case in one obvious way. The ISP has purchased a special digital line (a channelized T-1 or an ISDN-PRI T-1) that allows the V.90 equipment at the ISP's premises to interact directly with the telephone company's transport network. Instead of a telco POP (Point of Presence) on the ISP end, a block arrow is drawn to show that the ISP connection is physically on the transport network. In addition, only one conversion (2) is in bold since that is the only ADC conversion that creates quantizing noise that would follow it to the Internet site at which uploading would occur. The download path is free from quantizing noise on the line. Of course, other kinds of line noise would still be present, creating an important reason that 56 kbps Internet access is spotty or even completely absent in many parts of rural Florida.
By now, it should be clear that bandwidth is both a practical and a theoretical measure. Shannon's Law and the Nyquist Theorem provide upper theoretical bounds on channel capacity, while modem advertisements claim a lower theoretical number (an operational bit rate), perhaps more realistic, but still abstracted from reality. Line noise and the CPE (modem or other device) on the other side are additional variables. Because of this variability, modems and other DCE are designed to operate at many different bit rates (operational speeds). During the course of a session, modems (like other CPE) must synchronize rates with the bit rate and line conditions on the other end. Hence, during each session, a 56 kbps modem will dynamically adjust (negotiate) upload bit rates from 300 bps to 33.6 kbps (and many levels in between) and download bit rates from 56 kbps down to 300 bps.
However, the data rate of an actual transfer varies from the advertised bit rate even under low-noise conditions because of other sources of variability, including manufacturer and model. Figure 4-7 shows test results for thirty-one modems that Data Communications magazine lab-tested over a six-thousand-foot local loop under controlled conditions ["V.34 Modems: You Get What You Pay For," Data Communications magazine, 1995]. Note that the different brands of V.34-Plus (ITU standard) modems did not perform identically under the three tests, although line conditions were the same throughout. V.34 modems are designed to have upload bit rates of up to 28.8 kbps and download bit rates of up to 33.6 kbps.
The top line shows the throughput for a (one-way) text download. Due to compression, throughput exceeds the advertised bit rate, the popularly defined "bandwidth" of the telephone line. The middle line of Figure 4-7 shows the two-way average throughput based on binary compressed files. Again, in every case but one, the observed throughput exceeds 28.8 kbps, but that comparison sets an end-to-end throughput against the maximum operational rate (advertised maximum bit rate). Finally, the bottom line shows the average two-way binary file transfer throughput. Notably, while every brand was advertised as a 28.8 kbps modem (able to transmit and receive), even the top two had maximum average binary throughputs of only 27.4 kbps. When the tests were repeated over different lines (30,000-foot rural local loops), results were as much as sixty percent below those of Figure 4-7 (when a transfer could occur at all) [Data Communications, 1995].
Figure 4-8 shows that the results for the most recent 56k modem standard, V.90, indicate somewhat less variability among readings than the 1995 tests of Figure 4-7.
The one-way (download) text throughput rate (shown on the right y-axis) approached 120kbps on two of the modems shown. While no modem had a data rate of 56 kbps for downloads, the striped columns show that all were above 40kbps. On the upload side, most experienced just under or just over 30kbps data rates as shown by the solid columns.
When modems communicate over networks, the observed data rate and throughput also depend on network QOS variables. Figures 4-9 and 4-10 show how variability in telephone line noise, network congestion, and other factors prevent the advertised "bandwidth" of a modem from being observed in data rates in practice.
Instead of showing laboratory test averages, Figures 4-9 and 4-10 compare incoming data rates (experienced by the author) from the same web site (un-cached in memory) using the same modem and connection. Readings were taken minutes apart, using AnalogX NetStat Live version two software. In Figure 4-9, the average data rate was 13.5 kbps. For Figure 4-10, the average data rate was 17.9 kbps. However, in addition to the fact that in one case the same size file loaded many seconds faster, there were different accelerations in data rates as each transfer proceeded.
The results cannot be explained by any one factor. The differences may be due to an interaction among line noise, other simultaneous requests on the remote web site, the different routes taken by the individual packets of each transfer, ISP network load, and overall Internet congestion.
The computer modem examples have demonstrated several points. First, the theoretical bandwidth of a connection (in this case a telephone line), while often equated with the advertised bit rate (operational speed), differs from the throughput experienced by users and also from the measured data rate of the connection. Furthermore, differences due to manufacturer, line quality, and network conditions affect QOS metrics.
Until now, the focus has been on QOS metrics that are popularly mistaken for bandwidth. However, QOS metrics must be able to handle the full set of simple point-to-point and inter-networked connections of an agribusiness hypercommunications network (that may handle hundreds of simultaneous requests for voice, video, and data). Before more general QOS metrics (including many that are completely unrelated to bandwidth) can be explained, a reference model that is more general than the simple modem example is needed.
4.2.3 QOS Reference Model
At this stage, it is helpful to recall the three network engineering problems and the four objectives of network management from Chapter 3. Any network has combinatorial, probabilistic, and variational problems of engineering. These are sometimes loosely called possibilities, probabilities, and the positive and negative synergies between the two. The way an agribusiness and its hypercommunication vendors define and approach these three engineering problems determines how effectively communications occurs and how efficiently hypercommunications dollars are spent. Additionally, the way the four network objectives are achieved by the carrier influences the efficacy of the agribusiness's connection and the cost efficiencies of the carrier. While the modem example showed that not everything is under either party's control, a QOS reference model will help isolate who controls what.
Instead of two modems, consider a more general agribusiness network such as that in Figure 4-11. Assume that the network is a true hypercommunications network that can carry voice, video, data, fax, e-mail, and Internet traffic. Communications occurs not only among and within offices, but also with customers, employees, and suppliers both nationally and internationally. Notice that the largest node (location) in the fictional agribusiness network depicted is at the headquarters in Loxahatchee, Palm Beach County. Another office is in Homestead, Miami-Dade County. Two smaller locations are in Chuluota, Seminole County and in Southern Highlands County. Suppose that six point-to-point links connect the hypercommunications traffic among offices. Each link could use one or more services and one or more technologies to carry traffic.
The HQ-Homestead, HQ-Chuluota, and HQ-South Highlands links are bold to indicate that a greater amount of traffic flows on these three main links than through the three others. To link any two locations in Figure 4-11, an end-to-end communications pipeline must be able to handle downstream, upstream, and two-way traffic. Users may communicate through computers, telephones, fax machines, or other DTE.
This is further illustrated by summarizing the elements of a single connection between two points as in Figure 4-12. The figure shows various CPE (Customer Premises Equipment) the agribusiness owns in Loxahatchee and in Homestead. On the Loxahatchee end, workstations, storage, and mainframe devices are shown on the uppermost part of the diagram. On the Homestead end, PCs are shown on the upper end of the local network. This configuration between the two locations suggests that traffic will mainly be downstream (from HQ) data traffic as database information at HQ is accessed by Homestead users. However, data traffic between the two points could just as easily consist of an upstream flow at another time of day when, for example, sales numbers or other reports are sent to HQ. Two-way traffic could occur between telephones at each location (as shown in the middle part of the local network on either end) or through a mix of computer and telephone traffic (shown in the lower part of the local network on either end). Figure 4-12 shows how varieties of devices (connected in local networks at both locations) allow a mix of traffic to flow between the locations.
However, Figure 4-12 is both too general and too specific to serve as a general model for use throughout the Chapter. It is too specific because (in addition to the direction of transmission) it mentions numerous user devices such as telephones, faxes, data storage, etc. Figure 4-12 is too general because it appears as though the communications pipeline between Loxahatchee and Homestead can only be a point-to-point link, rather than a more broadly defined network connection.
Figure 4-13 reveals six essential parts of a hypercommunications link between users at HQ and users at any other office without differentiating among services and technologies, or specifying network switching. The six parts are: carrier network (1), carrier POP (2), POP-edge access link (3), edge device (4), inside network (5), and user device or DTE (6).
It is easiest to work from the inside out of Figure 4-13 to describe the six essential elements. Assume that two-way, upstream, and downstream traffic may be carried between the two points, though each type may be handled differently depending on the service or technology. By formulating the model this way, it is clear that bandwidth is not the only QOS characteristic an agribusiness will be interested in, because bandwidth describes the separate capacities of parts (1), (3), and (5). Each essential element also represents a possible threat to the security and reliability of communications.
The first essential element of a hypercommunications link is the carrier network (1). The carrier network, often called a backbone, is depicted as a cloud because there are many different paths and switching technologies that may be used to transport traffic through the backbone. Usually, the backbone is national or international in scope, carrying traffic for thousands of carrier customers. Bandwidth of backbone networks can range from T-3 (45 Mbps) upwards over fiber optic cable, satellite, or microwave pathways. While the bandwidth of the carrier's backbone is likely to be more than adequate for the needs of individual customers, congestion can occur if the backbone is oversold or mismanaged by the carrier. Importantly, this central network is also the path over which communications with the outside world occur, though the reference model here focuses on Homestead to Loxahatchee. Carrier networks are covered in more detail in several parts of Chapter 4, including data and voice transport (4.3.4), ATM (4.8.3), SONET (4.8.4), and Internet transport (4.9.1). In many cases, carrier networks are already converged networks that handle both voice and data together, though often through parallel structures.
The second essential part of the reference model is the carrier's local POP (Point of Presence) (2). POPs are locations where connections serving a particular area terminate so that traffic can be placed onto the backbone. For example, the POP that serves Homestead might be located in Miami and the POP for Loxahatchee might be located in West Palm Beach. The wireline (or wireless) distance from an agribusiness location to a POP can determine whether a particular service can be offered. While each POP is a potential bottleneck, the DCE at a POP typically can handle far more bandwidth than a single customer needs. However, if a POP is oversold or mismanaged by the carrier, an agribusiness may not be able to communicate. Carrier POPs are discussed in the context of 4.3 (wireline transmission) and 4.4 (wireless transmission). They appear again as the discussion turns to specific services such as CO technologies that support enhanced telecommunications (4.7.2) and Internet access (4.9.1).
The third essential element is the POP to edge device pipeline (3) that transmits messages between each local POP and its corresponding agribusiness location. This element is often called the local loop or "the last mile," even though it may be far longer than a mile. Depending on the technology or service, the last mile may be wireline or wireless and may be called a link, local loop, circuit, or path. With broadband signaling, the capacity of the connection from the business' edge device to the POP is shared with other customers (like a telephone party line), while with baseband or carrierband signaling it is dedicated to a single user. Depending on which is the case, there may be a potential for congestion. Access conduit is discussed in 4.3.1 and under infrastructure in Chapter 5. Access loops (the physical connection used to support various services) are given further coverage in discussions of the telephone infrastructure (4.3.2), dedicated circuits (4.7.3), and circuit-switched digital services (4.7.4).
Fourth, edge devices at each location (4) enable that location to connect to the pipeline. Inside a business' wiring closet, also called the MDF (Main Distribution Frame), an interface is needed to link the carrier's transmission network to the network inside the business. From this point outward, all equipment is CPE (Customer Premises Equipment) because it is physically located at the agribusiness's site and owned or leased by the agribusiness. Edge devices can be interfaces or ports as simple as a telephone jack, or as complex as a T-1 NIU (Network Interface Unit) or CSU (Customer Service Unit). Edge devices must be compatible with the outside carrier equipment and the inside CPE equipment. Edge devices are most important as specific enhanced telecommunications CPE (4.7.1) and circuits (4.7.3 and 4.7.4) are covered. Specialized edge devices are used for private data networking and Internet services as well.
Moving out from the edge device in Figure 4-13, the fifth essential element in the reference model is the inside (local) network at each location. The local network controls how communications travel from the edge device to the user's device as well as how communications between people at that location travel. The local network includes conduit (wiring and cabling), network hardware, and network software. Regardless of the carrying capacity of the external parts of the hypercommunications link, the management, traffic level, and capacity of the local network can prevent an agribusiness from using the bandwidth it pays for or achieving desired data rates. Local network hardware and software must be compatible with the edge devices and with the carrier for reliable service to be expected. A general local hypercommunications network must be distinguished from a local area computer network (LAN) described in 3.5.3 and 3.5.4. However, voice-data consolidation technologies (4.5.4) and call center technologies (4.7.2) can be used along with Internet technologies to create converged networks. Local conduit is discussed in 4.3.1 for the wireline case and in 4.4 and 4.8.5 for wireless cases.
The sixth and last essential element of the QOS reference model is DTE. DTE include telephones, computers, fax machines, and other hardware directly used by people at either end to communicate. DTE must be compatible with local network hardware and software. Each device has its own individual capacity to send and receive communications. User devices on each end must either be compatible or have intermediate CPE to allow interconnection. In some cases, a single malfunctioning user device can slow or completely obstruct communications across the link.
Figure 4-14 ties these six parts together into a general QOS reference model to be used for the rest of Chapter 4. Note the three levels of the reference model: customer premises, access, and transport. The customer premises level is entirely under the control of the agribusiness at each location.
The access level is the connection between the agribusiness and the carrier network from the edge device at the agribusiness to the network gateway at the carrier's POP. The transport level (as shown here) is a single carrier's network, but it may be several interconnecting networks owned and managed by separate carriers depending on the distance from receiver to sender.
4.2.4 Fifteen Dimensions of QOS
Now that the reference model is established, the chief QOS dimensions of interest to agribusinesses can be considered. QOS is one of many acronyms in hypercommunications, but is perhaps the most important being one of the twelve essential hypercommunication terms mentioned in Table 1-1.
At first glance, QOS seems to be a simple concept, having to do with measurable degrees of quality and customer satisfaction. However, hypercommunication service quality is affected by many issues, such as system reliability and redundancy, customer service and billing, the usefulness and availability of technical support, a number of engineering parameters, software-hardware bugs, and idiosyncratic events. QOS concerns apply to all three levels of the reference model (customer premises, access, and transport), but most SLAs cover the transport level, or the transport and access levels, only. This is because the carrier cannot be expected to control situations that are on customer premises.
Often, each area has many specific parameters that may or may not be measurable. Indeed, the method of measurement, software package, level of measurement, periodicity, and who will do the measuring are themselves important issues. However, there is not enough space here to delve into the methodology of measurement by dimension.
Table 4-3 introduces fifteen QOS dimensions of greatest importance to agribusinesses. The first six QOS dimensions (bandwidth, bit rate, data rate, throughput, delay, and jitter) have already been introduced in 4.2.1 and were covered briefly in the modem example in 4.2.2.
Sources: Maguire, 1997; Sheldon, 1998; FitzGerald and Dennis, 1999.
The seventh QOS issue, connection establishment delay, refers to the length of time it takes to establish a connection. Like many of the other dimensions, it depends heavily on the user's perception, the specific service, technology, and DCE and DTE. The simplest example is the length of time it takes for a modem to dial and establish a satisfactory connection to the Internet. It also is applicable to the length of time it takes a telephone call to complete. Some services (such as DSL, cable modem, and T-1 dedicated circuits) are "always-on" and hence, have no connection delay. The eighth QOS category, connection establishment failure probability, refers to the probability of getting a busy signal when a modem dials AOL, for example. In that case, there has been a failure to reach the Internet. For other services, the definition is similar.
The ninth dimension of QOS, network transit delay, refers to the average delay in ms that it takes for a bit to traverse the carrier's network. Unlike end-to-end delay (which is observed), network transit delay is a statistical average that does not count delay resulting from the access loop or from customer premises conduit and equipment. The carrier may not have control over those other forms of delay, but will have control over transit delay. Hence, most SLAs or carrier statistics that describe delay refer to transit delay (transport level) only. Similarly, most jitter statistics refer to transit jitter, rather than jitter based on end-to-end delay. It can be important whether the transit delay and jitter measurements are averages between two points, weighted averages across an entire network, or measured in some other way.
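The distinction between a simple two-point average and a traffic-weighted network average can be sketched in a few lines of Python (all delay figures and traffic shares below are invented for illustration):

```python
# Sketch: two ways a carrier might report average transit delay.
# The per-link delays (ms) and traffic shares below are hypothetical.

links = {
    ("Miami", "West Palm Beach"): {"delay_ms": 12.0, "traffic_share": 0.50},
    ("Miami", "Orlando"):         {"delay_ms": 25.0, "traffic_share": 0.30},
    ("Orlando", "Tampa"):         {"delay_ms": 40.0, "traffic_share": 0.20},
}

def simple_average_delay(links):
    """Unweighted mean of per-link transit delays."""
    delays = [v["delay_ms"] for v in links.values()]
    return sum(delays) / len(delays)

def weighted_average_delay(links):
    """Mean transit delay weighted by each link's share of total traffic."""
    return sum(v["delay_ms"] * v["traffic_share"] for v in links.values())

print(simple_average_delay(links))    # (12 + 25 + 40) / 3, about 25.7 ms
print(weighted_average_delay(links))  # 12*0.5 + 25*0.3 + 40*0.2 = 21.5 ms
```

Because half the traffic travels the fastest link, the weighted figure is lower; a carrier choosing which average to publish can thus present the same network rather differently.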
The tenth dimension, error rate, may be a residual measure (on average equal to zero) for some services and an efficiency measure for others. The specific transmission technology and switching method determine the importance of the error rate. In packet switching, the error rate is a background rate that may slow transfers, but switching redundancies and error checking routines are able to recover or correct errors in most cases. Even if the error rate is high across a network such as the Internet, the communication may appear to have been error-free from the user's perspective. Other errors include lost, misdirected, or duplicate e-mails.
Security, the eleventh QOS dimension, is perhaps the most difficult to measure. Security refers to whether others are able (or may be able) to intercept or copy messages or to gain access to a network. A security failure can occur at the message, user, or system level. Security failures on any level can result from actions taken within a business, from inside the carrier's network, or from outside.
A security failure on the message level refers to whether outside parties can intercept, modify, or copy messages routinely or intermittently. User level failures refer to whether an outside party can spoof or fabricate the identity of a user and use the communications system, or masquerade as a valid user and gain access that way. A third kind of security failure is a system level failure. System level failures include hacker-vandal attacks and viruses. Hacker attacks begin when outsiders spy on a system to gather confidential information in order to gain later unauthorized systemwide or administrative access to it. Vandals are hackers who maliciously damage systems. Vandals may attempt to crash the system, destroy stored information, or prevent valid users from using the system. Viruses (which do not need to be placed by hackers or vandals) are malicious or annoying software instructions that can do anything from disable a system to cause inconvenience to a single user.
Measuring security failures or violations can be difficult because they can be hidden easily and may not be discovered until after damage has been done. Hardware and software firewalls, security audits, and other methods are used to try to prevent, deter, or catch violations. It is important that a company and the carrier have a security policy that defines violations and prevents vulnerability even to a single point of failure. Vulnerability to a single point of failure means that one person or station (DCE and DTE) could compromise the entire system.
Another QOS dimension, the twelfth, is priority classification ability. Priority may be established on a per-message, per-connection, per-user, or per-service basis. The price of a particular message or service request depends on capacity limitations: how many messages (or how much total traffic) may be sent at once, where, in what form, and by whom.
The simplest example is the case of a business telephone network with line reduction. Under line reduction, the number of simultaneous incoming and outgoing calls is limited by the number of telephone line equivalents, rather than the number of employees or telephone sets. Since the purpose of a network is to share resources and lower costs it would defeat the purpose to allow each employee their own telephone line and computer, or to allow every telephone its own exclusive line. However, if all lines are busy, it may be that some users (such as the CEO) need guaranteed placement at the top of the waiting queue or the ability to place a call instantly if the network is full. Voice traffic can be prioritized in several ways.
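A minimal sketch of such a waiting queue, assuming a hypothetical numeric priority scheme (lower number means higher priority; the callers and levels are invented):

```python
import heapq

# Sketch of per-user call prioritization on a line-limited phone network.
# waiting_calls is a min-heap of (priority, arrival_order, caller).

waiting_calls = []

def request_line(priority, order, caller):
    """Queue a caller who found all lines busy."""
    heapq.heappush(waiting_calls, (priority, order, caller))

def next_call():
    """Grant the next free line to the highest-priority waiting caller;
    ties are broken by arrival order."""
    return heapq.heappop(waiting_calls)[2]

request_line(5, 0, "shipping clerk")
request_line(1, 1, "CEO")        # guaranteed top of the waiting queue
request_line(5, 2, "sales desk")

print(next_call())  # "CEO" jumps ahead of earlier, lower-priority requests
```

The arrival-order field keeps equal-priority callers first-come, first-served, which is how most PBX queuing schemes behave.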
Prioritization can be more difficult for data traffic or unified data-voice traffic. Certain services and technologies allow the carrier or customer-subscriber to establish priority traffic levels. The highest levels can be carried (for a premium price) at high speeds and superior reliability, leaving other levels to be sent later or get the electronic equivalent of a busy signal. For example, video and voice traffic may require real-time, low delay, and low latency connections that would be a waste of resources for e-mail or routine data transmissions.
Resilience, the thirteenth QOS dimension, is possibly the most difficult to predict. Resilience refers to the chance that any element of the network will spontaneously fail, causing the complete loss of one or more services. Resilience encompasses system reliability. While operational failure probability is concerned with the probability that a particular call or message will fail to achieve certain benchmark standards (such as annoying interference or repeated delay), resilience concerns a total loss of service or outage.
Outages can be complete (network wide) or localized (several connections or customers). Outages can also be random, intermittent, or singular. Singular (one time or episodic), complete outages can result from easily diagnosed causes such as accidental construction cuts of an optical fiber (causing loss of service over an entire carrier network), or they can result from complex equipment failures. As the name suggests, random events occur with no particular frequency and can result in localized or complete interruptions, usually for short periods. Intermittent events occur without apparent regular order, but are often due to complex software-hardware interactions that create hard-to-diagnose sequences of events that trigger partial failure. Intermittent events are more likely to be localized.
Closely related to resilience are environmental specifications and safeguards. Environmental specifications represent the expectations of engineers regarding the operating ranges or time-to-failure of DTE, DCE, and conduit. Environmental specifications include the length of continuous operation, temperature range, humidity range, and the resistance of specific equipment to environmental factors. Environmental standards for electronic equipment also include how well equipment can withstand lightning strikes, electrical spikes or jolts, and other hazards that can cause permanent or temporary failures of components or entire systems. Many wireless transmission technologies require the right atmospheric conditions to operate at peak performance; partial performance losses can occur due to cloud cover, and complete interruptions can result from rain, wind, and electrical or solar storms.
Environmental safeguards include systems designed to handle indoor problems such as power outages, emergency hurricane operation, and fireproofing, as well as outdoor environmental concerns. Safeguards may be needed on both the customer premises and the carrier network. Some of these include emergency backup generators, surge protection, and redundant connections in case of failure of the main system.
Redundancy refers to how well system backups and contingencies are able to restore benchmark service using secondary or tertiary networks or carriers. The Internet, for example, is a network that was originally designed to remain functional even in case of global thermonuclear war because of built-in redundancies. However, it is little consolation to an agribusiness that the Internet itself is still functioning when the ISP's modem bank has broken down, denying the agribusiness access.
The overall failure probability is the chance that at least one of the fourteen other QOS dimensions will fail to attain the standard set for it. Overall failure may be thought of as the chance that anything will go wrong during a particular period. The probability of overall failure is derived using probability theory to obtain estimates of all independent and dependent events associated with each dimension.
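A minimal sketch of this calculation, assuming (unrealistically) that the dimensions fail independently; the per-dimension probabilities below are invented for illustration:

```python
# Sketch: combining independent per-dimension failure probabilities into
# an overall failure probability. Real QOS dimensions are often correlated,
# which this independence assumption ignores.

failure_probs = {
    "connection establishment": 0.010,
    "excessive delay":          0.005,
    "security violation":       0.001,
    "outage (resilience)":      0.002,
}

def overall_failure_probability(probs):
    """P(at least one failure) = 1 - product of per-dimension success rates."""
    p_all_succeed = 1.0
    for p in probs.values():
        p_all_succeed *= (1.0 - p)
    return 1.0 - p_all_succeed

print(round(overall_failure_probability(failure_probs), 4))  # -> 0.0179
```

Note that even small per-dimension probabilities compound: four failure rates of one percent or less already imply nearly a two percent chance that something goes wrong. Dependent events require the full probability calculus rather than this simple product.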
4.2.5 QOS in Practice
Now that the fifteen dimensions have been sketched, it is important to understand more about how they apply in practice. A typical agribusiness may have multiple telephone lines, multiple locations, multiple users of the data network, and other variables that affect how QOS is conceived and measured.
Table 4-4 gives an overview of how QOS dimensions can affect agribusinesses in several ways. In the first column, each dimension is categorized in terms of the three core engineering problems and the four technical objectives of network managers from Chapter 3. The following codes are used to indicate the three core engineering problems: combinatorial (C), probabilistic (P), and variational (V). For the four network optimization objectives, the codes used are: sending rate control (SRC), conduit signal modulation rate (SMR), overall network optimization (ONO), and receiving flow control (RFC). Table 4-4 is a general but subjective guide to the relative importance of QOS dimensions.
While pricing may be based on bandwidth (a capacity constraint), users are directly affected by throughput and data rate instead. Indeed, the data rate or bit rate is a constraint on the value of a service, along with reliability in general. Throughput is especially important in the pricing of DTE. For example, computer speeds and memory capacities have a direct relationship to what users actually experience as throughput. The bit rate affects the pricing of DCE such as modems and DSU/CSUs. When modems are advertised at 56 kbps, this means only that 56 kbps is their maximum operational rate. For larger agribusinesses with many locations, the network transit delay of the carrier may be important to price and SLA negotiations.
The kinds of traffic most affected by any QOS dimension also involve many details such as message primitive, traffic mixture, and network load. Generally, interactive services such as voice or video are most sensitive to delay and jitter because they are real-time or conversational in nature. Not surprisingly, connection-establishment delay and establishment failure probabilities influence all kinds of traffic carried by services that require the establishment of a connection. Internet traffic (web pages, FTP, e-mail, Intranet, etc.) and computer network traffic are listed as most affected by security, though most firms have a higher dollar loss from misuse of long distance telephone calls.
The impact of problems in each dimension on individual users and firms is shown in the last column of Table 4-4. Some firms have been destroyed by hacker attacks on data that was not backed up, or by revelations of proprietary information and conversations to competitors or the public. Resilience is especially important to the firm, because a resilience failure means the complete inability of some or all parts of corporate communications to occur. A business that relies on telephone calls from customers can hardly afford to have calls lost.
Often, when carriers advertise ninety-nine percent or 99.9 percent reliability, they are speaking about resilience. However, with 8,760 hours in a year, a 99.9 percent reliability guarantee allows almost nine hours of total communications loss to occur before a hypercommunication carrier can be said to have violated that agreement [REA, 1992, p. 1-13]. If those nine hours come during harvest or a heavy season for an agribusiness (indeed, some failures may be most probable at high use levels), they can be devastating. Similarly, if ninety-nine percent reliability is quoted, over 87 hours without service are permitted. Except in rare cases, business losses due to a communications failure are not covered by SLAs, though part of the carrier's bill may be.
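The arithmetic behind these downtime figures can be sketched as:

```python
# Sketch: translating an availability guarantee into allowed annual downtime.

HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def allowed_downtime_hours(availability_pct):
    """Hours per year a carrier may be down without violating the guarantee."""
    return HOURS_PER_YEAR * (1.0 - availability_pct / 100.0)

print(allowed_downtime_hours(99.9))  # about 8.8 hours per year
print(allowed_downtime_hours(99.0))  # about 87.6 hours per year
```

The calculation says nothing about when the downtime falls, which is why a guarantee that looks strong on paper can still permit an outage during the busiest week of the season.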
There are other QOS dimensions besides those covered. For instance, there are a host of engineering parameters that are set to establish service benchmarks. A carrier has a tradeoff between keeping costs and prices down and ensuring customers a high probability that the system will not be so congested that it is unavailable. The conflict between lowering costs and losing customers is inherent in hypercommunications production. Other engineering benchmarks include intermediate DCE settings that enable interconnection variances below the j.n.d. (just noticeable difference) of consumers or that affect only a proportion of the customer base. Hardware-software bugs and system downtime are frequent possible threats.
Other QOS customer concerns include customer service, custom billing, and technical support. The methods by which customers report problems or get information about system outages (along with how rapidly problems are corrected) are perhaps most important. Billing concerns and the availability, efficacy, and cost of technical support are other QOS concerns. Access by customers to user-friendly technical support is a potential source of carrier revenue either through keeping customers from leaving in frustration or through billing for tech support. However, technical support is a carrier cost as well because twenty-four-hour, seven-day service is expensive to provide, especially when some customers overuse tech support.
Before concluding the section on QOS in practice, five topics merit brief coverage. First, any measure of speed depends on the acceleration allowed by various protocols based on file size and type. For example, streaming media protocols (used for voice calls, audio, and certain compressed video files) use a fraction of available capacity to transmit communication. Where an entire video or audio program is to be transferred (such as over the Internet), the protocol bit rate prevents the data rate from rising to consume all available bandwidth. Files are sent not as a unit, but as a bit-limited stream, so the download will occur within a fraction of capacity.
Other kinds of transfers experience limitations on acceleration. Limited acceleration can mean that data rates will never rise to meet capacity. Many businesses have short bursts of traffic with relatively small individual data (file) exchanges. Hence, a firm with a high total traffic volume is punished by limited acceleration if it rarely exchanges multi-megabit or gigabit data files. Limited acceleration means that if large binary data files are transferred, only then does the protocol between sending DCE and receiving DCE allow the bit rate to change to accelerate to the available speed within a channel.
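A rough sketch of how limited acceleration caps transfer times; the threshold and rates below are hypothetical, and real protocols ramp the rate gradually rather than in a single step:

```python
# Sketch of "limited acceleration": the protocol only lets the data rate
# ramp up to channel capacity for large transfers. All thresholds and
# rates here are hypothetical.

CHANNEL_KBPS = 1544      # channel capacity (T-1)
BASE_RATE_KBPS = 256     # rate allowed for small transfers
LARGE_FILE_KB = 10_000   # size above which acceleration is permitted

def transfer_seconds(file_kb):
    """Fastest transfer time in seconds under limited acceleration,
    treating 1 KB as 8 kilobits."""
    rate_kbps = CHANNEL_KBPS if file_kb >= LARGE_FILE_KB else BASE_RATE_KBPS
    return file_kb * 8 / rate_kbps

print(round(transfer_seconds(500), 1))     # small file held to 256 kbps
print(round(transfer_seconds(20_000), 1))  # large file accelerates to T-1 speed
```

Under these assumptions a firm making many small exchanges never sees the channel's full speed, which is exactly the "punishment" described above.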
Two figures demonstrate how acceleration works in practice. In the first figure (Figure 4-15), the data rate accelerates quickly and remains at 1.4 Mbps (1,400 kbps) during most of the file transfer. In Figure 4-16, the bit rate does not limit the acceleration in data rate, so the larger the file, the faster the speed becomes. Clearly, it may matter whether a particular carrier, CPE, DCE, or service supports one or the other type of acceleration. In some cases, the highest speeds are reserved for the transfer of large files only, rather than for other kinds of communication.
A second operational topic concerns the profile of total traffic through the business day. Rather than measuring the speed of a single transfer as in the last two figures, QOS metrics are used to assess whether too much or too little capacity has been purchased by looking at all traffic combined. Many circuits (such as frame relay) are priced in two ways: the channel rate and the CIR (Committed Information Rate). The channel rate is a data rate ceiling, while the CIR is a guaranteed data rate floor. To save money, an agribusiness may choose a low CIR, hoping that other customers of the carrier will not congest the circuit, so that it can obtain the higher channel rate without paying full price. However, if other customers of the carrier had needed the difference between the channel rate and the CIR, the budget-minded agribusiness would have been denied it.
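The two pricing tiers can be sketched as a simple classifier; the CIR and channel rate follow the frame relay example, but the sample traffic rates are invented:

```python
# Sketch: labeling traffic samples (kbps) against frame relay pricing tiers.

CIR_KBPS = 256       # guaranteed floor (Committed Information Rate)
CHANNEL_KBPS = 1544  # ceiling (channel rate)

def classify(rate_kbps):
    """Label a traffic sample relative to the CIR and channel rate."""
    if rate_kbps <= CIR_KBPS:
        return "within CIR (guaranteed)"
    if rate_kbps < CHANNEL_KBPS:
        return "burst above CIR (delivered only if capacity is free)"
    return "at channel rate (ceiling reached)"

for sample in [120, 300, 900, 1544, 200]:
    print(sample, classify(sample))
```

Only the first tier is contractually guaranteed; everything between the CIR and the channel rate is delivered at the carrier's discretion, which is the gamble a low-CIR subscriber takes.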
Figure 4-17 shows a capacity of 1.544 Mbps, a common limit encountered with frame relay, T-1, and ISDN-PRI circuits. The maximum, minimum, and average hourly traffic levels of a representative weekday (data rate of all users over a connection) are shown. In this case, the CIR is 256 kbps. The average maximum hourly traffic from or to the agribusiness exceeds the CIR from just after 8 a.m. until 9 p.m. The profile could be the result of a mix of voice (lumpy increments) and data traffic (bursts), but the hourly averages do not show the differences between the two traffic types very well.
The low average hourly traffic line (bottom bold line) shows that even at low usage levels, the CIR is exceeded around 11:00 a.m. and from 16:30 to about 18:30. The maximum hourly traffic rate is at capacity from around 16:00 to 17:30, while the average hourly traffic rate has a morning spike of over 400 kbps and an afternoon spike of over 600 kbps. These results suggest that a greater CIR should be obtained (unless the firm wants to risk connection failures), that automated traffic must be spaced more widely throughout the day, or both. Network planners often base the channel rate on the maximum busy hour, but the optimal CIR depends on the carrier's other customers.
However, as Figure 4-18 reveals, the average hourly rate masks more dramatic minute-to-minute averages. Figure 4-18 focuses on part of the business' peak time (busy hour) in the late afternoon, from 16:30 to just after 17:00. While the average hourly figures suggest that maximum average needs exceed the 1.544 Mbps capacity for almost two hours per day, the minute-to-minute figures suggest that this occurs for only about eight minutes. Indeed, a close examination of the minute data reveals that only eight minute-long periods have ever had averages that reached 1.544 Mbps. Some technologies and services allow extra bandwidth to be purchased above the contractual CIR and channel rates. However, twenty-four hours' advance notice may be required for such BOD (Bandwidth on Demand) offerings.
As the regular workday closes, traffic dips to low levels. However, just before 17:00 hours, traffic rises again. This spike represents the time of day when the IT department assumes that "normal" operations have decreased enough that automated, routine "end-of-day" data transfer can begin. Rather than "needing" more bandwidth, then, IT personnel schedules and automation routines may need to be spread further apart, since the connection has plenty of extra capacity in the late evening, overnight, and early morning hours. Importantly, exceeding the CIR can also introduce connection establishment delays.
A third operational topic, delay and jitter, is demonstrated in Figure 4-19. The X-axis depicts the twenty hops an Internet packet takes to travel from Fort Lauderdale, Florida (Plantation) through Atlanta and Sydney, Australia to reach the Australian Government's Antarctica site at Mawson, Antarctica.
The bars denote the independent maximum, average (mean), and minimum delays (round-trip between Plantation and each intermediate router) in ms, using the Ping Plotter ping-trace program on a sample of 1000 cases taken on 12/16/1999. The total delay statistics to Antarctica and back were 3520 ms for the maximum, 554 ms for the average, and 437 ms for the minimum. Jitter can be measured in relative or absolute terms; for Mawson, the standard deviation of delay was 330 ms, with a standard error of 14.9 ms.
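A minimal sketch of how such delay and jitter statistics are computed from a sample of round-trip times (the ping values below are invented, not the Mawson data):

```python
import math

# Sketch: summary statistics for a sample of round-trip ping times (ms).
# The sample values are hypothetical.

pings_ms = [440, 455, 520, 610, 437, 980, 470, 445, 505, 448]

n = len(pings_ms)
mean = sum(pings_ms) / n
variance = sum((p - mean) ** 2 for p in pings_ms) / (n - 1)  # sample variance
std_dev = math.sqrt(variance)      # absolute jitter measure
std_err = std_dev / math.sqrt(n)   # precision of the mean estimate

print(max(pings_ms), round(mean, 1), min(pings_ms))  # max / mean / min delay
print(round(std_dev, 1), round(std_err, 1))          # jitter and its error
```

A single slow round trip (here the 980 ms outlier) barely moves the mean but inflates the standard deviation, which is why jitter and average delay must be reported separately.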
Clearly, most agribusinesses do not require connections to Antarctica, but similar results can occur to places as near as Mexico or the Caribbean. The jitter example demonstrates four things. First, delay is sensitive to distance: the more distant a connection, the longer the delay will be. Second, delay is sensitive to the service used. Satellite connections especially (used in the hop from SFX, San Francisco, to Australia, and in the hop from Hobart, Tasmania to Antarctica) introduce delay simply because of the distance the signal travels from the ground to the satellite and back again. Third, jitter is not inherently sensitive to distance. If the agribusiness had required a connection through only the first Atlanta hop, jitter would have been greater than if the connection had terminated in Australia. This is symptomatic of congestion at that hop. Fourth, in many kinds of service (especially Internet-based services), both the carrier and the customer have no control over network routing or the number of hops. Each hop increases the chance that a communication will experience a QOS violation of some kind.
Unlike bandwidth, jitter is a constantly changing measurement that depends on the measurement program used, time of day, general network load, etc. Therefore, different readings would be observed on other days of the week, at various times of day, and with other software measuring programs.
A fourth operational topic concerns the tradeoff between regular file transfers and streaming file transfers, as illustrated by Figures 4-20 and 4-21. In Figure 4-20, three kinds of files (text e-mail, web page, and graphic) are shown by their size in KB (kilobytes) on the left axis, represented by columns. The shaded area shows the shortest time in seconds (on the right axis) that a modem (56 kbps) would take to download each file completely.
Simple text e-mails and web (HTML) page content (text and light graphics) take less time since they are smaller files than a graphic file of over 200KB. Each kind of file requires a complete file transfer to become visible at the destination, so the capacity and data rate help determine how soon communication occurs. Both the download time and the data rates vary.
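The download-time arithmetic behind Figure 4-20 can be sketched directly. The file sizes below are illustrative (the 200 KB graphic matches the text; the e-mail size is an assumption), and the calculation ignores protocol overhead and line impairments, so it gives the best-case time only.

```python
# Sketch: fastest possible download time for a complete file transfer.
# file_kb is in kilobytes; line_kbps is the line rate in kilobits per second.
def download_seconds(file_kb, line_kbps=56):
    """Best-case seconds to move file_kb kilobytes over a line_kbps link."""
    return file_kb * 8 / line_kbps  # 8 bits per byte

print(round(download_seconds(200), 1))  # 200 KB graphic: about 28.6 s at 56 kbps
print(round(download_seconds(4), 1))    # 4 KB text e-mail: about 0.6 s
```

Because the whole file must arrive before it becomes visible, halving the line rate doubles these times, which is the capacity-versus-waiting tradeoff the figure illustrates.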
For real-time or mixed traffic such as streaming media and telephone calls, there is another way to view the situation: allowing the data rate and time to remain fixed. Figure 4-21 shows the capacity needed to transmit a one-minute voice call (using two compression schemes), a one-minute streaming audio .WAV file, a one-minute .RM (streaming) compressed video, and a one-minute .MPG (non-streaming) video. Here the tradeoff is among competing compression protocols, each of which requires a fixed data rate (on the right axis) over an entire minute to transmit the file size (shown in KB on the left axis).
In one minute, a 577 kbps connection could carry one MPG video, twenty-seven RM video files (21.33 kbps each), six WAV voicemails (86.6 kbps each), nine PCM (G.711) telephone conversations (64 kbps each), or thirty-six ADPCM16 telephone conversations (16 kbps each). These different uses of the same bandwidth are at the center of converged network allocation. Obviously, the video quality in frames per second of the MPG is likely to surpass that of the RealMedia RM protocol, just as PCM G.711 telephone quality will typically surpass that of the ADPCM compressed call.
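The allocation counts above are each the link capacity divided by the per-stream rate, rounded down. A minimal sketch reproducing that arithmetic:

```python
# How many fixed-rate streams of each protocol fit in a 577 kbps connection.
LINK_KBPS = 577

rates_kbps = {
    "MPG video": 577,
    "RM video": 21.33,
    "WAV voicemail": 86.6,
    "PCM (G.711) call": 64,
    "ADPCM16 call": 16,
}

# Floor division: a partial stream does not fit.
streams = {name: int(LINK_KBPS // rate) for name, rate in rates_kbps.items()}
print(streams)
# {'MPG video': 1, 'RM video': 27, 'WAV voicemail': 6,
#  'PCM (G.711) call': 9, 'ADPCM16 call': 36}
```

The same division underlies converged network allocation generally: a carrier sells the one pipe, and compression choice determines how many conversations it holds.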
Recall that throughput refers to the ability of CPE and DTE at both ends to transmit and receive compressed files. The rate referred to above would be an end-to-end throughput rate because coding shortcuts compress messages into fewer bits, enabling faster transport. The savings in speed can be dramatic, but they are due to software rather than to any increase in the data rate of the communications pipeline. The difference is apparent when the two video files (MPG and RM) are compared in Figure 4-21. The RM file format was developed by RealNetworks, so it requires a particular software package to be installed on each end. The RM video will have fewer frames of video per second than the MPG. However, the RM video stream offers equivalent (or superior) sound and will be seen almost immediately by the user, who would otherwise have to wait for transmission of the entire MPG file to see it.
The last operational point to cover is symmetry. Figure 4-22 reveals how dramatic the differences among wireline and wireless hypercommunication technologies can be in terms of advertised upload and download speeds.
The dashed horizontal grid lines on the y-axis of Figure 4-22 are units of 1.544 Mbps (T-1 speeds). From left to right, the technologies delivered through copper conduit include: DSL Lite (1.544 Mbps download, 512 kbps upload), ADSL (10 Mbps, 768 kbps), and ISDN-PRI and T-1 (1.544 Mbps, symmetric). Hybrid fiber-coax conduit can deliver VDSL (version 1, 13 Mbps, 2 Mbps) and cable modems (3 Mbps, 400 kbps). The technologies delivered by fiber optic are T-3 (44.736 Mbps, symmetric) and OC-1 (51.840 Mbps, symmetric), though higher OC (SONET) speeds are available. The remaining technologies shown are wireless: mobile GSM (symmetric, 270.8 kbps), MMDS/MDS (1.544 Mbps, 768 kbps), 2.4 GHz (1.544 Mbps, symmetric), DirecPC satellite (2 Mbps, 33.4 kbps), other satellite (400 kbps, 33.4 kbps), and LMDS upperband (27 Mbps, 3.5 Mbps).
Figure 4-22 reveals several patterns that will be seen again as Chapter 4 unfolds. First, fiber optic conduit surpasses wireless and other wireline conduit in overall speed and symmetry. Second, many of the faster technologies (wireless and wireline) are asymmetric, offering faster download than upload speeds. Third, none of these numbers can be taken as indications of actual data rates that will be achieved by agribusinesses in practice. Fourth, with the exception of fiber optic conduit, both wireline and wireless technologies exhibit larger asymmetries, especially as speed increases. Finally, within each technology, dramatic variations from the speed estimates shown can occur because of scant real-world data and differences between carrier sales claims and actual experience [FCC 99-5, 1999].
To conclude, some argue that the trend towards uniform digitization means that bandwidth is becoming a commodity, based on the bit as a unit. However, such a view ignores six points important to agribusiness. First, not all bits are equal, because information does not equal data. Second, the transmission of bits (whether data or information) does not ensure communication. Third, no business advantage can be obtained if a particular location has insufficient digital infrastructure capacity, so that needed hypercommunication services are not available. Fourth, the transmission of a message from a sender to a receiver depends on message primitive, message type, and other factors discussed in Chapter 3.
Fifth, a real-time video cattle auction has different capacity and speed requirements from a text e-mail message, a web page, or a telephone conversation, even if each is digital. It is often claimed that a business merely needs to decide how much "bandwidth" it "needs," and its hypercommunication requirements will then be taken care of. Capacity (along with speed, reliability, latency, and how well the hypercommunication carrier manages the network) is important. However, when bandwidth is used (as it so often is) as a synonym for speed or as a commodity a firm buys in the market, other more specific QOS dimensions have to be controlled for. It can be a costly mistake to compare prices, services, technologies, and carriers based on bandwidth alone.
Finally, understanding a set of QOS metrics rather than bandwidth alone can save agribusinesses from unexpected problems and extra costs. The kinds of QOS metrics that a particular agribusiness would agree to in an SLA with a carrier for access and transport differ from the QOS dimensions it would use for its own LAN.
The next three sections cover hypercommunication technologies applicable over all kinds of services. Wireline transmission technologies (4.3) carry hypercommunication signals over several kinds of wire conduit in their journey from sender to receiver. Conduit types include copper wires, cables, and fiber optics. A number of specific enabling technologies (such as DSL, ISDN, and cable broadband) support wireline transmissions. Wireless transmission technologies (4.4) carry hypercommunication signals over airwaves. Wireless transmissions use RF (radio frequencies), microwaves, infrared, and other methods as conduit. Specific enabling technologies support a range of services over wireless as well. Then, section 4.5 covers support services, and facilitation and consolidation technologies.
4.3 Wireline Transmission Technologies
Hypercommunication signals can be transmitted from sender to receiver in three ways. Technologies that enable communication signals to travel over wire, cable, or fiber are known as wireline technologies. Wireless technologies exclusively use unguided media such as radio waves. The third way to transmit signals is over a hybrid system, a combination of wireline and wireless technologies. For example, a long distance telephone call may use microwave transmission over the carrier transport network but rely on wireline technologies at the access level on both ends.
The discussion in this section assumes that the communications pipeline is entirely wireline from end to end. During the wireless and facilitating technologies sections (4.4 and 4.5), wireless and hybrid wireline-wireless networks are considered. It is important to realize that most of the hypercommunication services covered in sections 4.6 through 4.9 can be provided through either wireline or wireless technologies.
The discussion in this section covers four topics. Sub-section 4.3.1 discusses guided media or conduit, the wire and cable that carry hypercommunication signals from sender to receiver. The next section (4.3.2) covers the physical plant of the (ILEC) local telco, which already supports an array of services beyond POTS. Then in 4.3.3, the cable TV (cableco) infrastructure and so-called dark fiber plants of electric and other utilities are covered. Large or well-located agribusinesses may be able to connect directly to the backbone and transport networks (4.3.4), bypassing telco and cableco access levels entirely.
The physical plant or infrastructure includes conduit, junction boxes, manholes, MDFs, and various kinds of DCE including routers and switches. In a converging market, wireline distribution technologies can carry services beyond those offered by traditional monopolies associated with each infrastructure. For example, an urban or suburban agribusiness could easily use the telco "plant" to carry its data and Internet traffic, the cableco plant to carry telephone, or even the electric utility's infrastructure to carry data traffic. Convergence means that every infrastructure will be able to carry every kind of traffic. In all cases, the agribusiness needs to buy or lease DCE, DTE, and conduit.
The skeleton of any wireline communications network is conduit. There are three chief kinds of conduit in common use: two-wire (and four-wire) twisted pair copper loops, coaxial cable, and fiber optic cable. Within each type, there is considerable variability in the speed, capacity, and distance supported by various sub-categories. While many QOS metrics are constrained by physical properties of conduit, new technologies are constantly being introduced that can squeeze faster speeds from existing conduit. Additionally, compression technologies and new conduit categories can extend distance, throughput, bit rates, and data rates.
Figure 4-23 repeats the QOS reference model, showing again the three parts of the communication path between two points to reveal the need for specialized wireline technologies and conduit.
Conduit is the only topic here in section 4.3 that bridges all three levels shown in Figure 4-23. First, the customer premises side of the demarcation point has conduit for the inside or local network at the agribusiness. Second, there is conduit in the access level between the agribusiness and the carrier's POP. Third, there is conduit within the transport level (carrier network).
Since specialized conduit exists at each level, it is easy to confuse the copper twisted pair cable used in the telephone access network with the copper twisted pair used as customer premises conduit for computer LANs. Furthermore, no level uses a single kind of conduit alone. For example, the transport level may traverse thousands of miles and several carrier networks in the case of long distance telephony. When a carrier other than the ILEC provides a communications path at the transport level, there is a handoff point from the ILEC (which provides the access loop) to the other carrier (such as an ALEC, ISP, or IXC). The handoff point may be at the local POP or inside the ILEC's transport network. Hence, the transport level can consist of more than one carrier's transport network. The Internet backbone, for example, is a network of transport networks.
The first way wireline transmission is accomplished is by insulated copper conduit, either two-wire twisted pair or four-wire twisted pair (Quad). Both are used in the access layer to traverse the last mile between a subscriber's premises and the location where the carrier's network begins, the POP in Figure 4-23. Many wireline services in addition to POTS can use the copper wire plant built by the ILEC to connect to a local wire center, CO, or POP owned by the ILEC.
Competitors of ILECs such as (ALECs) use the ILEC's local loop to transmit communications to their own POP switch or to a POP co-located at the ILEC's CO. Long-distance carriers (IXCs) also use the ILEC's local loop so customers can access IXC networks. Facilities-based ALECs and IXCs use their own transport network for PSTN traffic. Non-facilities based ALECs and IXCs lease transport networks from other firms. For non-telephony services such as Internet and private data networking, the ILEC's local copper loop connects to a POP owned by an ISP (or other carrier) or to a server co-located at the ILEC's CO. Internet and data traffic normally travel over a transport network that is separate from the PSTN.
Twisted pair may carry analog or digital signals. The bandwidth (capacity) of twisted pair depends on length (distance traveled), category, and wire gauge. The gauge of the wire refers to its thickness, with 24-26 gauge being typical for telephone. Ordinary telephone twisted pair can achieve a maximum data rate of four Mbps for a distance of up to ten kilometers (6.2 miles). Category 5 or 6 UTP (Unshielded Twisted Pair) copper used for LANs on customer premises can achieve much higher data rates, but over considerably shorter distances. Wiring standards are defined for the six categories of twisted pair in Table 4-5.
Sources: Tower, 1999; Sheldon, 1998.
In reading Table 4-5, note that each category supports the features of the category listed above it. Over sixty percent of U.S. buildings have Category 5 as their local conduit. Category 2 is typically found at the access level only, while categories 3 and above are strictly for use on customer premises.
Two-wire and four-wire twisted pair (quad) may differ in several additional ways. Quad telephony wiring is often thought of as the extra wire pair that accompanies the regular telephone two wire pair going into homes so that an additional line may be added. Quad twisted wire is also used for long analog loop distances due to its easy use with amplifiers, on customer premises, or in the digital access loops to support high-speed services such as HDSL (High Data rate DSL) [Paradyne, 1999, p. 23; Schlegel, 1999]. Copper wire is used for both baseband and carrierband transmission technologies on customer premises and in the access level.
Coaxial cable is another kind of wireline conduit. A hollow outer cylindrical conductor of solid copper encloses a single inner wire conductor. The cable can be from one-half to one inch in diameter. Coaxial cable (coax, for short) has a bandwidth of up to one GHz that can theoretically support data rates of up to 400-500 Mbps over longer distances than twisted pair.
At one time, coax (able to support from 600 to 10,800 voice circuits per cable) was regularly used in the telephone transport level. Now, while coax is still used in the telephone transport network leading out of rural areas, it is most likely to be found on customer premises. Coax is used intensively at the access level by cable TV carriers to carry video, voice, and data using baseband and broadband transmission technologies, and it can transmit either digital or analog signals. Coaxial cable is less likely to suffer cross talk or interference than twisted pair, and it can be used at higher frequencies and data rates. For example, the L5 CCITT coax specification calls for an analog cable (with an operating frequency of 3.12 to 60.6 MHz) capable of carrying 10,800 voice channels. By comparison, the T-4 digital designation can carry 4,032 voice channels at a data rate of 274.176 Mbps [Maguire, 1997, module 5, p. 27].
To transmit digital signals effectively over longer distances with coax, repeaters are needed. A data rate of up to 800 Mbps can be achieved with repeaters spaced every five thousand feet. The four main types of coaxial cable are RG-58 A/U (Thinnet, stranded wire core), RG-58/U (Thinnet, solid wire core), RG-59 (cable TV and broadband), and RG-62 (ARCnet). CATV cable can be further categorized by format: subsplit, midsplit, highsplit, or dual. Each of these four formats is 75 Ohm coax with 75 Ohm terminators. Bandwidth ranges from subsplit's 5-30 MHz inbound and 54-500 MHz outbound up to the more recently deployed dual format's 40-400 MHz inbound and 40-400 MHz outbound [Maguire, 1997, module 6].
Cable types cannot be mixed on a network without intermediate devices. Coaxial cable is an unbalanced transmission line: the signal travels on the inner conductor while the outer shield is grounded, unlike balanced twisted pair, whose two conductors have equal resistance per unit length and equal capacitance and inductance to ground. Additional advantages of coax are that it is flexible and easy to install and provides better resistance to electronic interference than twisted pair. Disadvantages include the cost (more expensive than UTP), the difficulty of changing configurations, and the requirement for repeaters to cover longer distances. Thinnet is generally not suitable for use between buildings. Both coax and copper twisted pair can be tapped, presenting something of a security risk [Tower, 1999].
Fiber optic cable offers very high bandwidth, measured in THz (trillions of Hertz), so that the entire usable RF spectrum can be carried on a single strand. Fiber optic cable is predominantly used at the transport level, where efficient bulk transmission is a necessity, though hybrid fiber-coax (HFC) now appears in the access level in some cases. The maximum length of a fiber optic segment can range from around twenty miles to several hundred miles before a repeater is needed.
Fiber optic cable and the necessary transmitters and receivers are costly. Six-fiber cables cost (in materials alone) at least $5,800 for two miles and $29,000 for ten miles, while 48-fiber cables cost $16,360 for two miles and $81,840 for ten miles. The materials cost of fiber optic cable is small compared to installation, labor, and the price of the fiber optic transmitters and receivers needed to establish communications [IEC, 1999, p. 6]. Multimode fiber used for data communications costs $0.70 per foot, while single mode fiber (used more in video) costs from $2.80 per foot and up [Tower, 1999].
Optical fiber is a thin, typically flexible conduit (also called waveguide) that conducts light rays using silicon glass or plastic fibers through its core. These fibers are clad with special glass or plastic coatings (cladding) with a layered plastic jacket designed to prevent moisture, crushing, or other hazards from breaking the internal fibers. Strands pass signals in only one direction so separate strands are needed for sending and receiving. Messages are converted into optical signals (photons), which are transmitted (flashed) down the optical fiber by a laser or light-emitting diode. A photo-detector on the other end receives the optical signals whereupon they are converted back into standard digital signals. Table 4-6 shows the three general types of optical fiber.
Sources: CORD, 1974, 2000; Tower, 1999.
Some kinds of multimode fiber (such as step index) are used for shorter, local installations, while graded index fiber is used for longer distances. Multimode fiber has reflective walls that move light pulses through the fiber to the receiver, allowing the refraction of multiple light paths through the fiber center core. In addition to the 100/200 core-cladding sizes, multi-mode is available in 62.5/125 (EIA/TIA standard), 100/140, 85/125, and 50/125. The 50/125 multimode is a long distance graded index fiber with the greatest carrying capacity. However, it requires extreme precision during splicing.
NA (Numerical Aperture) is a measure of the light gathering power of fiber. NA refers to an imaginary cone that defines the acceptance region of the core or the ability to accept and carry rays of light. With single mode fiber, light is guided down the center of a narrow core much more precisely (over a single mode or path), with a tighter NA than multimode. Single mode fiber needs precision when connecting and splicing it to the laser source, but the power requirement is lower. Single mode is far more costly than multimode fiber, but is able to support next generation technologies such as SONET OC-12 (622 Mbps) data rates because of the absence of multimodal dispersion.
Signals on fiber-optic cable experience considerably less attenuation and are immune to capacitance effects and electrical interference. For these reasons, fiber greatly surpasses the transmission distances of both twisted pair and coax. In addition, fiber-optic cable is more secure than copper cable since any external tap is easily detected by a reduction in signal strength. The final column in Table 4-6 is "the distance-bandwidth product, expressed in MHz-km. This gives the product of the maximum data-transmission rate and the maximum distance between repeaters. It is a common figure of merit used to characterize system performance" [CORD, 2000, Module 8].
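Because the distance-bandwidth product is roughly constant for a given fiber, it can be used to estimate usable bandwidth over a given run. The sketch below assumes an illustrative 500 MHz-km rating (not a value taken from Table 4-6).

```python
# Sketch of the distance-bandwidth figure of merit: for a fiber rated at
# product_mhz_km, the usable bandwidth over a run of distance_km kilometers
# is roughly product / distance.
def usable_bandwidth_mhz(product_mhz_km, distance_km):
    """Approximate usable bandwidth (MHz) over a run of the given length."""
    return product_mhz_km / distance_km

print(usable_bandwidth_mhz(500, 2))   # 250.0 MHz over a 2 km run
print(usable_bandwidth_mhz(500, 10))  # 50.0 MHz over a 10 km run
```

The inverse relationship explains why a fiber adequate for a campus backbone may need repeaters, or a higher-grade fiber, on a long rural route.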
Advantages of fiber optic cable include transmission rates of 100 Mbps and up over strands hundreds of miles long, better security than coax or twisted pair, and freedom from electrical interference. Disadvantages of fiber include the high cost, the difficulty of working with it, and the need for specialized tools and training. Fiber can be hard to work with because it is inflexible and bulky. Fiber is often used in backbones between buildings and in Token Ring networks. Telcos have deployed it widely to replace coax long-distance trunk lines in the transport level. Cablecos that offer digital TV, high-speed Internet, and telephony use fiber connections to individual homes and businesses.
DWDM (Dense Wave Division Multiplexing) has exponentially expanded the capacity of existing fiber optic networks. Under DWDM, multiple signals are multiplexed onto separate wavelengths of light carried by a single fiber strand. According to the communications research firm TeleGeography, all 81.8 billion minutes (over 56 million days' worth) of traffic carried on the world's telephone network during 1997 could be sent over a single fiber of the latest-generation fiber optic systems in just eleven days. DWDM can carry up to 25 Tbps (tera, trillion bps).
Guided media such as copper wire, coaxial cable, and fiber optics have five chief physical properties that can affect the capacity and quality of hypercommunications, especially in rural areas where wire distances are longer. These properties have been alluded to in context, but it may help to describe them further. In every case, fiber optic cable is superior to coax, which is superior to twisted pair.
The first physical property of wire is attenuation, a loss of signal strength or amplitude that occurs over distance as the signal passes down conduit. According to Sheldon, "attenuation is measured in dB (decibels) of signal loss. For every 3dB of signal loss, a signal loses 50 percent of its remaining strength" [Sheldon, 1998, p. 995]. At higher frequencies, skin effect, inductance, and capacitance cause attenuation to increase [RW Cabling Sciences, 1998]. Repeaters (digital), amplifiers (analog), and other devices are used to extend the distance limits caused by attenuation.
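The decibel scale behind Sheldon's rule of thumb converts directly to a fraction of signal power remaining. A minimal sketch:

```python
# Converting attenuation in dB to the fraction of signal power remaining,
# consistent with the "3 dB = half the remaining strength" rule quoted above.
def fraction_remaining(loss_db):
    """Fraction of original signal power left after loss_db of attenuation."""
    return 10 ** (-loss_db / 10)

print(round(fraction_remaining(3), 3))  # about 0.501, the 3 dB rule
print(round(fraction_remaining(6), 3))  # about 0.251, half of the remaining half
```

Since attenuation in dB grows linearly with distance for a given conduit, the remaining signal power falls off exponentially, which is why repeater spacing is such a critical design parameter.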
The second physical property of guided media is capacitance, a measure of stored electrical charge in a cable or wire. Stored charges can distort transmissions by changing the shape of the signal. Wires that are too thick or too closely bundled are especially likely to contribute to capacitance. At higher frequencies, capacitance becomes a greater problem, leading to increased attenuation.
The third physical property of guided media is impedance, which gives rise to delay distortion. The impedance value of wire or cable is measured in ohms. Impedance is a combination of the inductance, electrical resistance, and capacitance of a transmission line, and is a measure of the opposition offered by the wire to the flow of the carrier wave. Delay distortion arises because propagation velocity is a function of frequency, so that part of the signal arrives at the receiver at one time and part at another. Shortening the transmission path or lowering the frequency can help reduce the distortion, which can be a major problem for digital systems (inter-symbol interference). Abrupt changes in impedance (called discontinuities) also distort signals, causing network problems [RW Cable Science, 1998].
The fourth physical property of guided media is noise, an unwanted signal that interferes with, changes, or blocks the transmission and reception of a desired signal. Several kinds of noise can occur, both predictable and unpredictable. Thermal noise occurs across all frequencies. Intermodulation noise happens when frequencies and multiples of their harmonics interfere with the original signal frequency. Impulse (or background) noises include random noises, often of large strength, such as lightning, motors, and elevators. Impulse noise is also generated by adjacent lines and transmitters. Ambient noise can be caused by motors, electronic equipment, nearby radio equipment, electric lines, and fluorescent lights.
The fifth physical property of guided media is inductance or cross talk. Cross talk is a special form of interference (noise) that results from an electromagnetic coupling of one wire with another [REA, 1751-H, 403, p. 9-4]. Increasing the number of twists per unit length of wire is one way copper limits cross talk. Cross talk may occur at the near end (NEXT, Near End Cross Talk) or at the far end (FEXT, Far End Cross Talk). Shielded twisted pair (STP) is a special kind of copper wire (with aluminum wraps or a copper braided jacket around insulated conductors) used especially in business settings to reduce inductance and cross talk. It can support transmission over greater distances than unshielded twisted pair can.
Table 4-7 summarizes the general upper theoretical limits of the most frequently used forms of wireline conduit. However, because there are many variants within each of the three main types, the table is only a rough guide.
Fiber optic cable is the overwhelming favorite for the transport level and is increasingly used in the access level. Copper twisted pair is still the most frequently used conduit at the access level because of the high cost of replacing the enormous amount that is already present in the telephone infrastructure. Coaxial cable predominates at the customer premise level for data connections, with copper still occupying the leading position for telephone connections. Hybrid fiber-coax and fiber-copper access conduit combinations are becoming increasingly common as telephone companies and cable TV companies compete to modernize their infrastructures so as to offer voice, data, video, Internet, and other high-speed services to residences and small businesses.
4.3.2 The Telephone Infrastructure
See also Technical Characteristics of the PSTN (3.3), Rural Wireline Infrastructure History (5.3.1), Hypercommunication Boundaries (6.1).
The telephone plant has evolved from an analog, all-copper conduit system designed to carry voice calls alone to a mixture of digital and analog circuits that carries voice and data traffic over a mix of copper wire and fiber optic cable. In the telephone plant, copper conduit is usually found on customer premises and over the access level (especially the local loop). According to Paradyne Corporation, over 95 percent of local access loops are two-wire twisted pair supporting POTS [Paradyne, 1999, p. 5]. Multi-strand cables of up to 500 twisted-wire pairs may be used over the last hop of the access level. The transport level relies almost exclusively on fiber optic backbones and microwave or satellite paths.
AT&T and other telephone company research programs have been behind the invention (and in many cases, the development) of technologies that have made high-speed wireline hypercommunications possible. Digitization reduced bulk transport costs and improved call processing, even though digital voice requires more bandwidth than analog voice does. A rule of thumb that prevailed for years in the transport network was that a voice signal occupying 3.1 kHz on an analog line required, once sampled at 8 kHz under PCM, a 56-64 kbps digital channel over the same copper conduit. New speech codecs are available to reduce the bandwidth requirements, but there is a tradeoff in quality.
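The 64 kbps figure follows directly from the PCM arithmetic: sampling at 8 kHz (twice the roughly 4 kHz voice channel, per the Nyquist criterion) with 8 bits per sample. A minimal sketch:

```python
# PCM arithmetic behind the 64 kbps digital voice channel: an 8 kHz sample
# rate (twice the ~4 kHz voice channel) at 8 bits per sample.
SAMPLE_RATE_HZ = 8000   # Nyquist rate for a ~4 kHz voice channel
BITS_PER_SAMPLE = 8

pcm_kbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1000
print(pcm_kbps)  # 64.0
```

Lower-rate codecs such as ADPCM reach 16-32 kbps by encoding differences between samples rather than each sample in full, which is the quality tradeoff noted above.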
While the evolution from analog to digital and from copper to fiber has changed the telephone transport network, the local loop (the access network) typically relies on copper and has remained analog. The copper access network was designed to handle voice telephone traffic in a narrow bandwidth of approximately 3 kHz (see Figure 4-5). As was discussed in the V.90 modem example, under the best conditions (low line noise and a subscriber-CO distance of less than 30,000 feet), the highest possible data rates would be 33.6 kbps upstream and 56 kbps downstream. In rural areas (and indeed in many urban and suburban ones), line conditions may not come anywhere near these capacities.
However, copper wire itself is not responsible for the lack of capacity. Copper wires can be used to extend high speed hypercommunication services thanks to several technologies such as ISDN, DSL, and DS1 (T-1). To understand how these technologies work, it is necessary to examine how the telephone transport and access networks have evolved.
Traditionally, the local exchange was the foundation of a five-level hierarchy of exchanges built when AT&T was the national telephone monopoly. While large areas of Florida were served by independent telephone companies before the AT&T breakup, the AT&T exchange hierarchy was followed (even by the independents) as the PSTN evolved from analog to digital and from copper to fiber. Table 4-8 shows the exchange classes in the hierarchy.
The first stage in the dual evolution from analog to digital and copper to fiber came at the Class 1 level (farthest from subscribers), when traffic between regional centers was digitized and sent over high-speed microwave or satellite links. Then, traffic from sectional to regional centers became digitized and sent over microwave links. By the early 1980's, primary to sectional center traffic was also digitized and sent over microwave links, a technological change that allowed firms such as MCI and Sprint to begin limited competition with AT&T for long-distance customers. Regional centers began to be equipped with fiber optics in the mid-to-late 1980's so they could connect with other regional and sectional centers at record speed [Oslin, 1992]. Coaxial cable, specially conditioned copper loops and, finally, fiber optic cables were used to connect class four and class five exchange offices.
After the breakup of AT&T, developments in fiber optic and digital transmission technologies allowed telephone companies to operate digital transport networks. However, in many rural areas, copper or coax transport conduit was still used, even as bulk traffic became digital rather than remaining analog. This was because the fixed cost of upgrading copper transport to fiber could not be offset by revenues in low-density areas. In sparsely settled areas of Florida, per-subscriber revenues are lower than in urban areas, although costs of service are higher.
However, in suburban areas and on the urban fringe, it became apparent that not all of the local loop would have to be converted from copper to fiber in order to support certain telco carrier services. One option that has been widely deployed by BellSouth in Florida is the DLC (Digital Loop Carrier), which multiplexes many circuits onto a few copper pairs or a fiber optic strand. For example, the Northern Telecom Remote Switching Center-S can consolidate the traffic of over 10,000 telephone lines onto 16 T-1 trunks (48 wires) [Nortel, 1998, p. 69]. This allows many traditional CO services to be delivered by telephone companies located up to 150 miles from the subscriber.
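The concentration implied by the Nortel example can be checked with simple arithmetic. The following is a back-of-the-envelope sketch in Python; the line and trunk counts come from the cited source, and the concentration logic assumes (as DLCs do) that only a fraction of subscribers are active at once:

```python
# DLC concentration sketch: many subscriber lines share far fewer trunk channels.
subscriber_lines = 10_000   # lines served by the remote switch (Nortel example)
t1_trunks = 16              # T-1 trunks back to the host CO
channels_per_t1 = 24        # 64 kbps DS-0 channels per T-1

trunk_channels = t1_trunks * channels_per_t1
concentration = subscriber_lines / trunk_channels

print(trunk_channels)                    # 384 simultaneous calls supported
print(f"{concentration:.0f}:1")          # 26:1 concentration ratio
```

A 26:1 ratio is workable only because residential lines are idle most of the time; if every subscriber went off-hook at once, all but 384 would get no dial tone.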
By using RAVs (Remote Access Vehicles) and DLCs (Digital Loop Carriers), enhanced telephony services are more widely available in rural Florida. RAVs are intermediate access-level nodes where copper runs to the subscriber and signals are aggregated and back-hauled via fiber optic or enhanced copper from the RAV to the carrier POP. RAVs are especially important in rural areas where the distance from the subscriber to the carrier facility exceeds 18,000 feet [Nortel, Telephony 101, 1998, p. 68].
Hence, as Figure 4-24 reveals, even though most ILEC COs cover a small geographic fraction of the state's total area, ILECs or competing ALECs could offer most traditional central office telephone services virtually anywhere in the state. However, the same is not true for most kinds of high-speed access provided through the telephone plant using the resulting copper-fiber mix.
In the figure, dots represent an 18,000-foot radius surrounding the location of all COs, wire centers, and RAVs in Florida as of July 1999. New digital loop technologies allow COs to be served from facilities located hundreds of miles away, meaning that competing traditional telephone service providers need not build costly plant all across the state.
However, RAVs (also called DLCs or Digital Loop Carriers) and new switch technologies do not change the basic physical laws governing copper wire with regard to enhanced high-speed services (such as data and Internet). The extension of high-speed services requires that subscribers be within 18,000 to 35,000 feet of the serving central office or RAV. Many telephone subscribers are outside the 18,000-foot limit. While many RAVs were deployed to enhance traditional telephone service, only a fraction can be modified to bring high-speed services closer to subscribers. Figure 4-25 focuses on how RAVs represent a recent evolutionary stage in telephone network access.
In Figure 4-25, thicker lines represent trunks and feeder fiber optic cables with multi-gigabit capacities among the central offices. The figure compares CO-1 (non-RAV) at the top with CO-2 (with RAV) shown on the bottom. Features of comparison are numbered. In the CO-1 case, the office building (1) is able to have high-speed T-1 services, as is the farm labeled (2). However, farms labeled (3) are too distant to receive high-speed service. The residences near CO-1 cannot get high-speed services because of limitations in the ratio of high-speed to narrowband copper.
CO-2 has an RAV (and DLC) able to serve more distant areas with more than basic telephone service. With the RAV, farm (2) can obtain faster high-speed service since it is now closer to the fiber connection to the CO. Furthermore, farms (3) are now able to access high-speed service. Indeed, as the area grows, new business parks, farms, and homes can be served by high-speed technologies. This is only true, however, if the RAV-DLC equipment is compatible with high-speed services. Over half of the RAV-DLC equipment deployed by BellSouth in Florida cannot support services such as DSL.
Table 4-9 shows some of the technologies that use the telephone plant to deliver hypercommunication services in Florida. These technologies are described in context in the enhanced telecommunications, private data networking, and Internet sections (4.7-4.9).
Sources: George (1998), Schlegel (1999), Paradyne (1999), FitzGerald and Dennis (1999), REA "Digital Transmission Fundamentals", BellSouth (2000).
The first two rows describe modem technologies, mostly used to allow analog copper lines to transmit Internet and data. It is also possible to use "channel bonding" to combine modems on separate telephone lines (at both ends of a connection) into a single channel to obtain higher data rates. However, analog telephone lines were designed only to accommodate voice conversations over bandwidths that rarely exceed 3.1 kHz.
If bandpass filters and other intermediate equipment (bridged taps, amplifiers, and loading coils) are removed from the lines (a process known as line conditioning), copper is capable of carrying data rates far in excess of 56 kbps modems. The usable bandwidth can rise above 1 MHz, even at distances of two to four miles. However, the availability of this greater bandwidth depends on how close subscribers are to the CO (or suitable RAV) because attenuation and noise increase with distance. Additionally, higher frequencies are inherently more sensitive to noise and interference.
Some of the access technologies shown require individual line conditioning, so only the subset of subscribers along a given path to the CO whose lines have been conditioned will ever be able to obtain high-speed connections. Other copper access technologies require that every line along the entire path to the CO be conditioned, making high-speed services available to most subscribers once the path is upgraded.
The first digital technology in Table 4-9, ISDN (Integrated Services Digital Network), is both a service and a technology. ISDN was conceived as a circuit-switched standard that would overcome problems in the PSTN by creating one way to handle voice, data, and video traffic. There are two kinds of ISDN: ISDN-BRI (Basic Rate Interface) and ISDN-PRI (Primary Rate Interface). ISDN-BRI uses two copper wires, two 64 kbps B channels (which carry communication), and one 16 kbps D channel (used for signaling or overhead). Both kinds of ISDN operate over the PSTN, so both data and voice traffic may pass through the telephone CO, switches, and transport network.
In the late 1980's, ISDN-BRI was expected to replace modems as the copper access technology of choice. This expectation has not come true, possibly because ISDN connection pricing was metered (per minute). Since then, while ISDN-BRI rates have fallen in many markets, superior copper loop technologies (such as DSL) have become available. ISDN changes asynchronous computer communication into synchronous data streams, so computers can use standard COM ports along with a V.120 terminal adapter to communicate. Terminal adapters can tell the difference between data traffic and telephone calls, so one B channel of the BRI connection can be used for data or Internet while the other is used for telephone calls. ISDN-PRI offers twenty-three 64 kbps B channels and one 64 kbps D channel. Like ISDN-BRI, ISDN-PRI uses telephone switches. Both kinds of ISDN require call set-up and dial access time. Both types of ISDN are given further coverage in 4.7.4.
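The BRI and PRI channel counts can be tallied directly. The sketch below uses the rates given in the text; the 8 kbps framing figure added to the PRI total is standard T-1 overhead, not stated above:

```python
# ISDN channel arithmetic: BRI is 2B+D, PRI is 23B+D (rates in kbps).
B = 64        # bearer channel
D_BRI = 16    # BRI signaling channel
D_PRI = 64    # PRI signaling channel
FRAMING = 8   # framing overhead of the T-1 carrying a PRI

bri_total = 2 * B + D_BRI             # 144 kbps on the BRI interface
bri_bearer = 2 * B                    # 128 kbps usable when both B channels carry data
pri_line = 23 * B + D_PRI + FRAMING   # 1544 kbps, the T-1 line rate

print(bri_total, bri_bearer, pri_line)   # 144 128 1544
```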
Another kind of technology that is sometimes called ISDN is B-ISDN (Broadband ISDN). However, B-ISDN is not a circuit-switched ISDN technology but is a protocol for cell-switched ATM (Asynchronous Transfer Mode) technology that operates at speeds of 45 Mbps to 600 Mbps. A close relative of ATM, frame relay, is a frame-switched technology that operates at speeds of 64 kbps to 1.544 Mbps and above. Neither frame relay nor ATM is switched through the PSTN, though they may use copper or fiber from a telephone company (ALEC or ILEC) in the access level. Typically, however, if ATM is the transmission technology, copper can only be used for 300 yards or so on the CPE side. In such a case, access must be by fiber because even CAT 5 UTP cannot handle 155 Mbps longer than that distance [Thorne, 1997]. These technologies (associated with private data networking, but able to support voice) are covered in more detail in 4.8.
The next access technology in Table 4-9 is xDSL, the first of several dedicated circuit technologies. DSL stands for digital subscriber line, and the x stands for the more than ten varieties of DSL (such as ADSL, G.Lite, and RADSL). These specific forms of DSL will be discussed in section 4.7.3 with other dedicated circuits. Provision of DSL over the telephone plant requires the absence of bridged taps and loading coils [Paradyne, 1999]. Bridged taps (open ends) are left in many areas so that future telephone subscribers will have lines ready to tap into when new homes or businesses are built where growth is expected. Loading coils extend the reach of copper in sparsely populated areas on loops over 18,000 feet that do not have RAVs. More than twenty percent of all local loops have loading coils [George, 1998]. Additionally, many DSL varieties cannot be provided over DLC and RAV equipment.
Also explored in 4.7.3 are DS-0, fractional T, T-1, and T-3 technologies. These are the most common leased or dedicated digital circuit technologies in the United States [Tower, 1999]. One use of Ts is to carry multiple voice channels over a single twisted pair or a quad connection. Multiplexing techniques allow T-1s to carry 24 voice channels (each a 64 kbps TDM channel), while T-3s carry 672 voice channels.
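The multiplexing arithmetic works out as follows (a sketch; the 8 kbps of T-1 framing overhead is a standard figure not spelled out above, and the 44.736 Mbps T-3 line rate includes further multiplexing overhead beyond the raw channel payload):

```python
# T-carrier arithmetic: DS-0 voice channels multiplexed into T-1s and T-3s.
DS0_KBPS = 64      # one digitized voice channel
T1_CHANNELS = 24   # DS-0s per T-1
T1S_PER_T3 = 28    # T-1s per T-3

t1_payload = T1_CHANNELS * DS0_KBPS     # 1536 kbps of voice payload
t1_line = t1_payload + 8                # 1544 kbps with framing overhead
t3_channels = T1S_PER_T3 * T1_CHANNELS

print(t1_line)      # 1544 (the 1.544 Mbps T-1 rate)
print(t3_channels)  # 672 voice channels on a T-3
```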
Ts may be leased from ILECs or ALECs to access the PSTN for local telephone service (local T). Ts also are leased from an IXC for long-distance service, or from an ILEC, ALEC, IXC, or ISP to link corporate data networks nearby or overseas. Ts are also leased from the ILEC or an ALEC and terminated at an ISP for Internet access. T-1s require conditioned lines; when certain signaling technologies are used, repeaters are necessary to regenerate signals every 3,000 to 6,000 feet, with less than 3,000 feet between the demarcation point and the first repeater [Paradyne, 1999, p. 6]. Newer Ts use an HDSL signaling scheme over quad copper for the full 1.544 Mbps [Sheldon, 1999, p. 942]. HDSL allows two to four times the distance of a conditioned T-1 (AMI) circuit.
T-3s carry DS-3 signals over copper or copper-coax as well, though they are now more likely to be carried by fiber optic cable. A T-3 is the equivalent of 28 T-1s, with a total of 672 voice channels and a data rate of 44.736 Mbps. T-3s serve purposes similar to those of T-1s, including voice, data, and Internet access. T-3s are typically leased from ILECs and ALECs, but when Internet access is involved, ISPs also play a role, as discussed in 4.9.1. Presently, T-3s are not often used as access technologies but are usually used as transport or backbone circuits (mentioned in 4.9.4). However, as agribusinesses require increasingly greater network capacity (especially as voice, data networking, and Internet access use a single, converged hypercommunications pipeline), T-3s will become more common as access technologies.
4.3.3 Cable TV Infrastructure and Dark Fiber Networks
The telephone infrastructure is not the only way for agribusinesses to access hypercommunication networks. The cable TV plant and dark fiber networks of other potential hypercommunication carriers are additional ways. Cablecos now are focusing on introducing a variety of hypercommunications services primarily for residential suburban Florida. The list of cable services has grown from basic and extended choice television into pay television, audio programming, local telephone service, and high-speed cable modem Internet access.
Electric utilities also have rights-of-way and fiber capabilities over which hypercommunication services are offered. Electric utility fiber is often called "dark fiber" because of the enormous unused capacity in electric company fiber optic networks. There are estimates that as much as 50% of all fiber networks are overbuilt and, hence, dark [Gong and Srinagesh, 1997]. Often, this capacity is sold wholesale to telecommunications carriers. In some cases, such as GRU (Gainesville Regional Utilities) and TECO (Tampa Electric), electric companies offer data and other hypercommunication services directly to Florida businesses. In some areas, railroads and other firms also have dark fiber capacity.
This section reviews the ability of the cable TV infrastructure to provide hypercommunications access to agribusinesses in Florida. However, generalizations are difficult because of considerable technological variation among cablecos in various parts of Florida. In part, the variations in infrastructure are due to the size of cablecos. Cable systems range from those with millions of subscribers (such as MediaOne and TCI, both owned by AT&T, and @Home, owned by Time-Warner-AOL) to franchises owned by a single person that serve a few hundred households.
Several observations are in order. First, cablecos have traditionally focused on serving the residential market, though this matters less for agribusinesses since many are residences (farms) or are operated in or near residential areas. Second, not all cable systems have upgraded their equipment to provide data and voice services. Third, many rural areas are not served at all by cable television and may never be. Small-town cable service is usually available, but conditions vary as to how far out of town the cable travels, if at all. Cable TV is a monopoly franchised by cities, counties, and other geographic areas such as subdivisions, with specific service boundaries. Traditionally, state and federal regulations have not equated cable service with telephone service in terms of being a social necessity. Thus, only in localized franchise areas has an attempt been made to provide universal service to all locations.
The main reason cableco networks have become competitive with telco networks is that cablecos have lower costs of upgrading their infrastructure from coax to fiber than telcos do upgrading from copper to fiber. Furthermore, cablecos serve local franchises while ILECs are required to serve all reasonable locations in their larger coverage areas. Estimates differ dramatically on the cost per mile of replacing the local copper telephone loop with fiber.
Upgrading analog copper to FTTH (Fiber to the Home) systems capable of supporting two-way broadband services costs an average of $1,500 to $2,000 per urban ILEC subscriber [Egan, 1996, p. 152]. However, since BellSouth already has a $10.4 billion wireline network infrastructure investment in Florida and an $800 million annual construction budget in the state, it cannot afford to completely replace access copper with fiber [Ackerman, 2000]. Cable TV providers often have significant cost advantages over ILECs when it comes to upgrading local loops with hybrid copper-fiber installations known variously as FTTH, FTTC (Fiber to the Curb), and FTTN (Fiber to the Neighborhood). Though still expensive, such upgrades cost cablecos roughly one-fifth of what telcos pay. It will take Cox Cable five years and $400 million to rebuild the 60,000 miles of Phoenix, Arizona's cable system (which passes 1.12 million homes), at $357 per subscriber [Woodrow, 1999].
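The Cox figures above are internally consistent, as a quick check shows (this assumes "per subscriber" in the source means cost per home passed):

```python
# Per-home cost check for the cited Phoenix rebuild.
rebuild_cost_usd = 400_000_000   # five-year rebuild budget
homes_passed = 1_120_000         # homes the rebuilt plant passes

cost_per_home = rebuild_cost_usd / homes_passed
print(round(cost_per_home))      # 357, matching the cited $357 figure
```

At $357 per home passed, the cableco upgrade is indeed roughly one-fifth of the $1,500 to $2,000 per-subscriber FTTH cost cited for ILECs.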
Figure 4-26 shows the setup for four typical cableco infrastructures. Note the head end on the left. It is similar to the telephone company CO in that it is a physical office where cable company equipment is located and where the cable runs for an area converge. However, the head end is also linked to two transport networks (the PSTN and the Internet) and is connected to satellite equipment so TV signals may be downloaded. In digital cable systems, the head end is also equipped for digital cable TV signaling.
Regardless of which distribution scheme is used, the head end must be equipped to support Internet and PSTN as depicted. From the head end, a fiber trunk (with power sources required) leads to a node that serves from 500 to 5000 locations. From the node, there are four distribution schemes shown.
The first distribution scheme serves subscribers (s) through distribution coax (top), with amplifiers required to boost the signal enough that it reaches the tap point, from which lower-capacity coax is dropped to each home or business. The first distribution scheme may use 350 MHz coax, so a maximum of 58 TV channels may be received along with download-only Internet access. Upload Internet access is via modem over standard analog telephone lines, with downloads accomplished through a one-way internal cable modem such as a SURFboard.
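The 58-channel ceiling follows from channel widths (a sketch; it assumes the standard 6 MHz analog TV channel and ignores any spectrum reserved for other uses):

```python
# Channel-count sketch for a 350 MHz analog coax system.
SYSTEM_BANDWIDTH_MHZ = 350
CHANNEL_WIDTH_MHZ = 6   # one analog TV channel

max_channels = SYSTEM_BANDWIDTH_MHZ // CHANNEL_WIDTH_MHZ
print(max_channels)     # 58
```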
The second distribution scheme (bottom, Figure 4-26) is FTTN (Fiber to the Neighborhood). Two-way broadband services could be provided to these subscribers, with fiber optic cable running to a neighborhood pedestal where coax cables distribute amplified signals to homes. A lower capacity (un-amplified) coax leads from the tap point to the service connector(s) within each subscriber's location. FTTN supports two-way services including Internet, enhanced telephony, and digital cable. Over 100 TV channels may be received via FTTN as well.
The third and fourth distribution schemes, FTTC (Fiber to the Curb or Fiber to the Cabinet for small businesses) and FTTH (Fiber to the Home) are shown on the right of Figure 4-26. FTTC (on the bottom right) uses fiber optic cable to reach curbside tap points and then uses un-amplified drop coax from there to the service point inside each location. Under FTTH (top right), low capacity fiber is run from a neighborhood pedestal to each home. Both FTTH and FTTC can support two-way broadband services including digital cable TV, VOD (Video on Demand), data networking, Internet, and enhanced telephony. Over 120 TV channels may be received using FTTC, with a specialized set top box capable of preprogramming and downloading pay-per-view VOD and pay channels as well as displaying specialized system information.
In FTTN and FTTC, data connections are supported with ordinary 10BaseT Ethernet cards, and connections similar to an RJ45 lead from CPE DTE to on-premise coax and then to tap points. A two-way cable modem (such as the LAN City LANtastic or Toshiba PCX1100) is used with FTTN and FTTC systems. Current FTTH connections use coax inside the home, but fiber connections to desktop CPE are being introduced. Eventually, FTTH will support bit rates many times that of VDSL (up to 200 Mbps) [Schlegel, 1999].
The capacity of many older analog coax cable systems is 350 MHz and below. However, the latest HFC (Hybrid Fiber Coaxial) voice, data, and TV systems run at 750 MHz and more. A 6 MHz downstream channel capable of 27-36 Mbps is shared among an average of 500 to 2,000 users per node [George, 1998, Table 2-1]. The upstream link of 2-5 Mbps is also shared with other users on the node. However, it would take a critical mass of over 200 simultaneous users per node for a significant degradation of up- or downstream data rates to be experienced with the broadband shared access approach [Reed, 1997].
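The shared-access economics can be sketched from the cited ranges. The figures below model a hypothetical worst case in which every subscriber on a node transfers data simultaneously; in practice only a fraction are active at any moment, so realized rates are far higher:

```python
# Worst-case per-user share of a 27 Mbps downstream HFC channel.
downstream_mbps = 27   # low end of the cited 27-36 Mbps range

for users in (500, 2_000):   # cited range of subscribers per node
    share_kbps = downstream_mbps / users * 1_000
    print(f"{users} users: {share_kbps:.1f} kbps each")
```

Even the 500-user worst case (54 kbps each) roughly matches a V.90 modem, which is why degradation only becomes noticeable past a critical mass of simultaneous users.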
Because of the shared access broadband signaling inherent in cableco access, it is often not considered a business-class service. Connections can support only up to five computers per location, and there are certain security concerns that require defensive cableco installation. Only a few telephone lines per customer can be supported because of the upstream limitations. However, FTTH applications are being developed that are expected to improve upstream capacity and make cable more attractive as a communications choice for business.
The availability of agribusiness cable access depends on several factors. First, in many areas, the infrastructure does not support even one-way cable modems; service depends on the infrastructure level the cableco has implemented. Second, the more fiber is used, the more expensive the intermediate DCE required to support the network. FTTH requires that subscribers buy an expensive fiber optic transceiver known as an ONU (Optical Network Unit). With FTTC and FTTN, fiber optic equipment costs are spread among groups of subscribers. Third, telephony implementations are still in introductory stages, so service interruptions and other problems may be common in some areas. Finally, the ability to support the cost of the infrastructure depends vitally on per-subscriber revenues. In low-density areas, costs per subscriber may be too high to extend service, and the cableco is rarely required to do so. However, cablecos tend to have better overall infrastructures than telcos for offering cheap, high-speed service in densely populated areas. Telcos have begun to offer cable TV and high-speed hybrid fiber-coax services on a limited basis to compete with cablecos. Eventually, cablecos will support multiple telephone lines for small businesses.
Dark fiber networks provided by other carriers such as electric utilities and railroads have the same access and distribution problems that cablecos do. However, the electricity companies have an additional problem caused by the interference of electricity itself. Nortel and other companies have been experimenting with powerline networks where data can be carried within an office through electrical outlets at speeds of up to 1 Mbps, but a variety of technical problems remain. Currently, electric and other dark fiber carriers tend to wholesale their fiber capacities to transport-level carriers and customers rather than attempt to establish access networks for smaller, local customers. Capacities can range from 45 Mbps to over 100 Mbps, though such speeds are available on a limited basis.
Before moving to transport wireline technologies, it is important to characterize the symmetry and data rates of the wireline access technologies covered thus far in 4.3. When the data rate differs between upload and download (as it does for DSL and V.90 modems), the technology is said to be asymmetric. In some cases this is due to asymmetric bandwidth, when the upload capacity (in kHz, MHz, etc.) is smaller than the download capacity. The asymmetry is also linked to line codes that take advantage of the fact that, for applications such as the Internet, more bits are downloaded than uploaded.
Figure 4-27 compares the data rate symmetries for a voice line with V.90 modem, ISDN-BRI, xDSL, ADSL, T-1, and cable modems. The first column in Figure 4-27, an analog voice line, is asymmetric in that for the 56 kbps modem, download can occur at rates up to 56 kbps, while upload can only approach 33.6 kbps. Similarly, the two forms of DSL shown are asymmetric. However, ISDN-BRI, T-1, and ISDN-PRI (not shown, but with the same data rates as T-1) are symmetric. Cable refers to the data rate of cable modems (covered in 4.3.3).
4.3.4 Data and Voice Transport, Fiber Optic Backbones
Larger agribusinesses with advantageous locations may be able to gain direct access to the transport network without having to use the access level at all. While this solution is currently an expensive one, it provides the greatest bandwidth possible. Fiber optic connections may be available from ILECs, ALECs, IXCs, cablecos, ISPs, and electric companies. As fiber miles increase and as DWDM increases the capacity of existing fiber optic connections, prices are expected to fall while supply shifts right. However, the results are uncertain.
SONET (Synchronous Optical Network) is an ECSA (Exchange Carriers Standards Association) ANSI standard for fiber optic networks. SONET defines optical carrier (OC) transport standards and STS (Synchronous Transport Signaling) for a hierarchy of fiber optic networks, primarily above the physical layer [Nortel, 1996]. While SONET is covered in more detail in 4.7.3 (as a dedicated circuit technology) and in 4.8.4 (as a private data networking technology), two important features need discussion here.
First, SONET is an "on net" wireline technology that requires direct connection to a backbone network to support a full range of hypercommunication services. Second, SONET is fast, specifying data rates from 51.84 Mbps (STS-1, OC-1) to 9.953 Gbps (STS-192, OC-192). Clearly, with these kinds of speeds, the agribusinesses that presently require SONET "on net" fiber optic connections would be large indeed.
The smallest fiber optic backbone connection is a T-3, the equivalent of 28 T-1s with a total of 672 voice channels and a data rate of 44.736 Mbps. By comparison, an OC-192 has over two hundred times the capacity of a T-3, with a minimum number of voice channels near 150,000. Specialized ONUs, smart building wiring, and other equipment are also needed to support fiber optic access/transport.
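The OC-192 comparison can be verified from the rates given above (a sketch; the voice-channel estimate simply scales a T-3's 672 channels by the data-rate ratio):

```python
# Capacity ratio of OC-192 to T-3, using the rates cited in the text.
T3_MBPS = 44.736
OC192_MBPS = 9_953.0       # 9.953 Gbps
T3_VOICE_CHANNELS = 672

ratio = OC192_MBPS / T3_MBPS
voice_channels = ratio * T3_VOICE_CHANNELS

print(round(ratio))          # ~222 times the capacity of a T-3
print(round(voice_channels)) # ~150,000 voice channels
```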
4.4 Wireless Transmission Technologies
Wireless transmission technologies are particularly important for Florida agribusinesses for three chief reasons. First, over 48 million Americans have jobs that require that they be mobile much of the time [Zaatari, 1999, p. 135]. It is likely that this is especially true for the agribusiness sector because of the land-based and international aspects of agriculture. A second reason for the importance of wireless technologies is that wireless can offer cheaper and faster rural access to hypercommunications in unserved areas than wireline. While many promising wireless technologies are being introduced in urban Florida, the advantages of using wireless over wireline technologies have been strongly supported by studies concerning rural access to advanced hypercommunication networks [NTIA, Survey of Rural Information Infrastructure Technologies, 1995, p. 5-8, p. ix].
An understanding of wireless is important to agribusiness for a third reason: wireless technologies are the fastest-growing segment of telecommunications today. According to Core Exchange, Inc., almost eighty percent of U.S. business Internet users will use wireless devices to access data in 2001, up from three percent in the year 2000 [AIM Research Update, 3(8), 2000]. The popularity of wireless may be related to several factors including falling prices, rising QOS, and an array of new services and devices. For example, from December 1997 to January 1999, cellular prices fell at an annualized rate of 8.4 percent while local (wireline) services increased by 2.2 percent. The overall CPI increased by 1.9 percent in the same period [FCC, June 24, 1999, FCC 99-136, p. 22-23].
Wireless technologies are easily confused with the services they deliver. Generally, most of the specific service sub-markets described in 4.6 through 4.9 are available through either guided media (wireline) or unguided media (wireless). Wireless technologies specifically designed to provide a particular service (for example cellular telephone) are given more coverage in the section concerning that service's sub-market. For example, specific wireless technologies used to deliver Internet access (4.9) or enhanced telecommunications services (4.7) are discussed in the sections concerning those sub-markets, along with their wireline counterparts.
This section focuses on wireless transmission technologies in general, especially on the access level. The introduction to electromagnetic spectra in 4.4.1 provides a foundation to understanding wireless technologies. Some examples of wireless spectra (transmission media) include: infrared, laser, microwave, and radio. Each medium (along with variations within that medium) uses a different spectral range constrained by a set of properties that depend on the physics of the electromagnetic waves in that range. Terrestrial wireless technologies (4.4.2) use earth-based wireless equipment to beam communication signals to fixed, nomadic, or mobile users. Satellite wireless technologies (4.4.3) use space-based equipment to beam communication signals to fixed, nomadic, or mobile ground-based receivers. Finally, wireless QOS is addressed in 4.4.4.
The hurried reader may be able to understand the main wireless technologies by consulting Table 4-10 (terrestrial mobile technologies), Table 4-11 (terrestrial fixed technologies), and Table 4-12 (satellite fixed and mobile technologies). Table 4-14 summarizes the QOS concerns related to wireless technologies.
The ability of wireless technologies to provide access to services that previously could only be accessed via wireline technologies led Stone to conclude, "the once sharp division between wire and wireless has now closed. The markets have converged" [Stone, 1997, p. 155]. Nonetheless, there are two important divisions in wireless technologies. The first division depends on the type of transmitter (terrestrial or satellite). However, wireless technologies and services have a second division based on how much the user's location varies. Wireless users may be mobile, nomadic, or fixed. A mobile user is one who travels (often by car) from one point to another within a local coverage area or roams across the state, nation, or world. Nomadic users stay in one localized area of up to a few miles, but require the ability to roam (usually by foot) within that area. A fixed user uses wireless service at one permanent location so that receiving points are stationary, similar to wireline users [Weinhaus, Lagerwerff, Brown et al., 1999].
Importantly, wireless transmission over the last mile may connect to wireless or wireline CPE at the agribusiness. Similarly, wireline equipment over the last mile may connect to wireless or wireline equipment. Specifically, Figure 4-28 shows a wireless QOS reference model.
Figure 4-28 demonstrates that, like wireline, wireless transmission has three levels: local CPE, access level, and transport level. When the local level is wireless, a wireless LAN or a local nomadic technology (such as a cordless telephone) is being used. For example, LAN wiring may be replaced with infrared signals that create a wireless path from end server (stationary LAN) to nomadic or fixed client devices. When the access level is wireless, a wireless path from customer antenna to wireless POP (base station) is required. For example, the cellular telephone creates a wireless path from a mobile phone to a base station to replace the wireline local loop from subscriber to CO. Finally, if the carrier network is wireless, a wireless transport technology is used. Bulk transmissions of telephone calls commonly use microwave or satellite paths to replace fiber optic cables. As in the case of wireline transmission, wireless transmission technologies may be point-to-point or multipoint. Most applications of wireless technologies involve hybrid wireline-wireless networks because some levels use wireless technologies to establish wireless communication paths, while others rely on wireline technologies to create wireline communication links.
The role of DCE and DTE differs from the wireline case. Many wireless devices are both DTE and DCE, with DAC and ADC accomplished through circuitry in the transceiver. A receiver and a transmitter (transceiver) are needed when air is used instead of wire for two-way communications. To transmit successfully, a threshold signal power is needed, while to receive, some form of antenna is required. Wireless communications has two "power challenges". First, path loss occurs when an antenna transmits a signal over air. Unlike guided media, where the entire signal (less attenuation) is received on the other end, the electromagnetic waves of unguided media are scattered through the air on their trip to and from the receiver. Second, power constraints (such as battery life or interference limitations on signal wattage) may prevent devices from transmitting a strong signal [Committee on Evolution of Untethered Communications, NRC, 1997].
Before proceeding to the electromagnetic spectrum discussion in 4.4.1, a short explanation of the generations of wireless technologies puts path loss and power constraints in the appropriate technological context. Most sources argue that there are three generations of wireless technologies, with a fourth yet to be created [NSF, 1998; Zaatari, 1999]. According to Zaatari (1999), in the first generation, analog transmission predominated. AMPS (Advanced Mobile Phone Service) and early CDMA (Code-Division Multiple Access) are two mobile wireless technologies from this era. The CDPD (Cellular Digital Packet Data) standard was established to allow an operational bit rate of 19.2 kbps over the analog AMPS systems, though actual data rates are closer to 5-10 kbps [Telecommunications Engineering Centre, 2000]. Most fixed services (such as microwave relay or fixed satellite) were used only by telephone carriers or TV stations since the technology that supported them was expensive and few other uses existed for high-bandwidth links.
In the second generation, FDMA (Frequency-Division Multiple Access) originated as digital cellular became a more common wireless service. It was the first of many wireless technologies that allowed multiple mobile users to share the same spectrum. The 200 kHz GSM (Groupe Speciale Mobile) radio band was opened (using a European technology), digital-only service was introduced in the 1900 MHz PCS band, and dual-mode analog cellular technologies began to operate near 800 MHz. Data rates rose to 9.6 kbps and 14.4 kbps.
A scarcity of spectrum limited the extent of the market, so most second generation wireless technologies concentrated on the efficient use of already assigned spectra by increasing the capacities of existing (first generation) DCE. In the US, TDMA (Time-Division Multiple Access, IS-54 & IS-136) was developed, allowing three times the traffic of its first generation predecessor, AMPS. The European TDMA standard and new iterations of GSM increased existing AMPS capacity by a factor of eight. Another US standard, CDMA (IS-95), allows up to ten times AMPS capacity using existing spectra. Competing carriers now use all three in the United States. Fixed and nomadic wireless technologies supported either limited common carrier offerings or private, high-speed digital TV and telephone carrier transport.
In the third generation, three competing standards (backward compatible with existing networks) increase mobile and nomadic data rates even further and allow deployment of new services. These three are W-CDMA (Wideband CDMA), CDMA-2000, and UWC-136 (Universal Wireless Communications). Third generation wireless technologies will (or are already able to) support Internet access, worldwide roaming, and video conferencing in addition to enhanced wireless voice for mobile users [Zaatari, 1999, p. 133]. All three support mobile data rates of 384 kbps (extended range) and 2 Mbps (mobile, nomadic, or fixed, local range). The three competing standards are assigned frequencies in the Broadband PCS and 2 GHz bands.
Another group of third generation technologies serves fixed wireless users, offering greater data rates. A promising source of wireless hypercommunication technologies came in the area of wireless "cable" and fixed broadband wireless access. Wireless cable primarily comprises MDSs (Microwave Distribution Systems) such as MMDS (Microwave Multipoint Distribution System) and LMDS (Local Multipoint Distribution System). Wireless cable was developed initially to supply cable TV programming to rural locations, thus allowing competition with wireline cablecos. MMDS is the wireless equivalent of broadband communications, offering cable TV, Internet, basic telephony, and enhanced telecommunications services.
New technologies and devices have been developed recently to allow LMDS and MMDS to carry two-way data and multiple voice lines. Satellite DBS (Direct Broadcast Systems) offer high-speed Internet access. Wireless LAN technologies allow local CPE networks to function without wires so network users can move about a business's location with portable computers, telephones, and other devices.
In Figure 4-29 [Adapted from NSF, 1998], the four generations are depicted graphically, summarizing where current wireless transmission technologies fit compared to wireline and future wireless generations. The bottom x-axis depicts the transmission rate (operational bit rate), while the y-axis depicts the degree of mobility. To make comparison easier, at the very bottom (spanning all transmission rates) are boxes containing wireline conduit types.
The capstone of the third generation will almost certainly be high frequency, fixed wireless technologies that allow two-way access to a full range of hypercommunication services. Concentrated in the SHF (Super-High Frequency) and EHF (Extremely-High Frequency) bands from 3 GHz to 39 GHz, third generation services include DEMS (Digital Electronic Message Service), Ka-band FSS-GSO satellite, Big LEO satellite, LMDS, and Winstar's 39 GHz WLL. Third generation fixed wireless technologies are beginning to provide high-speed services rivaling fiber optic data rates. Power, antenna, and tower relay technologies for some of these services are still under development.
However, the NSF Tetherless T-3 Workshop Report found that the third generation "vision of providing any service to any user at any time anywhere on earth will be only partially achieved by the current technology" [NSF, 1999, p. 2-13]. Hence, the development of a fourth generation of wireless technologies is envisioned as necessary for wireless to become a truly mobile, high-speed worldwide method of hypercommunications. Bold isoquant-like curves separate generations in Figure 4-29, with third generation technologies expected to evolve from their introductions in 2000 until the fourth generation begins, sometime near 2010.
4.4.1 Technical overview of electromagnetic spectra
Further clarification of the electromagnetic spectrum will help make the brief, technical introductions to terrestrial (4.4.2) and satellite (4.4.3) technologies more understandable. It is important to note that 4.4.2 and 4.4.3 build on the material presented here in 4.4.1. In this sub-section, the purpose is merely to outline the spectral properties of frequencies associated with wireless technologies. More details about each technology and the services it supports will be found in the sub-sections following.
Physicist James Clerk Maxwell addressed the Royal Society in 1876 with a revolutionary paper showing the existence of the electromagnetic spectrum. Maxwell found that "Electromagnetic fields could be propagated through space as well as conductors" and "light itself was electromagnetic radiation within a certain wavelength" [Stone, 1997, p. 131]. By 1894, English physicist Oliver Lodge had developed systems for sending and receiving wireless signals. However, he could not see any commercial applications. Guglielmo Marconi obtained the first radio patent in 1896 and demonstrated radio's applicability to navigation. Then in 1901, Marconi transmitted an "s" in Morse code from Cornwall, UK to Newfoundland. Marconi's success led to a scientific race to explain the results, ending in the postulation that layers of the ionosphere refracted short, medium, and long waves to earth [Stone, 1997]. Since that time, as scientists have created new wireless technologies, commercial application has waited for FCC assignment of frequencies and bandwidths for new services, for the perfection of DCE and DTE, and for the launch of new services.
Now, the electromagnetic spectrum consists of:
The range of frequencies of electromagnetic radiation from zero to infinity. . . . formerly divided into 26 alphabetically designated bands. The ITU formally recognizes 12 bands from 30 Hz to 3000 GHz. New bands, from 3 THz to 3000 THz, are under active consideration for recognition. [FCC, 1996, p. E-10]
Wavelengths of electromagnetic spectra are measured in meters, while frequencies are counted in Hertz. Based on both measures, groups of frequencies are categorized into bands. The frequency of a signal is a function of the wavelength of the carrier. If f is frequency in MHz and w is wavelength in meters, then w = 300/f and f = 300/w. The wavelength is important in determining the size of the antenna.
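This rule-of-thumb conversion can be sketched in a few lines (Python used purely for illustration):

```python
# Rule-of-thumb conversion between carrier frequency and wavelength,
# from the text: w = 300/f and f = 300/w (f in MHz, w in meters),
# a restatement of c = f * w with c ~ 3e8 m/s.

def wavelength_m(freq_mhz: float) -> float:
    """Wavelength in meters for a carrier frequency given in MHz."""
    return 300.0 / freq_mhz

def frequency_mhz(wavelength: float) -> float:
    """Carrier frequency in MHz for a wavelength given in meters."""
    return 300.0 / wavelength

# Examples from the chapter: the VHF band starts at 30 MHz (10 m),
# and the tropical short-wave limit of 3.950 MHz is about 76 m.
print(wavelength_m(30.0))            # 10.0
print(round(wavelength_m(3.950)))    # 76
```

The inverse relationship explains why antenna sizes shrink as frequency rises: a 39 GHz WLL wavelength is under a centimeter, allowing the small flat-panel antennas discussed later in this section.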
In wireless transmission, bandwidth has a broadcasting orientation similar to how radio and television channels are defined. Barden defines the bandwidth of a broadcast signal as: "Total width of a radio signal, varying from a few hundred Hz for CW to five or six MHz for TV" [Barden, 1987, p. 109]. In general, the wider the bandwidth, the greater the interference. The potential of wireless is especially apparent when the bandwidth of an analog telephone line (3.1 kHz) is compared to WLL upper-band services that reside at 39 GHz, over twelve thousand times larger.
However, the requirements for receiving and sending high-speed wireless hypercommunications traffic vary according to the frequency band, bandwidth, availability of inexpensive equipment, equipment power, antenna type, terrain, and other factors specific to the spectrum used by a particular device. Interference or noise may result from rain, lightning, wind, outer atmosphere conditions, sunspot activity, solar flares, mountains, ground clutter, soil conductivity, interference from other devices, and due to the quality of filtering mechanisms in receiving appliances.
In the US, the FCC allocates spectrum among several groups: scientific interests, governmental and military users, broadcasters, amateur and private operators, and common carriers. Figures 4-30 and 4-32 through 4-33 show some of the more important spectrum assignments resulting from recent FCC licensing, auctions, and legislation. Assignments are far more detailed than those shown [NTIA, 1997; FCC, 1996; Weinhaus, Lagerwerff, and Brown, 1999]. Some wireless services require the action of local governments as well to cross public and private rights-of-way or to compete with wireline cableco franchises [Matheson, 2000].
The lower frequencies are familiar since they contain the AM radio, FM radio, and VHF and UHF TV bands. Indeed, most first generation and a good part of second generation wireless traffic is carried in the VHF (Very High Frequency) band and the sub-microwave part of the UHF (Ultra High Frequency) band, the range shown in Figure 4-30. Beginning at the left are the AM radio and tropical short-wave bands. The so-called tropical short-wave bands (below 3.950 MHz, wavelength 76 meters) are allocated to countries where interference-causing thunderstorm activity is common. These frequencies suffer less atmospheric noise from lightning bursts and static. Furthermore, since low-power transmitters can be used at these frequencies, radio signals can cover a greater area relatively inexpensively.
Moving to the right of Figure 4-30, other short-wave frequencies are found. It is possible to communicate via SSB (Single Side Band) radio in short-wave bands for thousands of miles to land or sea with specialized radio equipment. Motorola offers radios capable of sending data, voice messages, faxes, and e-mail using SSB paths. Equipment is expensive and interference can be great, so there is no guarantee traffic will get through to the destination.
The VHF band begins just above the CB (Citizen's Band) at 30 MHz (wavelength, 10 meters). Below 30 MHz, atmospheric noise is plentiful while attenuation is low, meaning short-wave radio signals can be transmitted thousands of miles. Atmospheric noise drops at 30 MHz, but from 30 to 300 MHz, cosmic noise occurs and terrestrial signals cannot be transmitted beyond 50 to 120 miles, depending on terrain. VHF-TV, FM radio, and Little LEO technologies are used to communicate over frequencies in this spectrum. Little LEO systems are used for paging and low-speed data transmissions such as for POS (Point-of-Sale) devices.
From 300 MHz and up, line-of-sight wave propagation occurs [Committee on Evolution of Untethered Communications, NRC, 1997]. While short waves bounce off layers of the ionosphere to travel hundreds of miles, meter-long (UHF) waves and 0.3 meter microwaves normally do not travel beyond 60 km (37 miles), the line of sight from transmitter to receiver [NTIA, 1995]. As frequency rises, coverage distances shrink, though this phenomenon is more pronounced in mobile communications.
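The line-of-sight constraint can be made concrete with the standard radio-horizon approximation, d ≈ 3.57(√h_t + √h_r) km with antenna heights in meters. This is a textbook rule of thumb (based on a 4/3-earth-radius refraction model), not a formula taken from the sources cited here:

```python
import math

# Approximate radio line-of-sight distance ("radio horizon") between
# two antennas, using the common 4/3-earth-radius rule of thumb:
#   d_km ~ 3.57 * (sqrt(h_tx) + sqrt(h_rx)), antenna heights in meters.
# Illustrative only; not a formula from the NTIA or NRC sources.

def radio_horizon_km(h_tx_m: float, h_rx_m: float = 0.0) -> float:
    return 3.57 * (math.sqrt(h_tx_m) + math.sqrt(h_rx_m))

# A 100 m tower to a 10 m rooftop antenna reaches roughly 47 km,
# comfortably inside the ~60 km line-of-sight bound cited in the text.
print(round(radio_horizon_km(100.0, 10.0)))   # 47
```

The square-root dependence on height explains why taller towers yield diminishing returns, and why networks of overlapping base stations become the preferred way to extend coverage.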
Figure 4-31 shows an example of the relationship between the coverage zone of a single transmission tower and an increase in frequency, using a mobile radio simulation done by NTIA's engineering arm, ITS (Institute for Telecommunications Sciences) [NTIA, 98-349, 1998].
This physical relationship was behind the development of research on cellular and other mobile wireless coverage. To increase the coverage area for a given frequency, power must be boosted, antennas made taller, or an interlocking network of base stations established (each with overlapping coverage zones), or some combination of the three tactics taken. By using a combination of methods, the footprint of a particular wireless carrier (the sum of its individual coverage zones) can grow.
The FCC represents another limitation on coverage zone sizes and footprint areas because it regulates power levels and frequencies used by most wireless spectra. Specific boundaries are discussed in 6.1, but most frequency allocations are assigned on a market by market basis through open auctions or existing licensing regulations.
However, as frequency increases, signals cannot reach places they would have at lower frequencies even with the same signal power and antenna height. Hence, more towers are needed, more closely spaced together as frequencies rise. The irregular shape of the coverage zones in Figure 4-31 is due to terrain differences, water bodies, forested land, and other factors. With fixed services, special gain antennas can be aimed directly at the transmitter, resulting in larger coverage zones (at equal frequencies) than for mobile technologies.
Since satellite signals do not contend with (land-based) horizontal interference because line of sight is defined from outer space, a satellite can cover a vastly larger area than a terrestrial transmitter on the same frequency. However, satellite transmitters need more power to transmit while satellite dishes (designed as cones to increase the gain or signal power) have to be aimed at certain angles to pick up (and transmit) signals from and to outer space. In addition to frequency, antenna type, antenna direction, user mobility, and the specific terrestrial or wireless technology used affect the size of the coverage zone.
Most first generation and some early second generation wireless traffic takes place within the UHF (300 MHz) to 1 GHz (microwave) frequencies shown on the right side of Figure 4-30. Most prominent are UHF-TV and two forms of cellular service: IMTS (analog cellular) and CMRS (digital cellular). The SMR (Specialized Mobile Radio) bands use iDEN™ technology to provide a combination of radio and cellular communications to customers. Narrowband PCS telephones and pagers use this part of the spectrum, as do fixed telemetry (remote sensing) devices. The BETRS (Basic Exchange Telephone Radio Service) rural radiophone service is also found here. BETRS offers small, remote communities the opportunity to use a radio transport path for telephone transport when wiring or cable is expensive compared to the subscriber base, impractical, or environmentally unsound. Around one hundred telephones on Dog Island, a small island on the Gulf coast near Carrabelle, use this service to connect to the PSTN. ISDN-BRI is available through BETRS [FPSC, Docket 950814-TL, Order PSC-97-1196-FOF-TL, 1997].
Moving up, frequencies above 1 GHz have wavelengths so short that they are no longer called radio waves; instead, these high frequencies are called microwaves. Figure 4-32 depicts part of the microwave spectrum, from 1 GHz microwaves (300 mm wavelength) up to 10 GHz (30 mm) in the SHF band. Later second generation and third generation mobile technologies, along with early third generation fixed technologies, operate in this spectral segment.
Two mobile satellite technologies, big LEO (Low Earth Orbit) and GSO (Geosynchronous Orbit), use the spectrum between 1 GHz and the 2.45 GHz frequency (used by microwave ovens), providing telephony and narrowband data communication. Also in this range are broadband PCS technologies, along with two other terrestrial technologies. First, two technologies that support newly auctioned fixed and mobile services at 2 GHz are located there, as is a third technology, MMDS (Microwave Multi-point Distribution System). These three terrestrial technologies are now (or soon will be) capable of providing data rates up to 2 Mbps in localized coverage areas and below 300-400 kbps in extended coverage areas. However, some argue that these promised speeds are higher than reality. For example, mobile users of the PCS-1900 standard GPRS (General Packet Radio Service) will have bit rates in extended areas as low as 115 kbps because of the effect of motion on the radio path [Telecommunications Engineering Centre, 2000].
MSS (Mobile Satellite Service) technologies, fixed satellite, big LEO satellite, and additional point to multi-point terrestrial MMDS technologies have frequency allocations around 3 GHz. In the SHF band from 3 GHz to 10 GHz, several satellite technologies (GWCS or General Wireless Communications Service and other fixed satellite services) operate in frequencies near fixed WLAN (Wireless Local Area Network) technologies.
Figure 4-33 depicts the frequencies from 10 GHz to 59 GHz. From 10 GHz to 20 GHz a variety of satellite technologies share FCC-assigned commercial frequencies. These commercial frequency bands include broadband satellite Ku-band, DBS (Direct Broadcast Satellite), additional Big LEO frequencies, and several other fixed satellite assignments.
Enormous interest exists in the development and introduction of services that would use fixed terrestrial technologies operating in the 24 GHz to 39 GHz (upperband) microwave frequencies. Broadband fixed wireless technologies in the upperband range include DEMS, LMDS, MMDS, and WLL. Because wavelengths are smaller, antennas are small enough (15 by 15 cm) that large, expensive rooftop antennas are unnecessary. Upperband systems are not true line-of-sight systems because, at such short wavelengths, buildings or structures that might normally obstruct signals serve as passive or active repeaters, with signals bouncing off or around them [Telecommunications Engineering Centre, 2000].
Hence, while a form of line of sight propagation exists (with smaller coverage zones) above 30 GHz in the EHF band from transmitter to receiver, customers need not resort to large, roof-mounted antennas [Committee on Evolution of Untethered Communications, NRC, 1997]. However, there is a new set of limitations in the upperband. According to the NTIA, "beginning at about 10 GHz, absorption, scattering, and refraction by atmospheric gases and hydrometeors (the various forms of precipitated water vapor such as rain, fog, sleet and snow) become the important limiting factors for electromagnetic wave propagation" [NTIA, 1995, part 2, Ch. 8]. Other important concerns are the possible effects on human health of all microwave transmissions, especially those in the upperbands.
Above the range shown in Figure 4-33, lasers and infrared light are also used, mainly for WLAN applications. During every wireless generation, there has been a frequency shortage followed by new technologies (and regulations) that allow ever-higher frequencies to be harnessed for communications. Laser and infrared technologies are currently used for short distances, such as within buildings or from one building to an adjoining building for data networking. TV remote controls are familiar examples of infrared devices. Like remote controls, lasers and infrared signals require close, unobstructed ranges to operate.
Figure 4-34 shows how rain, fog, and atmospheric gases (such as water vapor) can limit signal strength (and hence the coverage zone of a single tower) as higher wireless frequencies are used [NTIA, 1995].
On the x-axis are frequencies from 5 GHz to 500 GHz, with special attention drawn to the upperband between DEMS and WLL (from 24 GHz to 39 GHz). The y-axis depicts horizontal path attenuation (dB/km), showing how much signal strength is reduced due to various rainfall rates, fog, drizzle, and atmospheric gases. If every 3 dB of signal loss results in the loss of 50 percent of a signal's remaining strength [Sheldon, 1998, p. 995], then the heavier the rain and the higher the frequency, the more likely the signal will suffer. The 3 dB threshold is reached just above 6 GHz for rain falling at a rate of six inches an hour and below 20 GHz for rainfall rates of one inch an hour. The threshold is reached within the upperband with rainfall rates of one-fifth of an inch per hour, but not for drizzle. Atmospheric gases cut signal strengths in half only at frequencies well above the 39 GHz band where WLL technology operates.
Upperband technologies can be engineered around these problems only through a site-specific, trial-and-error approach that uses early customers as experimental subjects. Nor is there scientific agreement about what kinds of reception problems can be expected. According to NTIA (1995) research, every km (3,280 feet) that an upperband signal travels through rain (falling at a rate of an inch or more per hour) can result in as much as a 75 percent signal reduction. Papazian et al. (2000) found that trees in wind (located near business wireless equipment) could be worse than rain in reflecting or blocking upperband signals.
More generally, path loss, shadow fading, multipath fading, and interference are all problems that affect terrestrial wireless transmission. Path loss equals received power divided by transmitted power and is a function of distance from transmitter to receiver [NRC, 1997]. More formally, path loss L is given by L = Pr / Pt = K / (f^2 d^n), where Pr is received power, Pt is transmitter power, f is the center frequency of the signal, d is the distance from transmitter to receiver, and n is the path loss exponent. K is a constant that depends on path loss at a reference distance d0 from the transmitter, or the antenna far field (approximately 1 m indoors and 0.1-1 km outdoors) [NRC, 1997, p. 2-34]. Hence, "received signal power is proportional to the transmit power and inversely proportional to the square of the transmission frequency and the transmitter-receiver distance raised to the power of a path loss exponent" [NRC, 1997, p. 2-34].
Given that the path loss exponent is 2 in free space, but ranges from 3 to 5 in typical outdoor environments, exceeding 8 with dense buildings or trees,
systems designed for typical suburban or low-density urban outdoor environments require much higher transmit power to achieve the same desired performance in a dense jungle or downtown area packed with tall buildings. [NRC, 1997, p. 2-34]
Path loss thus tends to rise as frequency increases, and rises with distance as well.
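A minimal numeric sketch of this path loss model follows (Python for illustration; K is set to 1 since only the ratio between two environments is compared, and the 2 GHz carrier and 1 km distance are assumed example values):

```python
# Relative path loss under the NRC model quoted above,
#   L = Pr/Pt = K / (f**2 * d**n),
# where n is the path loss exponent (2 in free space, 3-5 in typical
# outdoor environments, 8 or more amid dense buildings or trees).
# K is site-dependent and set to 1 here: only ratios are compared.

def path_loss(freq_hz: float, dist_m: float, n: float, k: float = 1.0) -> float:
    """Received-to-transmitted power ratio (relative units)."""
    return k / (freq_hz ** 2 * dist_m ** n)

f = 2.0e9                                  # a 2 GHz PCS-band carrier
free_space = path_loss(f, 1000.0, n=2)     # 1 km in free space
urban = path_loss(f, 1000.0, n=4)          # 1 km, typical urban exponent

# At 1 km the urban signal is weaker by a factor of d**(4-2) = 1,000,000,
# illustrating the NRC point about dense environments needing more power.
print(round(free_space / urban))   # 1000000
```

The f² term likewise shows why upperband (24-39 GHz) systems need closely spaced towers compared to sub-1 GHz cellular on equal power.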
The technical details of wireless transmission are subject to Shannon's Law, the Nyquist Theorem, and other physical laws governing analog-digital conversions and signal domain as discussed in 4.2. Gilder argues that the future of wireless is "boundless bandwidth, accomplished by the Shannon strategy of wide and weak signals, moving to ever smaller cells with lower power at higher frequencies" [Gilder, 1993, p. 11].
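Shannon's Law referenced above sets the channel capacity bound C = B·log2(1 + S/N). A small sketch (Python for illustration; the 30 dB SNR figure is an assumed example, not a value from the sources):

```python
import math

# Shannon's Law bounds the error-free capacity of a channel:
#   C = B * log2(1 + S/N), with bandwidth B in Hz and S/N a power ratio.

def shannon_capacity_bps(bandwidth_hz: float, snr_ratio: float) -> float:
    return bandwidth_hz * math.log2(1.0 + snr_ratio)

# A 3.1 kHz analog voice channel at an assumed 30 dB SNR (ratio ~1000)
# tops out near 31 kbps -- close to the practical limit of dial-up modems.
print(round(shannon_capacity_bps(3100.0, 1000.0)))   # ~30.9 kbps
```

The formula also illuminates Gilder's "wide and weak" strategy: capacity grows linearly with bandwidth B but only logarithmically with signal power, so broad, low-power channels are the more efficient route to high bit rates.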
In spite of such a bright future, there are three important differences between wireline and wireless transmission as currently provisioned that should be noted before specific terrestrial and satellite technologies are covered. The first difference is that many mobile wireless CPE devices contain DTE, DCE, and an antenna in one unit. For fixed wireless technologies, there is typically more separation. Figure 4-35 (based on NTIA 98-349, 1998, p. 2) shows the components of a general wireless technology, operating at the local network or access level.
DTE perform DSP (Digital Signal Processing), analog-digital conversion (when needed), and interact with the user via display and controls. DCE are transceivers with power settings and electronics components for particular frequencies that send and receive RF (Radio Frequencies), microwaves, infrared, or other wireless signals. Then, for most carrier services, signals are sent over antennas, typically to a base station or tower that has intermediate DCE and antennas of its own. If the opposite end is wireless, the process is reversed. Wireless mobile and nomadic telephones (and other mobile user devices) contain all three components in a single unit.
However, fixed wireless connections typically have the antenna separate from the DCE and DTE. When a computer is used for wireless access to a carrier network (or as WLAN station) it serves as the DTE. Typically, in fixed wireless the computer uses a separate DCE (a transceiver) and local conduit to transmit over the antenna. From the antenna, transmissions travel to a carrier tower located at a POP (base station) where they enter a wireline or wireless carrier network (transport level). In a point-to-point system, transmissions do not require a carrier's transport network, since they are beamed directly to another antenna at another location of the business. In the point-to-point case, all equipment may be owned or leased by the agribusiness.
Another way wireless technologies differ from wireline is in the distinction between baseband and broadband. In wireless, baseband is the raw audio or video signal before modulation and broadcast. Thus, the original frequency range before modulation onto a higher, more efficient range is the baseband. For example, most satellite headend equipment uses baseband inputs. The input signal is unfiltered receiver output with FM modulated audio and data sub-carriers. Wireless broadband devices process a signal that spans a relatively broad range of input frequencies.
4.4.2 Terrestrial Wireless Technologies
As has already been mentioned, terrestrial wireless communication may be mobile or fixed. The first and second generations of wireless technology emphasized mobility and lower frequencies since they sought to provide only basic wireless telephony and low-speed data communications. The third generation saw mobile technologies operate in higher frequencies (such as the 2 GHz PCS band) while coverage zones expanded, power needs fell, data rates rose, and more services were supported. Simultaneously, third generation fixed technologies made dramatic leaps into upperband frequencies (24-39 GHz), attaining data rates as high as wireline technologies so as to support a range of hypercommunication services including voice, video, high-speed data and Internet.
In this section, terrestrial (non-satellite) wireless technologies are briefly outlined. As with wireline technologies, almost every specific service named in sections 4.6 through 4.9 can be provided by terrestrial wireless technologies. The ability of terrestrial wireless technologies to serve as access paths for hypercommunication services depends mainly on signal frequency, user mobility, and the availability of appropriate antennas, DCE, and DTE. Table 4-10 details terrestrial wireless technologies used to support mobile and nomadic services.
Typically, mobile user devices include DTE, DCE, and an antenna in a single unit. Mobile services are provided by carriers using a series of overlapping coverage zones (cells) each of which is served by a tower attached to base stations. As subscribers travel in their carrier's local footprint, calls are passed from one cell to another. Subscribers may roam regionally or nationally and use their own (or another) carrier's network if compatible technologies are available in the roamed area.
Mobile wireless voice services are discussed in more detail in 4.7.5, while paging is covered in 4.7.6. Mobile wireless data networking is covered in 4.8.5. Carriers extend service to subscribers based on one or more of the technologies shown in Table 4-10.
Sources: FCC 99-136; Hewlett Packard, 1999; Rysavy, 1997; Weinhaus, 1998.
Table 4-11 lists several fixed wireless technologies that are currently or soon to be available in Florida. For each technology, the typical frequency and channel bandwidth, data rate, and services supported are shown. Since fixed terrestrial wireless technologies are works in progress, the table cannot convey more than a broad general categorization. Hence, the specific technologies listed in the table are often imprecise terms, based on a melding of traditional FCC definitions, proposed frequencies, experimental tests, and implementations by carriers.
Sources: REA, 1992; Bezar, 1995; Teligent, 2000; Rysavy, 1997; Molta and Irshad, 1999; Mandl, 1999.
The first fixed terrestrial wireless technologies are WLAN (Wireless LAN) technologies. The first type of WLAN is infrared WLAN. Infrared adapters plug into token ring cards to allow localized transmissions (within 80 feet). While infrared can be used between buildings, it is so susceptible to fog, rain, and smog interference (since it is a form of light) that it is usually used for remote controls, wireless keyboards, and wireless computer mice.
The last WLAN technology includes the 5.8 GHz spread spectrum technologies that make use of unlicensed spectra to operate on a single premises. Spread spectrum technologies allow low-power operation and reduce interference. Ranges for 100 Mbps wireless LANs (such as the BreezeNet Pro.11 product line) can run from 1000 m (3,280 feet) in open areas to 60-200 m (200-650 feet) inside buildings.
WLANs use all-wireless Ethernet and specialized hybrid technologies. An all-wireless Ethernet LAN replaces cabling among computers in a local network with wireless paths, so individual computers require antennas to transmit to the network host. With all-wireless Ethernet technologies, laptop computers connect to the WLAN through antennas in their PCMIA slots and remain portable in the office. Hybrid WLAN technologies use wireline cabling to connect each machine in a particular area to a hub (or other intermediate DCE), but use wireless paths from hub to central server.
The second WLAN technology, WCS, may be particularly useful for agribusinesses seeking to interconnect LANs inside a ten to twenty-five mile radius of a central site (creating WWANs, Wireless WANs), or to obtain Internet access. The 2.4 GHz WCS band is also unlicensed spectrum that can be used on the local or access level. Since spectrum is unlicensed, the FCC requires that spread spectrum technologies be used that make radio signals appear as background noise to unintended receivers. Numerous service providers in many parts of Florida are currently offering wireless Internet access in the 2.4 GHz band. However, interference from garage door openers, baby monitors, and other wireless equipment can occur in the unlicensed frequencies used by WCS. Specific WLAN applications are covered in 4.8.5, while Internet access is the subject of 4.9.1.
DEMS is a two-way, all-digital system that uses DTSs (Digital Termination Systems) on each end as DCE. An important characteristic of DEMS is proper antenna placement [Weinhaus, Lagerwerff, and Brown, 1999]. User stations require directional antennas over path lengths of 2 to 10 miles [REA, 1992]. In 1998, Teligent began to provide the first DEMS service in Florida to metro Jacksonville, Tampa, Orlando, Palm Beach County, and Miami-Dade County.
Teligent's DEMS technologies use a two-step wireless layout. In the first step, at the access level, "When a customer makes a telephone call or accesses the Internet, the voice, data or video signals travel over the building's internal wiring to the rooftop antenna. These signals are then digitized and transmitted to a 'base station' antenna on another building, usually less than three miles away" [Teligent, 2000]. The DEMS base station functions as a POP for that area, gathering "signals from a cluster of surrounding customer buildings, aggregates the signals and then routes them to a broadband switching center" [Teligent, 2000].
LMDS and MMDS are similar in that both are multi-point microwave distribution technologies used to provide high-speed hypercommunications access. Both have been used to provide wireless cable TV, but new technologies allow two-way transmission. Some of the differences between LMDS and MMDS are frequency, bandwidth, and cell size:
MMDS is authorized 190 MHz of spectrum near 2.5 GHz and LMDS is authorized over 1 GHz of spectrum near 28 GHz. MMDS architectures are designed for fairly large coverage zones, up to 50 km across. Typical MMDS reflector antennas are up to 0.6 m (2 ft) in diameter. LMDS, on the other hand, uses small cells and small antennas, with roughly 3- to 8-km coverage radius and 10 cm (4-in) flat panel subscriber antennas. [Vanderau, Matheson, and Haakinson, 1998, p. 19]
Designed as "wireless cable TV," MMDS delivers downstream signals over up to a 35-mile radius. Hence, MMDS technology is expected to have a broader market than LMDS and more coverage of rural areas. However, until now, upstream (symmetric) MMDS has been limited to a six-mile range. LMDS, with a 5-mile coverage zone, seems better suited to multi-tenant (office building) businesses in urbanized areas, though a network of LMDS transmitters can cover any area. MMDS is often seen as a SOHO (Small Office Home Office) technology if it can become more adept at achieving symmetric data rates over distance and at sharing scarce spectrum [Roman, 2000]. Typical MMDS or LMDS CPE includes a roof-mounted transceiver and antenna, an up/down converter to change the signal to frequencies usable by DTE, an NIU (Network Interface Unit), possibly a telephone interface, and an Ethernet hub or router. The Sprint ION service is an example of an LMDS deployment that features downstream data rates of up to 27 Mbps, with a 3.5 Mbps upstream rate.
WLL is another upperband technology, christened "fiber in the sky." Wireless POPs are centrally placed in urban areas, with access accomplished via paths from MTE (Multi-Tenant Environment, office building) rooftop transmitters to wireless hubs. Estimates are that only three percent of office buildings have fiber, but that they represent one-third of all business communication lines [Mandl, 1999]. WLL is targeted at this three percent (the power users), while LMDS, MMDS, and to a lesser extent 2.4 GHz technologies are aimed at the 97 percent without fiber access: the smaller business customer.
CDMA technology is used to economize on spectrum so that individual clients can maximize data rates. The 39 GHz band in which WLL operates can carry data rates of up to 155 Mbps over several miles. Winstar is a WLL provider in Florida, with the right to provide coverage from Jacksonville to Miami along the Atlantic Coast and from Citrus County south to Everglades City on the Gulf Coast.
The new upperband technologies are no panacea for rural areas. According to the NTIA, most applications are for dense urban MTE locations.
New and proposed terrestrial systems are being developed to exploit shorter range, cellular deployments able to serve a much denser subscriber base, using multipurpose digital bit streams. These emerging systems will need to be much smarter and more complex than traditional systems, and they will demand extensive infrastructure development and integration into existing telecommunication infrastructures. [Vanderau, Matheson, and Haakinson, 1998, p. 22]
Additionally, upperband technologies may not hold great promise for a high rainfall state such as Florida since the Southeast is the "worst area" for microwave signal propagation [REA, 1992, p. 1-9].
In spite of predictions by Gilder (1993) and others that wireless bandwidth would be boundless by 2000, InfoWorld offered a different prediction for 2001: "Wireless users may hit a speed bump" because of limitations on spectrum and the failure to deploy equipment in many areas [InfoWorld, October 29, 1999]. While terrestrial wireless technologies are developing slowly, inexpensive, symmetric satellite technologies are arriving even more slowly.
4.4.3 Satellite Technology
Until fiber optics matured, satellite wireless appeared to have the most promise as a high-speed voice transport technology. Early 1960s satellites could carry 480 channels, compared with 256 then carried by telephone cable. In December 1988, TAT-8, the first fiber-optic transoceanic cable, became operational and could carry 37,800 voice channels. By 1995, the TAT-12 and TAT-13 fiber-optic cables (linking Europe and the US) could carry over one million telephone conversations at once [Stone, 1997]. At the end of 1997, over 180 communications satellites were in orbit, with industry sources expecting as many as 1,700 additional satellites by 2005 [Rysavy, 1997]. Because of the high cost of orbital launches and the relatively high price of satellite DTE and DCE, it remains to be seen whether future satellite technologies will carry many times this amount of converged voice and data traffic. However, satellite technology now holds promise at the access, point-to-point, and transport levels for businesses regardless of location.
Orbital satellites use an uplink (earth-to-space) and a downlink (space-to-earth) to allow two-way communication. Special bands of the electromagnetic spectrum have been allocated for satellite use. Traditional satellite technologies use the C band and Ku bands. In the C band, 3.4-4.8 GHz is used for downlink and 5.85-7.1 GHz for uplink, while the Ku band uses uplink frequencies of 14-14.5 GHz with 10.7-12.2 GHz for the downlink. A new broadband Ka band of 19.7-20.2 GHz (downlink) and 28.35-28.6 GHz (uplink) is just beginning to see service.
Many satellite technologies are used to deliver wholesale services only to hypercommunication carriers. As with terrestrial wireless technologies, satellite services are classified as fixed or mobile: fixed services serve stationary users on earth, while mobile services serve subscribers in motion. New technologies are under development by several large consortia to bring broadband (high-speed) services to individual businesses and wholesale customers. However, the costs of deployment and CPE are high enough that progress has been slow.
Table 4-12 shows several satellite technologies capable of providing both mobile and fixed hypercommunication services worldwide, now and in the future. Geostationary orbit (GSO) satellites and geosynchronous (GEO) satellites each orbit at altitudes of 22,300 miles (36,000 km) above the earth, revolving around the earth once every 24 hours so that they appear stationary. GSOs are GEOs that orbit directly above the equator. Geosynchronous satellites transmit both voice and data communications, but with long delays. The delay arises because the uplink (earth to satellite) and downlink (satellite to earth) legs of the communication each cover roughly 22,300 miles at the speed of light. Each relay results in a delay of a quarter of a second or more under the best circumstances [IEC, 1998, p. 19]. From three to five GSO satellites are deployed to cover almost the entire globe. Larger constellations may be used to obtain faster data rates and greater overall capacity.
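The quarter-second figure follows from simple propagation arithmetic. The sketch below assumes a user directly beneath the satellite and ignores switching and ground-equipment delays, so real-world delays are somewhat longer.

```python
C_KM_S = 299_792          # speed of light, km/s
GSO_ALTITUDE_KM = 36_000  # approximate geostationary altitude

one_leg = GSO_ALTITUDE_KM / C_KM_S  # earth-to-satellite (or satellite-to-earth)
one_way = 2 * one_leg               # uplink + downlink for one complete relay
round_trip = 2 * one_way            # out and back, e.g. hearing a voice reply

print(f"one leg:    {one_leg * 1000:.0f} ms")     # ~120 ms
print(f"one way:    {one_way * 1000:.0f} ms")     # ~240 ms, the "quarter second"
print(f"round trip: {round_trip * 1000:.0f} ms")  # ~480 ms
```

The round-trip figure is what a telephone user actually perceives, which is why GSO voice conversations feel sluggish even though each individual leg travels at the speed of light.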
GSO technologies include GSO FSS (fixed satellite service) and GSO MSS (Mobile Satellite Service). GSO technologies have been implemented by several systems including Comsat (formerly part of AT&T), Intelsat, and Orion. However, ground equipment (CPE) at the businesses' location is extremely expensive and complicated at this stage. Hence, GSO technologies are normally used by large agribusinesses and other global organizations. Data or voice transmissions may be charged by the minute or by the kilobit.
Sources: Weinhaus, Lagerwerff, and Brown, 1999; Rysavy (1997); Hudgins-Bonafield (2000).
Advantages to GSO include worldwide coverage even of remote locations far from wireline connections. GSO technologies can also reach ships at sea (Inmarsat), airplanes, and every nation on earth with the satellite serving as access and transport level all in one. While the high latency of GSOs is annoying, echo cancellation and other innovations will improve the quality of GSO voice conversations. GSOs last an average of ten years before they must be replaced, double the length of the next type of satellite technology, NGSO [Hudgins-Bonafield, 2000].
There are several NGSO (Non-Geostationary Orbit) satellite technologies. By definition, continuous coverage of any single location on earth requires a constellation that also covers similarly situated points around the world. NGSO networks function much like terrestrial cellular networks in that each satellite has its own coverage zone within a network footprint. However, since the satellites are moving, the coverage zones move as well, whether the user is mobile (NGSO MSS) or fixed (NGSO FSS).
DBS (Direct Broadcast Satellites) operate in the 12.2 to 12.7 GHz band (downlink, BSS) and the 17.3 to 17.8 GHz band (uplink, Feeder, FSS). Total bandwidth for uplink and downlink is 0.5 GHz [Weinhaus, 1998, p. 28]. DBS satellites are GEOs, so they exhibit high latency. Current systems such as Hughes' DirecPC offer Internet access with download rates of up to 2 Mbps, but require that the upstream end go through a normal telephone modem. Future iterations of DBS technologies are expected to allow a minimal wireless return path, though the emphasis of DBS is on the downstream rate.
MEOs are Medium Earth Orbit satellite systems. Their orbits range from 6,250 miles to over 12,000 miles above the earth, between the orbits of GSOs and LEOs. However, more satellites are needed in a MEO constellation than in a GSO or GEO constellation: between ten and twelve satellites or more are required for global coverage. MEO technologies use channels of up to 500-800 MHz in the C and Ka bands. MEO technologies support services ranging from simple messaging (9.6 kbps) to future rollouts of two-way Internet, voice, and data networking, with data rates of up to 6 Mbps expected. Because a single MEO satellite will be overhead at any fixed site for only two to four hours before another orbiting satellite arrives, MEO technologies require coordination among satellites for uninterrupted coverage.
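These altitudes can be cross-checked against orbital periods using Kepler's third law, T = 2π√(a³/μ). The constants below are standard physical values rather than figures from the cited sources; the calculation shows why a MEO satellite passes out of view in hours while a GSO, with a 24-hour period, appears fixed in the sky.

```python
import math

MU = 398_600.4     # earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0  # mean earth radius, km
KM_PER_MILE = 1.609

def orbital_period_hours(altitude_km):
    """Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km  # semi-major axis from earth's center
    return 2 * math.pi * math.sqrt(a**3 / MU) / 3600

# Altitudes quoted in the text, converted from miles to km
for name, miles in [("MEO (low)", 6_250), ("MEO (high)", 12_000), ("GSO", 22_300)]:
    print(f"{name}: {orbital_period_hours(miles * KM_PER_MILE):.1f} h")
```

The GSO row comes out at roughly 24 hours, matching the apparent fixity described earlier, while the MEO altitudes yield periods of roughly 6 to 11 hours, consistent with a satellite being usable from a fixed site for only a few hours per pass.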
Another NGSO satellite technology is the LEO (Low Earth Orbit) satellite. These orbit from 150 to 1,000 miles above the earth, cutting down on the delay experienced with geosynchronous satellites. However, LEOs are costly to launch and operate because they require a constellation of satellites to cover the earth's surface. For example, the now bankrupt Iridium worldwide network was to have been composed of 66 satellites, while the Globalstar system will have 48.
LEO technology can provide point-to-point or person-to-person services across the entire globe except when atmospheric or solar conditions interfere. There is also significant risk of widespread system damage from asteroids and meteorites, since at orbital altitudes there is no atmosphere to burn them up before impact. LEOs travel faster than MEOs or GSOs, adding to their risk of destruction. LEO construction costs of $6 billion and up per system have led to their development by international consortia rather than any single firm. For example, the (now bankrupt) Iridium project had Qualcomm, Motorola, and Globalstar as partners.
There are two kinds of LEOs: big LEOs and little LEOs. Little LEOs are used primarily for paging, remote monitoring, vehicle tracking, and other GPS (Global Positioning System) applications. Little LEOs use frequencies from 137 to 400 MHz, offering bandwidths from 0.025 MHz to 0.85 MHz (downlink) and 0.15 MHz to 1.9 MHz (uplink). Leo One and Orbcomm are two little LEO systems in common use. Little LEOs offer low-cost, low-speed transmission of relatively small amounts of data worldwide with higher signal reliability than other space-based systems.
Big LEOs are intended to be the satellite equivalent of terrestrial cellular telephone service. Eventually, big LEO technologies will provide fixed and mobile voice, data, paging, and fax on a worldwide basis, even in rural areas and developing countries at limited data rates. Big LEO technologies use MSS frequencies between 1 GHz and 3 GHz, generally around 1610-1625 MHz or 2483.5-2500 MHz, with a bandwidth of 16.5 MHz for each service link [Weisman, 1998, p. 39].
Today, big LEO technologies support satellite telephony and broadband PCS. In the future, big LEOs will support two-way data and Internet access. Today, Internet data rates from big LEOs are typically around 400 kbps, with the return (upstream) signal traveling via a wireline telephone modem. Future systems will attain data rates of 2 Mbps to 6 Mbps. Big LEOs move across the sky quickly, passing over a single point on earth as many as 14 times per day. Hence, the technology must be flexible enough to allow one satellite to take over from another when the line-of-sight to the first satellite becomes obstructed.
Another type of big LEO satellite technology is the broadband LEO. Unlike typical big LEO (and other NGSO) technologies, broadband LEO cells on earth remain fixed while the network of satellites rotates around the earth. The Teledesic satellite network (an NGSO FSS) will be a broadband LEO system capable of offering data rates of up to 64 Mbps (downlink) and 2 Mbps and above (uplink), operating in the Ka band (20 GHz to 30 GHz). Service is anticipated to start in 2004. Broadband LEO systems are seen as an "Internet in the Sky," linking computers worldwide with high-speed connections. The bandwidth of service allocations is 0.5 GHz, with feeder allocations of 0.8 GHz. Since they orbit between 150 and 400 miles above the earth, broadband LEOs may be over a particular location for only tens of seconds before a new satellite must take over. While broadband LEO technology is engineered to accomplish this, the degree of jitter depends on the total number of satellites in a carrier's constellation [Hudgins-Bonafield, 2000].
Teledesic is an example of an NGSO FSS system, while thirteen GSO FSS systems have been authorized by the FCC, including Loral, PanAm Sat, and KaStar [Weinhaus, Lagerwerff, and Brown, 1999, p. 32]. Broadband GSO and GEO satellites have higher delays than LEO or MEO, but lower jitter because there is less variability due to handoffs from orbital changes. Both types of broadband FSS operate with channel bandwidths of 500-800 MHz. Future broadband LEOs are expected to feature data rates of up to 64 Mbps downstream with 2 Mbps upstream. Broadband GSOs use the same frequencies and bandwidths, but are expected to achieve upload speeds of 500 kbps with 3-6 Mbps download speeds [Hudgins-Bonafield, 2000].
Mobile Satellite Service (MSS) technologies operate near 2 GHz, offering a total bandwidth of 35-72 MHz for uplink and downlink. MSS can be provided by GSO or NGSO technologies. MSS technologies were originated by INMARSAT to enable communication with ships on the high seas [Telecommunications Engineering Centre, 2000].
Advantages of satellite technologies include their lower susceptibility to multi-path fading caused by reflection from objects surrounding the transceiver on the ground [NRC, 1997]. By beaming signals vertically, upperband signals show considerably less attenuation than when transmitted terrestrially. Satellites also offer worldwide coverage to rural areas as well as urban ones. It remains to be seen which satellite technologies offer agribusinesses the greatest promise for future hypercommunication needs. For agribusinesses that need to network directly with rural or third-world operations or with foreign customers, satellite technologies may be the only possibility for years to come.
An important physical limitation of satellite technologies is high path loss. The higher the operating frequency and the greater the path distance, the more transmit power is needed to compensate. Satellite signals are attenuated by earth's atmosphere and by cosmic and solar noise. The NRC reports that while satellites offer advantages over horizontal fixed wireless transmission, adverse effects still occur "at frequencies above 10 GHz, where oxygen and water vapor, rain, clouds, fog and scintillation cause random variation in signal amplitude, phase, polarization, and angle of arrival. . ." [NRC, 1997, p. 2-6]. High-gain directional antennas help reduce upperband problems, but such high-frequency digital transmissions are troublesome given Florida's climate. High deployment costs are another disadvantage of satellite technologies, with device and carrier equipment costs comparatively large.
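The interaction of frequency and distance can be quantified with the standard free-space path loss formula, FSPL(dB) = 20 log₁₀(d_km) + 20 log₁₀(f_GHz) + 92.45. The link distances and frequencies below are illustrative choices for comparison, not figures taken from the cited sources.

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB for d in km and f in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# A short 2.4 GHz terrestrial link vs. a Ka-band geostationary downlink
print(f"2.4 GHz over 5 km:     {fspl_db(5, 2.4):.0f} dB")       # ~114 dB
print(f"20 GHz over 36,000 km: {fspl_db(36_000, 20.0):.0f} dB")  # ~210 dB
```

Every 10 dB is a factor of ten in power, so the roughly 96 dB gap between the two links means the satellite path loses on the order of a billion times more signal power, which is why high-gain antennas and large transmit power budgets are unavoidable.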
4.4.4 Wireless QOS
As shown in Table 4-13, QOS in the wireless environment has more variability than in the wireline case. While new wireless technologies are constantly appearing to counteract QOS problems, wireless paths are subject to interference from man and nature. Satellite and mobile services have the most difficulty establishing guarantees comparable to those of wireline carriers.
Sources: NRC, 1997; FitzGerald and Dennis, 1999.
There are several physical reasons wireless has QOS disadvantages: frequency, channel width, various kinds of interference, latency, jitter, and tighter technical specifications. As has been mentioned repeatedly, higher frequencies tend to have wider channel capabilities and hence, more bandwidth. This comes at the cost of shorter radio waves that can travel shorter distances, are more prone to interference, and may have negative health consequences. Interference, fading, path loss, and attenuation vary by technology, frequency, and location.
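The tradeoff described above follows from the inverse relation λ = c/f: as frequency rises, wavelength shrinks, and shorter waves are more easily blocked and absorbed. A quick sketch for bands mentioned earlier in the chapter:

```python
C = 299_792_458  # speed of light, m/s

# Bands discussed earlier: unlicensed WCS, LMDS, and WLL
for f_ghz in (2.4, 28.0, 39.0):
    wavelength_cm = C / (f_ghz * 1e9) * 100  # wavelength = c / frequency
    print(f"{f_ghz:>5} GHz -> {wavelength_cm:.1f} cm")
# 2.4 GHz -> ~12.5 cm; 28 GHz -> ~1.1 cm; 39 GHz -> ~0.8 cm
```

A 12.5 cm wave diffracts around many everyday obstacles, while a sub-centimeter wave at 39 GHz is blocked by foliage and attenuated by rain droplets of comparable size, consistent with the rain-fade concerns raised for Florida.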
Even with fixed wireless technologies, there is more signal variability than with wireline transmission, propelling error rates higher. Latency and jitter interact with effective bandwidth, data rate, and error rate. Latency or delay depends on distance. Thus, GSO and GEO technologies are most affected followed by MEO and then LEO. All satellite technologies are affected more by delay than earth-based wireless technologies are. Finally, tight technical specifications such as antenna angles, humidity ranges, and antenna type can make wireless services hard to deploy without specially trained carrier and agribusiness personnel.
With few exceptions, wireless QOS is less controllable than wireline. Hence, it may be hard for agribusinesses to gain SLAs (guarantees of service) from wireless providers on par with wireline carriers. Next, the technologies and people needed to operate and interconnect hypercommunication networks are covered.
4.5 Support Services, Facilitation, and Consolidation Technologies
Support services, facilitation, and consolidation technologies cover a wide territory and there is only space to touch on a bare minimum. Support services (4.5.1) include the human expertise necessary to install, maintain, interconnect, and otherwise assist users and suppliers of hypercommunication technologies and services. Protocols and standards (4.5.2) are the building blocks necessary to improve hypercommunication markets and enable convergence, deregulation, and interconnection. Protocols, standards, and the various standards organizations are hypercommunication facilitation technologies because they create better markets.
Consolidation technologies are known as convergence-enabling technologies since they are examples of areas where convergence will occur. The first consolidation technologies to be covered are wireline-wireless facilitating technologies (4.5.3). Voice-data consolidation technologies (a second type of consolidation technology) are covered in 4.5.4. As both kinds of consolidation technologies become more widely used by agribusinesses, true converged hypercommunication will occur.
Separating support services, facilitation technologies, and consolidation technologies from the services and technologies of specific sub-markets is somewhat artificial. Later, these three subjects are placed in the context of the specific sub-market for specific services and technologies (4.6-4.9). There is an historical component to the separation in that support services are necessary now, facilitation technologies are under constant development, and consolidation technologies are a promising area for the future. However, by separating support, facilitation, and consolidation, agribusinesses will better understand where they will have to invest money, what differences exist among carriers, and where convergence will come from.
4.5.1 Support Services
The dollar cost to business of support services, the human side of hypercommunications, will reach twice the level of equipment costs by 2003. Business expenditures on services to support voice and data equipment are projected to grow at an annualized compound rate of 19.5 percent through 2003 to reach $237.1 billion, more than double the projected $112.6 billion equipment market for 2003 [TIA-MMTA, 2000, p. 94]. Costs for support and integration services of voice-data communications equipment stood at $116.4 billion in 1999. There are two fast-growing categories of support services.
The first support service category is professional and technical services. This area includes network integration services, consulting, and time and materials for engineers, computer scientists and programmers, and other professionals. In 1999, professional and technical services represented $73.8 billion of the total support service bill for American enterprise, and the tab is only expected to grow. The second support service category is field maintenance and repair services. In 1999, expenditures here stood at $31.1 billion, with the expectation that the level will continue to rise.
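The projections above are consistent with simple compound-growth arithmetic: growing the 1999 base at the cited 19.5 percent annual rate for four years approximately reproduces the $237.1 billion figure, with the small difference attributable to rounding in the source.

```python
base_1999 = 116.4  # $ billions, 1999 support/integration spending (cited above)
rate = 0.195       # projected compound annual growth rate (cited above)
years = 4          # 1999 -> 2003

projected_2003 = base_1999 * (1 + rate) ** years
print(f"${projected_2003:.1f} billion")  # ~$237.4 billion, close to the cited $237.1
```

The same arithmetic confirms the "more than double" comparison: $237 billion is roughly 2.1 times the $112.6 billion equipment projection for 2003.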
Support services tend to be specific to the services and technologies they support. Thus, some additional coverage occurs in the context of the sub-markets discussed in sections 4.6 through 4.9. Agribusinesses should realize that their CPE and the equipment of their carrier are only as good as the people who service it at the carrier end and the people who use and maintain it at the agribusiness. Furthermore, having employees or consultants whose judgement can be trusted is vital to choosing the right equipment to begin with.
4.5.2 Protocols and Standards for Hypercommunications Networking
Protocols (such as TCP/IP, SS7 signaling, and V.90) may be created through scientific agreement or by government, self-regulatory, or independent bodies. Hypercommunications protocols and standards help consumers and suppliers avoid the deadweight loss of endlessly searching technical specifications or creating new ones, by establishing mutually agreed-upon definitions along with interoperability and interconnection of technologies and services. This sub-section has three main purposes: first, to classify and explain the sources, bodies, and technical types of standards and protocols; second, to place hypercommunication standards and protocols in an economic context of interest to agribusinesses; and third, to outline a few strategic areas where protocols and standards have not yet been agreed upon, emphasizing their importance in business settings.
The term protocol is often used interchangeably with standard. According to the FCC, protocol and standard have two separate meanings each. First, protocols are "a formal set of conventions governing the format and control among communicating functioning units". Second, protocols are also "a formal set of procedures that are adopted to facilitate functional interoperation within" a "layered hierarchy" [GSA, FED-STD-1037C, 1996, p. P-25]. The IP (Internet Protocol) is one of the better known examples, underscoring the tendency for protocols to be used in communications networking software and hardware.
Standards are "guideline documentation that reflects agreement on products, practices, or operations." [GSA, FED-STD-1037C, 1996, p. S-26] A second definition of standard as "a fixed quantity or quality" is frequently used [GSA, FED-STD-1037C, 1996, p. S-26]. A standard is more likely than a protocol to describe hardware (e.g. wiring standards) or to characterize physical quantities such as specification ranges (e.g. engineering operational standards). While the two terms are used almost interchangeably in this section, precise meanings (when important) should be clear in context.
Economist Paul David established three classes of standards [David, 1987] that some authors consider the "best classification system for telecommunications (standards)" [Stone, 1997, p. 122]. Reference standards deal with weights, measures, and other units. These tend to be agreed on by international standards bodies and are generally objective, scientific-based standards. Minimum attribute standards are the minimum acceptable characteristics associated with the deployment of a hardware or software technology. Compatibility standards permit interconnection of CPE and other hypercommunications components such as hardware and software to work in conjunction with an entire network or system.
To these three, security standards could be added: standards designed to prevent access to certain information for reasons of privacy or the desire to sell intellectual property such as hardware, software, or content. Authentication CGI scripts, e-commerce software (such as shopping carts), and hardware (such as secure servers) are common Internet examples. Other computer examples are virus blockers, cookies, automatic registration and upgrading of browser plug-ins, un-uninstallable programs, etc. A particularly controversial case involves cryptography, where e-mail and other communications traffic are sent in coded form that can only be understood with a key at the other end. Federal law enforcement and national security reasons have been given for preventing the export of certain kinds of cryptography software [Internet Week, June 15, 1998, p. 1].
Protocols and standards have both a state and a status. The state shows what stage of the standards track (maturity level) a particular protocol has attained. For example, a new protocol arises after an RFC (Request for Comments) has been issued; in an RFC, users and scientists are asked to comment on a proposed protocol. Then, after debate and dialog, the proposed protocol becomes a draft standard, and later a required or recommended standard. When IAB (Internet Architecture Board) protocols advance to become draft standards, the technical community is on notice that "unless major objections are raised or flaws are discovered, the protocol is likely to be advanced to a standard in six months" [IAB, 1995, p. 2].
Protocols and standards may come from several sources, both formal and informal. There are formal de jure standards and protocols, created and altered by official trade, governmental, or scientific standard-setting bodies. In some cases, these formal standards and protocols lag behind technology, while in others they lead it. In still other cases, de jure protocols and standards describe technologies not yet introduced (and possibly never marketable). Because adopting or altering standards by international committee is a cumbersome process, de jure standards are notoriously time-consuming to produce.
De facto standards and protocols arise from informal sources such as the marketplace or a single vendor [Tower, 1999]. For example, Microsoft developed MS-DOS, Windows 95, Windows 98, Windows 2000, and NT as de facto, proprietary operating system standards. When the marketplace adopted Windows as a de facto OS standard, it got built-in data communications protocols also developed by Microsoft. Windows data communications protocols are a mix of open, de jure protocols such as IP and TCP and proprietary, de facto protocols such as NetBEUI and Microsoft FrontPage extension web hosting. In some cases, de facto standards and protocols become de jure when formal sources adopt them. The de jure adoption (or non-adoption) of de facto standards may occur due to market power and/or technical reasons. A central tenet of network economics is that networks rely on protocols and standards to operate. This reliance is so great that path dependencies occur (see Chapter 3.6), and the economy can veer off an optimal path [Kelly, 1998].
Several examples illustrate the importance of standards. The first two concern how changes in standards can completely change a market. The FCC changed FM frequencies in 1945, making FM radio receivers built before then obsolete. Another example came when a 1963 law mandated that all television sets be equipped to receive both VHF and UHF channels. In both cases, a change in de jure standards made existing equipment obsolete. The relatively recent struggle to create a de jure 56 kbps modem standard (V.90) from two incompatible de facto, proprietary standards (x2 and K56flex) is a further example. In the 56k case, a standard was agreed on that allowed both proprietary technologies to continue under a single unified standard.
Protocols and standards are often behind the confusion between services and technologies. According to Klessig and Tesnick, the source of the confusion is rooted in data communications:
This must be related to the data aspect since no such confusion exists for voice services. No one chooses their interexchange telephone service provider based on the protocols used to control the carrier's telephone switches. [Klessig and Tesnick, 1995, p. 1]
Protocols have already enabled the telephone, telegraph, television, and radio sub-markets to exist and thrive. They are necessary to fuel continued growth in newer spheres such as Internet access, wireless technologies, and e-commerce services, if they can be agreed to quickly.
There are several kinds of standards organizations. The first is the independent testing organization. Underwriter's Laboratories, Inc. (UL) is an example. These organizations charge vendors a fee to test and certify the safety and sometimes the efficacy of equipment such as conduit and hardware devices. Another kind includes governmental agencies. The FCC imposes de jure standards on communications equipment to prevent interference with other communications and to ensure that connection of a DCE or DTE to a network (either CPE or carrier) does not harm other devices. For example, download speeds of certain 56k modems are restricted to 53 kbps because of FCC regulations designed to prevent excessive power from damaging elements of the telephone network. The third kind of standards organization is the independent national or international de jure standards body. A fourth kind is the scientific society.
In the United States, ANSI (American National Standards Institute) is a membership-supported, non-profit organization founded in 1918 by several engineering societies and US government agencies. ANSI does not develop standards itself, but encourages their development by coordinating communication among stakeholders (qualified engineers, government agencies, and others). ANSI is a member of the ISO (International Organization for Standardization), an international de jure body organized in a manner similar to the United Nations. The ISO developed the seven-layer OSI model discussed in Chapter 3.
Historically, wireless standards evolved from the CCIR (International Radio Consultative Committee), which in 1927 became the first international radio standards organization. The 1932 Conference of Madrid established the ITU (International Telecommunication Union), the international body with jurisdiction over communications, which absorbed the CCIR; the ITU became a UN agency after World War II. The IFRB (International Frequency Registration Board) was part of the ITU until spectrum technologies and use required a new body, the CCITT (International Telephone and Telegraph Consultative Committee), formed in 1956. Intelsat (International Telecommunications Satellite Organization) was formed in 1964 after the U.S. created Comsat (Communications Satellite Corporation).
Many wireline standards and protocols also come from the ITU, which includes representatives from governments, telecommunication companies, and industry organizations. Wireline standards include SS7 (Signaling System 7), established by the ITU, the RJ-11 telephone jack, and numerous standards and protocols developed by AT&T's Bell Labs (now part of Lucent) and by Bellcore, the research consortium of the RBOCs (Regional Bell Operating Companies), now Telcordia.
Data communication standards and protocols form the basis of hypercommunication networks today. The IEEE (Institute of Electrical and Electronics Engineers) is behind many data communication protocols and standards in the physical and data link layers. IEEE standards include RJ-45 Category 5 cable jacks and Category 5 cable. Other IEEE standards and protocols are numbered, such as 802.3 (10 Mbps Ethernet), 802.5 (token ring), and 802.11 (wireless LANs).
The IETF (Internet Engineering Task Force) is the main Internet standards and protocol body. The IETF sets up working groups (whose members are drawn from thousands of Internet firms and organizations) to establish proposed standards for the Internet that are then voted on. The IETF established HTTP (web transport), POP (e-mail Post Office Protocol), and TCP/IP (Internet network layer protocols). The idea for TCP came from Vinton Cerf and Bob Kahn's 1974 paper, "A Protocol for Packet Network Intercommunication" [Zakon, 1997, p. 3]. TCP and IP together became the protocol of choice for ARPANet (the U.S. DOD precursor of the Internet) in 1982.
Table 4-14 provides an overview of some common interconnection, interoperability, and privacy protocols. For each standard or protocol, the standards body, purpose, and major users are listed. The first of these, SS7, plays an important role in services provided by ILECs and ALECs, such as ISDN and advanced telephony. SS7 allows fast call setup and remote database interactions. Without SS7, there would be no "portable" 800 numbers, cellular roaming, caller ID, Enhanced 911, CLASS (Custom Local Area Signaling Services), or AINs (Advanced Intelligent Networks) across CO networks. SS7 is "out of band" signaling, freeing circuits of overhead and extending capacity [Nortel, 1997].
TCP/IP is the main communications protocol of the Internet. The higher-layer TCP (Transmission Control Protocol) divides files (e-mail, HTML, graphics, URL requests, etc.) into efficiently sized packets for routing over the Internet. Each packet carries the IP address of the destination and can travel a different route to reach that point. Upon receipt at the other end, TCP reconstitutes the file in the correct order. Protocols that run over TCP include Telnet, FTP, and HTTP. The lower-layer IP handles the addressing of each packet.
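The division and reassembly described above can be sketched in a few lines of code. This is an illustrative sketch only, not the actual TCP implementation; the names (MTU, split_into_packets, reassemble) and the packet layout are invented for this example.

```python
import random

MTU = 1500  # assumed maximum payload per packet, in bytes

def split_into_packets(data: bytes, dest_ip: str) -> list:
    """Divide a file into sequence-numbered packets addressed to dest_ip."""
    return [
        {"dest": dest_ip, "seq": i, "payload": data[i:i + MTU]}
        for i in range(0, len(data), MTU)
    ]

def reassemble(packets: list) -> bytes:
    """Restore the original byte stream, whatever order packets arrived in."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

# Packets may take different routes and arrive out of order:
message = b"x" * 4000
packets = split_into_packets(message, "192.0.2.1")
random.shuffle(packets)                  # simulate out-of-order arrival
assert reassemble(packets) == message    # TCP-style reassembly recovers the file
```

The sequence number lets the receiving end put packets back in order no matter which routes they traveled, which is the essential idea behind TCP's reliable delivery.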
HTTP (HyperText Transfer Protocol) and POP (Post Office Protocol) are widely used in Internet communications. ATM (Asynchronous Transfer Mode) is an increasingly used high-speed transport (and now access) technology to be covered in 4.8. The V.90 modem standard came from a compromise between the K56flex (Rockwell) and x2 (US Robotics/3Com) proprietary standards. HTML is the standard way static web pages are created. Frame relay (covered in 4.8) is used for WAN and Internet access by medium-sized firms provided over copper.
The S.100 interface was created to speed the convergence of voice and data. DSVD is a proprietary standard created by Rockwell and several other companies to allow modem users to remain online while receiving or making telephone calls. SSL (Secure Sockets Layer) is a method of guarding the security of online transactions. CDPD (Cellular Digital Packet Data) is one of several standards created to allow data transmissions over wireless (in this case cellular) connections. H.320 is an umbrella standard that covers the transmission of video over circuit-switched digital connections, such as ISDN.
Protocols and standards provide an important economic benefit: they help to prevent market failure. This is because some degree of uniformity or standardization tends to help markets perform better by accelerating adoption of new technology. This accelerated adoption occurs as compatible DTE, DCE, hardware, and software are purchased by the market.
However, in some ways, standards may cause market failure. In the computer networking business, standards bodies can be manipulated by firms with market power, as Saunders points out:
The real reason net managers have so many high-speed LAN standards to choose from has little to do with their best interests and a lot to do with the self-interests of the vendors that manufacture and market network gear. Equipment makers are under the gun to deliver products that conform to industry standards--any standards. Standards-based products sell better than proprietary offerings; in fact, some venture capitalists are making standardization a condition of further investment in networking companies. Most companies in the networking business are tiny by today's global business standards, which means they can't afford to ignore demands by potential investors.
Rather than pool their efforts into creating a single set of standards, vendors have pushed hard to get their various technologies stamped with an ANSI or IEEE seal of approval--a procedure that has proved ludicrously easy since standards committees are made up almost exclusively of vendor representatives. This approach may legitimize competing technologies, but it can only spell trouble for network managers, who lack an official ombudsman to voice their opinions and concerns. [Saunders, 1996, p. xvii]
In addition to the tendency for standards to serve as mechanisms for maintaining market share, many standards bodies have been criticized because of the lack of open debate and user access to existing and proposed standards. The Internet has tended to have a more open approach to standards that has less chance of being captured by powerful industry forces than the telephony or data networking areas. For convergence to occur, both the gap between wireline and wireless and the chasm between voice and data must be bridged. The hypercommunications market is attempting to standardize convergence through wireline-wireless facilitating technologies (4.5.3) and voice-data consolidation technologies (4.5.4).
4.5.3 Wireline-Wireless Facilitation Technologies
Technologies used to facilitate hypercommunications between wireless and wireline networks are of two types: PSTN-based and data synchronization. For services that rely on connection to the PSTN, protocols and standards exist to interconnect wireless and wireline networks. These protocols and standards came about because of the desirability of having mobile telephone units able to connect with any other telephone (mobile or wireline) in the world. Since the wireline PSTN existed before cellular, PCS, GSM, and other mobile wireless devices, it was necessary only to connect the base stations that received and sent wireless signals to the wireline PSTN. Cellular and other mobile wireless carriers also established standards for data transmission using wireless modems so data communications could occur between wireless networks and data networks connected to the PSTN.
One open, wireless-wireline facilitation technology is CDPD (Cellular Digital Packet Data). CDPD was created to "address the need of subscribers to be able to rapidly transmit a small amount of data and not tie up a cellular radio channel for a long period of time" [Harte, Prokup, and Levine, 1996, p. 376]. In 1995, 12,000 cell sites had CDPD capability [Harte, Prokup, and Levine, 1996, p. 377]. By 2000, CDPD was available in every major urban area of Florida.
Another PSTN-wireless facilitation technology is the proprietary iDEN™ (Integrated Dispatch Enhanced Network) standard, an exclusively digital TDMA system, formerly known as MIRS (Motorola Integrated Radio System). Typically used in 800 MHz SMR (Specialized Mobile Radio), MIRS was developed for FleetCall, now Nextel, which had radio coverage of almost all of North America by 1995. Recent iDEN™ technologies allow up to 64 kbps of data on a single 25 kHz radio channel and the simultaneous combination of data and voice. This technology allows both voice and data calls via the PSTN, as well as radio voice and data communications to a private network [Harte, Prokup, and Levine, 1996]. In 2000, Nextel's Florida network covered Tallahassee, Jacksonville, Miami, Palm Beach, Tampa, Orlando, and other urban parts of Florida.
The second type of wireline-wireless facilitation technologies involves device and data network synchronization. Here, the protocols allow wireless and wireline computers, laptops, PDAs (Personal Digital Assistants), mobile telephones, and other devices to synchronize data files and data communication among devices. There are several standards being developed by industry groups, most notably the SyncML Initiative. SyncML is being developed by IBM, Lotus, Motorola, Nokia, Palm Inc., and over 800 other firms. The idea behind SyncML is that networked data on mobile wireless devices needs to be as closely updated (synchronized) as possible with corresponding data on both nearby and remote wireline networks [SyncML, 1999]. Thus, for example, a salesperson's laptop computer would be immediately updated with an inventory change even if the computer were away from the home office. Similarly, if an order were placed at a customer's office, the salesperson would not have to return to company offices or dial-in by modem to place it. SyncML is still being established, but it has great promise for agribusinesses.
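The synchronization idea behind SyncML can be illustrated with a minimal sketch. This is not the actual SyncML protocol; the record layout, the `synchronize` function, and the "newest timestamp wins" merge rule are simplifying assumptions made for this example.

```python
def synchronize(device: dict, server: dict) -> dict:
    """Merge two record sets keyed by item ID; the most recently
    updated copy of each record wins (a simplified merge rule)."""
    merged = {}
    for key in device.keys() | server.keys():
        a, b = device.get(key), server.get(key)
        if a is None or (b is not None and b["updated"] > a["updated"]):
            merged[key] = b
        else:
            merged[key] = a
    return merged

# A salesperson's laptop and the home-office server each hold inventory
# records; after synchronization both sides see the newest data.
laptop = {"sku-100": {"qty": 7, "updated": 10}}
office = {"sku-100": {"qty": 5, "updated": 12},   # newer inventory change
          "sku-200": {"qty": 3, "updated": 4}}
merged = synchronize(laptop, office)
assert merged["sku-100"]["qty"] == 5   # office copy is more recent
assert merged["sku-200"]["qty"] == 3   # record the laptop never had
```

Real synchronization protocols must also handle conflicts, deletions, and partial connectivity, but the core task is this reconciliation of record sets across devices.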
Another kind of wireline-wireless facilitation technology is Bluetooth. Like SyncML, Bluetooth technologies operate with both wireless and wireline equipment. However, Bluetooth technology is meant to facilitate the complete interconnection of fixed and nomadic (rather than mobile) wireless equipment with wireline and wireless devices on a short-range, low-power basis only. Bluetooth is a consortium led by Ericsson, IBM, Intel, Nokia, and Toshiba charged with designing internetworking standards for all kinds of wireless (and some wireline) devices. Wireless devices will interconnect with one another (and with a main wireline network) at data rates of up to 725 kbps over ranges of up to thirty feet [InfoWorld, 21(33): August 16, 1999]. The market research firm Dataquest estimates that by 2002, 79 percent of digital handsets (and hundreds of millions of PCs) will be Bluetooth-capable.
4.5.4 Voice-Data Consolidation Technologies
Voice-data consolidation technologies support what are variously known as convergent solutions, unified messaging, and FSN (Full-Service Networks). The idea is either to merge voice and data pipelines or converge voice and data DCE or DTE (or both). Voice-data consolidation is already a reality at the transport level, with a data-centric definition of voice expected to predominate. Half of BellSouth's transport network traffic is already data, with the expectation that by the year 2008 less than ten percent will be traditionally defined voice [Ackerman, 1999].
The technologies and economies of scale that are behind consolidation at the transport and access levels are now becoming achievable through new technologies in local networks for small agribusinesses. Voice and data traffic may be merged throughout an agribusiness' operation, only on the carrier side of the demarcation line, or both. While still a relatively new area of technology, voice-data consolidation technologies include advanced call center technologies, CT (computer telephony), VOIP (Voice over IP), VOF (Voice over frame relay), interactive voice, and voice processing equipment.
The ECTF (Enterprise Computer Telephony Forum) is a standards body specifically dedicated to creating interoperability agreements among the hundreds of telephone equipment, PBX, and computer software vendors [ECTF, 1997]. The ECTF is charged with developing APIs (Application Program Interfaces) to allow desktop computers, network clients, and host computers to control incoming and outgoing telephone business calls. The technologies involved include: voice compression/expansion, text-to-speech, voice recognition, fax, fax-to-text, desktop telephony, screen telephones, and hearing impaired devices [ECTF, 1997, p. 6]. The essence of ECTF voice-data consolidation technologies is that all hardware, software, DTE, and DCE within a business would be fully interconnected and interoperable.
An important voice-data consolidation technology is CTI (Computer Telephony Integration). CTI was the fastest-growing category of business telecommunications equipment, achieving a 67 percent increase in spending from 1998-1999 [TIA-MMTA, 2000]. CTI is a collection of applications that go beyond the familiar automated attendants, ACD (Automated Call Distribution), and IVR (Interactive Voice Response) systems that prompt telephone callers for an account number or use speech recognition to connect calls. CTI includes the recording, storing, forwarding, and broadcasting of voice mail, fax-on-demand, automated outbound dialing for telemarketing offices, and inbound "screen pop" applications that enable businesses to pull up a customer's record on the computer screen as calls come in.
CTI requires new hardware, software, and standards to realize convergence of telephone and computer networks. CTI is not new, having been introduced to large catalog sales, telemarketing, and other firms in the early 1990s. However, new CTI technologies make CTI affordable even for small businesses, while previously proprietary devices such as PBXs and telephone sets are now able to work with computers in novel ways. Two examples merit brief attention.
MVIP (Multi-Vendor Integration Protocol) is an open, de facto standard begun in 1990 that creates a multiplexed digital telephony highway inside a business, controlled in one computer chassis (the communications server). MVIP standardizes the connection of digital telephone traffic between individual circuit boards so that telephone conversations can be manipulated like any other kind of computer data [GO-MVIP, 1999]. MVIP supports telephone switching using digital switch elements inside the circuit boards of normal PCs. MVIP software standards allow a range of compatible products from compliant vendors worldwide to be used. CTI applications supported by MVIP include call management, fax, voice (live, stored, and forwarded), text-to-speech, speech recognition, data and Internet communication, along with digital circuit switching.
The objective of an MVIP bus is to carry telephone traffic. It allows the telephone network connection to be separate from digital voice processing resources, so the telephone connection or PBX may be obtained from one vendor while the voice processing resources are obtained from others, saving businesses money. A single MVIP-90 bus has a capacity of 256 full-duplex telephone channels [EAGLES, 1997, Node 80].
Another de facto voice-data consolidation technology is SCSA™ (Signal Computing System Architecture). Dialogic (a division of Intel) announced the SCSA™ initiative in 1993 along with several dozen other computer telephony vendors. Like MVIP, SCSA™ is a telephony bus. However, SCSA™ can "interface to the public network or the PBX, perform a variety of voice-processing functions, recognize DTMF digits, and then initiate an outgoing call or switch to additional resources (e.g., fax-on-demand service, voice recognition, or text-to-speech)" [Byte Magazine, November, 1996].
According to Dialogic, SCSA™ is a high-level call processing architecture with a holistic, multi-layered hardware and software foundation, so firms can build call processing systems using standard interfaces while selecting from a variety of competing technologies. SCSA™ is an open, hardware-independent, software-independent architecture with the objective of providing standards that allow portability, scalability, and interoperability with different software applications [Dialogic, 1997]. An SCSA™ bus (the Signal Computing bus) has a capacity of 2048 time slots (for a PC), so that CT hardware from many manufacturers can be run on a single communications server [EAGLES, 1997].
Taken together, MVIP and SCSA™ are examples of consolidation technologies that are enabling convergence, removing market power of equipment makers, and lowering acquisition costs and recurring expenses of hypercommunications CPE. These and other consolidation technologies are discussed in 4.7.1 and 4.7.2 when enhanced telecommunications CPE and CTI services are covered.
4.5.5 Preface to Specific Sub-Markets: (Sections 4.6 through 4.9)
In sections 4.3 through 4.5, several broad areas of hypercommunication technologies were considered. Before becoming more acquainted with the sub-markets for specific services and technologies, it is important to reiterate that there can be a difference between a service and a technology. According to Dan Lynch in the foreword to Klessig and Tesink:
They (the authors) point out emphatically the difference between a 'service' offering and a 'technology' offering. Thus, they explain what is different about SMDS and ISDN, frame relay, ATM, and SONET: SMDS is what the customer 'sees'; the others are technologies that the carriers use to deliver the services. [Klessig and Tesink, 1995, p. vi]
This confusion between services and technologies worsens as converging delivery technologies blur the traditional distinction between voice and data. Indeed, in the short time that has passed since 1995, new technologies and new services have been developed that make the distinction more difficult. In some cases, particular services and technologies were once synonymous and are not now, while in others the distinction is purely historical. Furthermore, it is often easier to sell agribusinesses something new and complex when carriers reuse an existing concept, even if the concept takes on a completely new meaning.
The next sections (4.6 through 4.9) cover four specific hypercommunication sub-markets, summarized in the services-market matrix found in Table 4-15. Many services can be provided by multiple markets and multiple transmission technologies (or protocols) covered in 4.3 through 4.5. For example, a local telephone call (a specific service) can be made within three sub-markets: traditional telephony, enhanced telecommunications, and Internet. Physical access to each of these markets may be wireless or wireline.
The first sub-market is traditional telephony services (4.6) such as local and long-distance telephone coupled with related enabling technologies (switches, trunks, and telephone sets). The second sub-market is enhanced telecommunications services (4.7), which include caller ID, CTI services, digital PCS, dedicated circuits, and circuit-switched services. Also given brief coverage are software and hardware technologies (AIN, DMS-100 switching, and SS7 signaling) that enable the enhanced sub-market. The third sub-market, private data networking services (4.8) includes packet and cell-switched services and specific supporting technologies. The fourth sub-market, which includes Internet services (from e-mail to web design), is explored in 4.9.
The row elements of the matrix are specific services an agribusiness might need, while the columns show the sub-markets capable of delivering those services. The sub-markets shown are already collapsing into a single hypercommunication market as convergence occurs. The separation of sub-markets simply demonstrates current market definition and structure. Of course, the expected result of convergence will be a hypercommunication market with an entirely different market structure with profound implications for agribusinesses.
NOTES: (1) iDEN technology only, (2) not yet location specific, (3) depends on service area.
The extension of hypercommunication services to rural Florida and agribusiness depends heavily on the production economics (for carrier and customer alike) of the technologies used to deploy and deliver services to rural and urban agribusinesses. As new services become available, agribusinesses must consider their own needs to see how new choices fit into present or future business strategies.
4.6 The Traditional Telephony Market
This section covers the sub-market for traditional telephony services and technologies. There are two parts of the traditional telephony market. First, traditional telephony includes analog POTS (Plain Old Telephone Service) over a local copper loop or trunk. Second, it includes long-distance calling, directory, operator, and other services generally available once the MFJ (Modified Final Judgment), effective at the end of 1983, broke the AT&T Bell System into LATAs and RBOCs. Traditional services are part of a Bell System-inspired regulatory mindset (at both the federal and state levels) that gave the U.S. monopolized telephone service.
Traditional services originated from what was (before 1984) arguably a textbook example of natural monopoly because of the technologies available then. The high fixed costs of infrastructure investment, engineering, design features of the network, and other factors combined to give AT&T a legally defined natural monopoly, which was regulated by state and federal authorities. From 1984 to 1996, the RBOCs (Regional Bell Operating Companies) such as BellSouth (which resulted from the breakup of AT&T), along with independent carriers such as GTE and Sprint, served as local monopolies in non-overlapping service areas within Florida. Several smaller rural telephone companies (such as Indiantown Telephone, Quincy Telephone, and St. Joseph's Telephone) served rural parts of the state.
Today in Florida, traditional telephone services are provided by LECs (Local Exchange Carriers) and by IXCs (Interexchange Carriers). However, deregulation has changed both the local and long-distance markets from the AT&T days. In the local market, the first kind of LEC, the ILEC (Incumbent Local Exchange Carrier), is the monopoly telco that serves a particular area or exchange. The ILEC is also the owner and builder of the telephone plant (especially the last mile segment). All other LECs are ALECs (Alternative Local Exchange Carriers), established through state and federal deregulation specifically to compete with ILECs. ALECs can resell ILEC services and offer their own services in exchanges (or parts of exchanges) wherever they choose. ALECs normally use the ILEC's facilities at the access level, though facilities-based ALECs have their own switches, POPs, and transport level equipment. Even after the 1996 TCA, ILECs are legally "carriers of last resort" who must furnish basic telephone service to all reasonable locations within their service territories even if ALECs are unwilling or unable to provide service.
IXCs are long-distance carriers who also use part of the ILEC's local loop to carry long-distance calls to a POP for transport. While only one ILEC can serve a particular location, 363 ALECs and 988 IXCs offer service in Florida (especially in urban areas) now that traditional services have been deregulated [FPSC, 2000]. In spite of these numbers, only 4.3 percent of Florida businesses used an ALEC rather than an ILEC for local service as of March 1999 ["You Have to Make the Call on Phone Service", Greg Groeller, Orlando Sentinel, March 15, 1999].
This section has two sub-sections. The first sub-section (4.6.1) covers traditional services: local calling, other local services, and long-distance services. Then, traditional telephony access technologies and CPE (4.6.2) such as telephone sets, key telephone systems, and PBXs are mentioned. Typically, the access technology used (lines, trunks, PBXs) depends on the POTS equipment selected by a business.
4.6.1 Traditional Telephone Services (POTS)
In 1998, Florida ILECs, ALECs, and IXCs realized $15 billion in telephone revenues, up almost 30% from the 1995 level [FCC, "Telephone Trends", 2000, p. 19-7]. It might appear as though revenues from these traditional telephony services (provided by ILECs and IXCs) would consist mainly of charges for local and long-distance telephone calls. In reality, however, the picture is more complicated because local service revenues consist of a mix of PSTN access charges, toll calls, and many miscellaneous revenues. Forty percent of what IXCs collect in long-distance charges goes back to the LEC as access charges.
In fact, as Table 4-16 shows, there is a broad array of traditional business telephony services beyond telephone calls. As the columns show, ILECs may make more money on directory advertising, for example, than they do by providing special local calling services. Indeed, depending on the agribusiness, a larger share of its telephone budget may go towards local calls or directory advertising than is spent on long-distance calling.
The first item in Table 4-16, lineside local telephone service, includes an inbound-outbound voice grade (analog) line or multiple lines, along with local calls. Depending on the local exchange, an agribusiness may be able to choose from using the ILEC or from dozens of ALECs for local telephone service. Typically, unlimited calls to the local exchange together with calls to a group of surrounding exchanges serving nearby areas (the local calling area) are included in a monthly per line rate. However, some new carriers charge for local calls, especially if the line is meant for inbound service only.
Estimates of revenue shares from FCC, Statistics of Common Carriers, 1999, p. 41, pp.166-171; FCC, Telecommunications Industry Revenue: 1998, Table 5 and Table 6, 1999.
To better understand the differences among the local exchange, local calls, enhanced calling zones, and extended local calling zones, consider an example. Figure 4-36 shows a group of exchanges in Hardee, Desoto, Highlands, and southern Polk counties. Within each local exchange, most local providers offer unlimited calling for a monthly fee. The local exchange may include several RAVs (wire centers), and one or more class five COs (see Table 4-8), each with one or more telephone prefixes (NXXs).
The Arcadia exchange (which covers all of Desoto County) has four NXXs (444, 491, 494, and 993) on one CO. Zolfo Springs has a single NXX (735) in a single CO as does Bowling Green (375). Each exchange has a different kind of FPSC-mandated local calling plan ILECs must offer. Local calling plans are determined by analyzing calling patterns to obtain a PAHS (Probable Area of Highest Service) pattern for calls made from a particular exchange. Local, enhanced local, and extended local calling areas are configured based on the PAHS.
In Arcadia, local calls may be made only within the Arcadia exchange. Calls to Port Charlotte, Wauchula, and Zolfo Springs from Arcadia (extended local calling) are $0.25 per minute for residences and $0.06 per minute for businesses. The Zolfo Springs exchange can call Bowling Green and Wauchula as a free local call, and can call Arcadia at extended calling rates similar to those for Arcadia to Zolfo Springs ($0.25 and $0.06). Bowling Green customers may call Wauchula and Zolfo Springs as free local calls, but both business and residential calls to Fort Meade (enhanced local calling) are charged a fixed $0.25 fee regardless of length. Calls from these local exchanges to exchanges outside local, enhanced, or extended areas are classified as long distance. ALECs are free to offer larger extended and enhanced calling zones to their customers, provided that LATA boundaries are not crossed. In some cases, an agribusiness that switches to an ALEC offering a larger calling area can achieve significant savings.
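The rate arithmetic above can be made concrete with a short sketch. The per-minute and flat rates come from the text; the function names and rate-table layout are invented for illustration and do not represent any carrier's actual tariff.

```python
# Extended local calling is metered by the minute, with different rates
# for residential and business lines (figures from the example above).
PER_MINUTE = {"residence": 0.25, "business": 0.06}
FLAT_FEE = 0.25  # enhanced local calling: one fee regardless of length

def extended_call_cost(minutes: int, calling_class: str) -> float:
    """Cost of an extended local call (e.g., Arcadia to Wauchula)."""
    return round(minutes * PER_MINUTE[calling_class], 2)

def enhanced_call_cost(minutes: int) -> float:
    """Cost of an enhanced local call (e.g., Bowling Green to Fort Meade)."""
    return FLAT_FEE

# A 10-minute business call from Arcadia to Wauchula costs $0.60,
# the same residential call costs $2.50, while a 10-minute enhanced
# call from Bowling Green to Fort Meade costs a flat $0.25.
```

The contrast matters for an agribusiness: a long enhanced-zone call costs the same quarter regardless of length, while a long extended-zone residential call mounts quickly at $0.25 per minute.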
Local operator services are provided by the local service carrier (ILEC or ALEC) or by any of the 98 operator service providers in Florida [FPSC, 2000]. Long-distance operator service is provided by the presubscribed long-distance carrier (IXC) or by an operator service provider chosen by the IXC. While no charge is assessed for dialing the local operator to report an emergency, a per-call charge may be assessed for other services such as emergency line interrupts and line testing. Long-distance operator services, such as assistance in obtaining credit for wrong numbers or reporting call quality problems, are usually free. Both local and long-distance operators can provide collect, station-to-station, and person-to-person calls. However, "local" operator service in this sense covers intra-LATA long distance only. Charges for operator-assisted calls can run to several times the setup and per-minute rates of direct-dialed calls.
Repair, installation, and line maintenance are offered by local telephone companies to subscribers who are adding a line, reporting telephone problems, or who want the LEC to be responsible for inside (customer premises) telephone wiring. Installation fees are charged for every line or trunk connected to the PSTN whether or not the LEC actually installs the wiring at the customer's premises. On-premise wiring and repairs are customarily billed at hourly rates, unless the subscriber pays a monthly inside wiring maintenance charge. In no event are customers supposed to be charged for repairs on the PSTN side of the demarcation line.
Telephone signaling (distinct from the signal that carries the call itself) refers to ringing, dial tone, busy signals, and the dialing technology. An example of extra signaling charges on traditional service includes DTMF (touch-tone) dialing, while custom ringing or stutter dial tone with voicemail are examples of signaling charges for enhanced telecommunications service (explained in 4.7). CPE must be compatible with the serving LEC's signaling technologies, a problem that can lead to telephones that do not ring or offer dial tone (especially possible with trunkside or enhanced services).
Directory advertising is an especially lucrative source of revenue for ILECs, though court cases allow other firms to publish telephone directories also. The annual cost of listings in various classified categories is usually billed on a monthly basis in the LEC telephone bill. Yellow Page directories (whether ILEC or competing) include a new genre of online electronic listings and search operations where advertisers and/or telephone customers of a particular carrier may receive preferential treatment.
Directory assistance can be inbound or outbound. Outbound directory assistance is offered on a per call basis to callers within the business by the serving LEC, while inbound directory is offered to callers by their own LEC or competing providers. Special white page listings are offered by the ILEC (which is required to publish a telephone directory) and in competing directories. ILECs are required to list customers of ALECs in the local directory (and provide their listing databases to independent directory publishers), though this is sometimes not done in a timely manner. Furthermore, the ILEC or ALEC is responsible for ensuring that white page listings are available nationally and internationally for inbound callers so that prospective customers can find a business.
Inbound and outbound directory assistance services include white pages, 411 (1411), 555-1212, and Internet directory listings. Improved inbound directory assistance services may be a way that rural Florida can see dramatic gains in trade for call center industries, retailers, and certain wholesalers. Currently, inbound directory services covering rural areas exhibit a gap when compared to urban areas. In particular, rural areas may be poorly classified by an ILEC because billing and service addresses differ in areas served by rural carriers or where P.O. boxes are required. The location given in the directory is the exchange location, rather than the post office location, which can make searching for a business harder. Changes in directory listings in rural areas may take longer to be updated in inbound and online directory assistance databases than those in urban areas.
Agribusinesses that expect customers to find them in white or classified listings should check those listings to see that they are available and correct. One study found that fifteen to forty percent of directory assistance operator calls yielded either an incorrect number or no number at all when one was listed [Seattle Times, May 15, 1999]. To save money, ALECs and independent directory providers may purchase listing databases from less reliable sources instead of buying directory databases from the ILEC.
There are five categories of long-distance toll service: extended calling zone calls, intra-LATA long-distance, inter-LATA intrastate long-distance, interstate long-distance, and international long-distance. Section 271 of the 1996 TCA prohibits RBOCs (but not other ILECs) from offering inter-LATA, interstate, and international long-distance. However, RBOCs may offer extended and enhanced toll calling and intra-LATA long-distance as well as local exchange service.
A business must pre-subscribe to a particular IXC or ALEC, but may use a special code to access another carrier on a call-by-call basis if it has lineside long-distance services. Long-distance trunks (trunkside long-distance services) are groups of lines directly connected to a particular IXC's local POP. Calls over LD trunks can be made only through the trunked carrier. Long-distance dedicated circuits are also available for fixed or measured rather than metered (per minute) rates.
To understand long-distance calling (and to understand how many enhanced telecommunication and data networking services are priced), it is important to understand LATAs (Local Access and Transport Areas). LATAs were created in the MFJ (Modified Final Judgment) that broke up the Bell System, to separate the markets served by IXCs from those where ILECs and LECs could provide long-distance services.
Figure 4-37 shows the LATAs in BellSouth's service area in the southeast United States.
Calls placed from locations inside a LATA to other locations within the same LATA (but outside of all local calling areas) are intra-LATA long-distance calls, handled only by LECs and ILECs. Most LECs charge for non-local calls inside the subscriber's LATA by the minute, though some ALECs offer free or per-call rates for intra-LATA long-distance. Calls from one LATA to another (inter-LATA long-distance) can only be handled by IXCs. Inter-LATA calls may be intrastate (within Florida) or interstate (from Florida to another state). Depending on the rate structure and carriers chosen, intrastate calls may be more expensive than interstate calls. In some cases, intra-LATA calls may be the most expensive of all.
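The classification rules above reduce to a short decision sequence. The sketch below is illustrative only (the function name and the simplified inputs are assumptions, not part of any carrier's systems), but it captures how LATA and state boundaries determine which kind of carrier may handle a call:

```python
# Hypothetical sketch: classify a call by LATA and state boundaries to see
# which kind of carrier may carry it. Inputs are simplified; real rating
# systems work from NPA-NXX tables, not place names.
def classify_call(orig_lata, orig_state, dest_lata, dest_state, is_local=False):
    if is_local:
        return "local (no toll)"
    if orig_lata == dest_lata:
        return "intra-LATA long-distance (handled by LEC/ILEC)"
    if orig_state == dest_state:
        return "inter-LATA intrastate long-distance (handled by an IXC)"
    return "inter-LATA interstate long-distance (handled by an IXC)"

print(classify_call("Tampa", "FL", "Fort Myers", "FL"))
# inter-LATA intrastate long-distance (handled by an IXC)
print(classify_call("Tampa", "FL", "Atlanta", "GA"))
# inter-LATA interstate long-distance (handled by an IXC)
```

Note that the intrastate branch can carry a higher rate than the interstate branch, which is why carrier comparisons must examine all four call types.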
When switching carriers, it is necessary to compare a sample of bills over all types of calls to see if savings will be achieved. Agribusinesses are often tempted by low rate quotes on interstate long-distance to switch to a new carrier. Quite often, their long-distance bill actually rises because of higher intrastate or international rates offered by the new carrier. Furthermore, agribusinesses that direct dial international calls must also consider rates to their most frequently called countries.
A business must choose a different pre-subscribed carrier for inter-LATA long-distance (IXC) than it does for local calling (LEC), and may use yet a third pre-subscribed carrier for intra-LATA long-distance. It is a good idea to shop carefully. Because of the complexity of calling plans (and because rates are constantly changing), businesses can use special adaptive rate dialing equipment to take advantage of the lowest prices to a particular place at a particular time. An important reason that the telephone transport and Internet are converging into a single hypercommunications network is that savings of fifty to eighty percent on long-distance calling can be easily realized. This point is discussed further in 4.7.4 (voice-data consolidation technologies) and in 4.9.7 (Internet convergent applications).
Figure 4-38 [FPSC, Division of Communications, 2000] shows a map of Florida's 67 counties, 11 LATAs, and 13 area codes (NPAs). Many area codes contain more than one LATA (such as in the panhandle) but in other cases (such as the Southeast LATA), one LATA contains more than one area code.
In Florida, area codes cross LATA boundaries in only two cases. First, the 941 NPA is partly in the Tampa market area (LATA) and partly inside the Fort Myers market area (LATA). Second, most of Polk County is in the Tampa LATA, but it is joined in the new 863 NPA by other counties that are inside the Fort Myers LATA. Hence, inter-LATA long-distance and intra-LATA long-distance calls may be in the same or different area codes.
Long-distance tolls are calculated in several ways, depending on whether an agribusiness has a contract to use a minimum number of minutes per month or the IXC charges all calls on an individual basis (open rate). Generally, the calculation depends on three general rate categories: metered (per minute), measured (rate based on lumpy usage categories), or fixed (unlimited calling, charged monthly). Each category can apply to traffic sensitive (TS) and non-traffic sensitive (NTS) service.
The open rate or contract cost of a single long-distance call may include a setup charge, minimum toll, time-increment tolls, and step charges. A setup charge (typically for the first minute) may apply for every completed call, regardless of the call's duration. Setup charges are fixed, TS fees added on top of time-increment (metered), step (measured), and minimum tolls.
Time-increment tolls apply for each time interval that a call is in progress. They are for metered services and may be per minute or fraction thereof, per second, or based on other time intervals. Step charges apply to all calls until a minimum per-call length (metering threshold) has been reached. After that point, calls are charged in steps governed by blocks of time. For example, if the step increment time is twenty minutes, calls twenty minutes or less are charged at a single block rate. However, calls longer than the step increment time are charged at a different (usually higher) time-increment toll for the full period the call lasts beyond the step time, so the call becomes a mix of measured and metered rates.
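A worked example may make the mix of charge types concrete. The sketch below combines a setup charge, a block (step) rate up to a twenty-minute metering threshold, and a metered per-minute toll beyond it. All rates and thresholds are invented for illustration and do not reflect any carrier's tariff:

```python
# Hedged sketch of an open-rate toll: a fixed setup charge, a single block
# (step) rate covering calls up to the metering threshold, and a metered
# per-minute toll for time beyond it. Rates are illustrative only.
def call_cost(minutes, setup=0.35, block_rate=1.00, step_minutes=20,
              overtime_per_min=0.07):
    cost = setup + block_rate          # setup applies to every completed call
    if minutes > step_minutes:         # the call becomes a mix of measured
        cost += (minutes - step_minutes) * overtime_per_min  # and metered rates
    return round(cost, 2)

print(call_cost(15))   # 1.35 : entirely within the 20-minute block
print(call_cost(30))   # 2.05 : block rate plus 10 metered overtime minutes
```

Under this structure a 15-minute call and a 20-minute call cost the same, which is exactly the "lumpy" measured pricing the step charge creates.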
Contracts between an IXC and a business may specify minimum toll rates good on all LD time or fixed minute plans. Lower rates are extended for quantity discounts and for customers who sign agreements not to change carriers for from six months to five years or more. Minimum tolls represent the minimum per minute cost of all calls. Typically, contractually based minimum tolls are contingent on whether the customer places enough calls to qualify for a quantity discount over a certain amount of time. The lowest possible time-based toll is applied to all calls in months when a business uses at least the contractual number of minutes. Higher open rates apply for months when the stated quantity of minutes or hours is not used. Under fixed-minute long-distance plans (especially common in the mobile telephony market), customers agree to pay for a particular amount of long-distance (or local) calling whether they use it or not. In some cases, unused minutes may be "banked" or rolled over for use in future months.
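The fixed-minute plans with "banked" minutes described above can be sketched as follows. The plan fee, overage rate, and rollover rule are assumptions chosen for illustration, not terms from any actual contract:

```python
# Illustrative sketch of a fixed-minute plan with rollover ("banked") minutes.
# The customer pays the plan fee whether or not the minutes are used; unused
# minutes carry forward, and usage beyond plan + bank is billed per minute.
def monthly_bill(plan_minutes, used, banked, plan_fee=50.00, overage_rate=0.10):
    available = plan_minutes + banked
    if used <= available:
        return plan_fee, available - used          # unused minutes roll over
    return plan_fee + (used - available) * overage_rate, 0

bill, bank = monthly_bill(500, 420, 0)
print(bill, bank)   # 50.0 80 : 80 unused minutes banked for next month
bill, bank = monthly_bill(500, 640, 80)
print(bill, bank)   # 56.0 0  : 60 overage minutes billed at $0.10 each
```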
While long-distance is covered here under traditional services, it is important to realize that enhanced telecommunications technologies, CTI integration, and the Internet allow adaptive rate systems as well as the traditional open rate (per minute) and contractual long-distance markets. A business can program specialized telemanagement equipment to take advantage of highly fluid demand conditions to use the lowest cost IXC based on the time of day, day of week, destination, and expected length of a call automatically. Larger businesses can even take advantage of forward markets for long-distance bandwidth between points such as the San Francisco-based Rate Exchange.
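The core logic of such adaptive rate (least-cost routing) equipment is a lookup over per-carrier rate tables. The carriers, destinations, and rates below are entirely fictitious; the sketch only shows the selection step:

```python
# Sketch of adaptive rate dialing: choose the cheapest available carrier for
# a destination at a given hour. Carrier names and rates are invented.
RATES = {
    "CarrierA": {"FL": {"day": 0.10, "night": 0.07},
                 "UK": {"day": 0.25, "night": 0.18}},
    "CarrierB": {"FL": {"day": 0.08, "night": 0.09},
                 "UK": {"day": 0.30, "night": 0.12}},
}

def pick_carrier(dest, hour):
    period = "day" if 8 <= hour < 20 else "night"   # simple two-period tariff
    return min(RATES, key=lambda carrier: RATES[carrier][dest][period])

print(pick_carrier("FL", 14))   # CarrierB: cheaper daytime Florida rate
print(pick_carrier("UK", 14))   # CarrierA: cheaper daytime UK rate
```

Real telemanagement equipment performs this selection per call and prepends the chosen carrier's access code automatically.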
Free long-distance calls may be placed by callers over computers (without needing a telephone) via their Internet connection to any PSTN telephone throughout the U.S., Canada, or even internationally using free services such as www.dialpad.com. Free Internet calling is underwritten by sales of advertising banners that the call originator sees while in conversation. Call quality is beneath that of pay long-distance, with numerous QOS issues such as connection establishment delay, latency, jitter, and poor voice quality to contend with as well.
Eventually, it is possible that much long-distance calling will become included in fixed rate plans regardless of destination and call length. However, at present, QOS problems limit so-called free calling (actually paid for through a fixed access charge or by the time cost of viewing advertisements). Importantly, there are regulatory barriers as well (mentioned in Chapter 5) that may prevent free long-distance from becoming widely offered.
WATS (Wide Area Telephone Service) lines include both inbound and outbound long-distance. Inbound WATS includes toll free calls to interstate and intra-LATA 1-800, 1-877, and 1-888 numbers from a calling area that can be a single LATA, an entire state, or nationwide. Outbound WATS is a contractual service offered by an IXC to specific area codes or nationwide and even internationally to specific countries. Tolls may be based on the location called (metered distance rates), per call (metered per minute), on a step rate (measured), or on a fixed charge basis, no matter the length of the call or the location called. Businesses can also earn money from 900 number calls where the calling party pays for the call and an extra charge to obtain information or other services.
FX (Foreign Exchange) and special prefix services are similar to WATS because callers do not pay for FX calls. FX is unlike WATS because the call is made to a local number associated with a local exchange, local NPA, and local NXX. The calling party calls a local number and is automatically connected to the business paying for FX service in the next county or next state. Inbound-only FX routes incoming calls through such "virtual" local FX numbers onto DID trunks (specific lines inside the business that pays for inbound FX). Many different telephone numbers may be mapped to a single DID line.
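The many-numbers-to-one-line mapping is just a table lookup. The numbers and trunk names below are fictitious, used only to show the idea of several "virtual" local FX numbers terminating on the same DID line:

```python
# Sketch: several virtual local FX numbers mapped onto a few DID lines.
# All telephone numbers and trunk identifiers here are fictitious.
FX_TO_DID = {
    "352-555-0100": "DID-1",   # appears local to one exchange
    "863-555-0150": "DID-1",   # appears local to another; same inside line
    "941-555-0175": "DID-2",
}

def route_inbound(dialed):
    # Unmapped numbers fall through to an attendant in this sketch.
    return FX_TO_DID.get(dialed, "attendant")

print(route_inbound("863-555-0150"))   # DID-1
```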
FX service is often accompanied by specially targeted white and Yellow Page listings so that the FX business appears to be local to the caller. For example, the local exchange name may be used to show a local address rather than use the distant city name and address where the call is received. Charges are made for the FX telephone number, the DID trunk, white and Yellow Page listings in the FX exchange telephone books, and calls typically are subject to long distance tolls as well.
Outbound FX service allows businesses to call numbers in the virtual exchange as if they were local calls. This can result in savings on long-distance calls that would otherwise be charged a higher rate. Other special prefix services use prefixes such as 203 that do not require callers to dial one plus the area code within extended calling areas or LATAs, or even across area codes. In much of Florida, 203 numbers may be called as a local call in every area code. The business would pay for 203 numbers in each area code along with the appropriate open or contract long-distance rate.
The last item in the list of traditional telephony services includes trunkside services (single and multiple trunks). While multiple telephone lines may be charged at a lower per line rate than single lines, many businesses choose to have special groups of lines called trunks attached to a business telephone system that functions as an on-premise telephone network. Trunks and multiple trunks are groups of lines with special rates that connect to specialized CPE such as PBXs thus allowing businesses to be flexible in the number of incoming, outgoing, and long-distance lines. To cover trunkside services adequately, the next sub-section covers telephone systems, PBXs, and other traditional telephony CPE.
4.6.2 Traditional Telephony Access Technologies and CPE
Traditional telephony access for businesses can include multiple lines or trunks (groups of multiple lines). The form of access a business chooses (typically a mix of trunks and multiple analog lines) varies with the telephone system of the business. The form of access also depends on calling patterns and on whether there is a need for specialized lines, extra telephone numbers, or custom call routing.
A business telephone system may be a traditional key system or a Centrex/PBX. Traditional key telephones (now almost extinct) are simple multiple-line telephone systems, used primarily in small offices, consisting of lower-end individual telephone sets that cannot switch or transfer calls within the business or to other locations. Key systems can handle from two to about 100 telephone ports. There is a large difference in capability between traditional key systems and the enhanced (or hybrid) key systems discussed in 4.7.
Traditional key systems can recognize up to 20 telephone lines on a telephone set that can be individually programmed to recognize calls from certain lines only. Most traditional key sets have hold and volume controls, but typically lack the computer brain that the next kind of system, a PBX, has. Hence, traditional key systems cannot offer voice mail (except on the telephone set itself) or other services. They are able in some cases to offer caller ID, hold, intercom, and speed dialing, but some of these require payment to the LEC for each service on a per line basis. Since traditional key systems are not switches, as PBXs are, businesses that use them rely on multiple business telephone lines rather than trunks. A single key set can answer multiple lines and transfer calls by using intercom announcements to alert staff members to pick up a particular line, often using lighted signals on the telephone set. Individual telephones may each be programmed to allow long-distance outbound calling and other features, though some key systems have KSUs (Key System Units) to control available features.
PBX stands for Private Branch Exchange. Historically, PBXs consisted of a local loop telephone trunk (group of telephone lines) ending at the PBX, a switchboard, and the copper lines leading from the PBX to individual telephone sets at a business location. In the late 1960's, electronic switchboards replaced electromagnetic switchboards where calls were manually switched by a human operator. By the 1980's, computers began to be used as the "brains" behind electronic switchboards. Traditional PBXs take advantage of line consolidation so that there can be more lines than telephones, saving businesses money. Furthermore, since many calls are within the business, the PBX is a switch for those calls.
Traditional PBXs use proprietary protocols, so additions to the system require that additional equipment be purchased from a particular manufacturer. Furthermore, many traditional PBXs are no longer supported by the manufacturer, so businesses must buy new systems since there is no authorized way to replace equipment. Traditional PBXs, even though they used a computer for switching, could handle data only at limited rates of up to 19.2 kbps, necessitating the use of separate lines for modems and faxes [Tower, 1999]. With a PBX, a single line telephone set can gain access to a pool of outgoing lines (a shared trunk) through users dialing an access code such as 8 or 9 before obtaining dial tone.
Even traditional PBXs support numerous system services such as DID (Direct Inward Dialing), hunt groups, pick-up groups, call detail services, and least-cost routing for toll calls. DID allows each individual in the business to have his own telephone number without requiring an equal number of telephone lines. The PBX switches the DID number to the appropriate station. Hunt groups allow the PBX to automatically route incoming calls to the first free telephone line in a particular group without a human operator to connect the call. Pick-up groups permit members of a group to answer calls (or have calls forwarded) when a particular telephone is unattended, again without action by a human operator. Call detail services allow logs (even on a per station basis) to be automatically recorded to keep track of long-distance charges, call lengths, and other call management details. More advanced PBXs work with digital and analog trunks that can be dedicated local ISDN-PRIs connected to an ALEC's POP or dedicated long-distance ISDN-PRIs connected to an IXC's POP. These uses are covered in section 4.7.
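Two of these PBX functions, DID switching and hunt groups, can be sketched in a few lines. The extension numbers and station names below are invented for illustration:

```python
# Hedged sketch of two PBX functions: DID (each person has a number, with
# fewer lines than numbers) and hunt groups (route a call to the first free
# station). Extensions and station names are fictitious.
did_map = {"5012": "station-12", "5013": "station-13"}

def route_did(dialed_digits):
    # Unrecognized DID digits fall back to the attendant in this sketch.
    return did_map.get(dialed_digits, "attendant")

def hunt(group, busy):
    """Return the first station in the group that is not busy."""
    for station in group:
        if station not in busy:
            return station
    return None   # all stations busy: the caller queues or hears busy tone

sales = ["station-12", "station-13", "station-14"]
print(route_did("5013"))              # station-13
print(hunt(sales, {"station-12"}))    # station-13, the first free line
```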
PBX stations (individual phones) can be programmed from the central computer to allow call waiting, call forwarding, call transfers, speed dialing, voice mail, do not disturb, and automatic callback of busy or no answer numbers. Changes in extension numbers, ringing, hunt groups, and other features are done through the PBX computer rather than by programming individual telephones or rewiring connections. Outbound long-distance and other toll calls can be blocked through the PBX computer for any telephone set or group of telephones in one step. A central PBX or a distributed network of PBXs can be used to link separate buildings or floors. Later versions of PBXs are covered in 4.7 with enhanced services.
Centrex (also known as ESSX(R) by BellSouth) is a virtual PBX controlled from the telephone company CO. With Centrex service, most traditional PBX services are available for a monthly service charge from the LEC. Centrex service involves leasing a PBX (and often telephone sets) from the ILEC, rather than a PBX that the business owns. Changes to the system must be ordered through the telco (ILEC or ALEC) with separate charges for hold, music on hold, do not disturb, distinctive ringing, and other options that are standard on a CPE PBX.
Some argue that Centrex is gradually becoming a technological legacy of the pre-deregulation era, as even small businesses find that new generation PBXs support special features along with enhanced telecommunications services more affordably. However, Centrex offers higher reliability (designed to be down less than three hours over 40 years) than an in-office PBX, is impossible to outgrow, and does not require the investment cost of a PBX.
4.7 The Enhanced Telecommunications Market
As convergence occurs, the overlap among the sub-markets named in sections 4.6 through 4.9 will become ever more substantial. Nowhere is this more the case than when considering how the enhanced telecommunications market overlaps traditional telephony. The overlap takes two forms. First, many enhanced telecommunications services are simply advancements in business telephone equipment such as new kinds of PBXs. These advancements in CPE are summarized in 4.7.1.
The second area of overlap comes because most enhanced telecommunications services are supported by the ILEC copper (or sometimes fiber) infrastructure. Hence, all enhanced telecommunications services have to do with voice communications, though many of the individual offerings also have private data networking and/or Internet applications. Though cablecos and electric utilities have fiber or coax infrastructures that can support enhanced services as well, they are new entrants in a sub-market dominated by telcos. While terrestrial wireless carriers have established presences in paging and mobile wireless telephony, terrestrial fixed carriers (such as WLL and LMDS) and satellite technologies can also (or soon will) support enhanced telecommunications services outside of the mobile market.
In some areas of Florida, ILECs are the only vendor agribusinesses can buy enhanced services from, while in other areas ALECs resell them. In still other areas, even the ILEC is not able (for engineering or marketing reasons) to offer enhanced services. In some cases, ALECs are able to resell unbundled ILEC services when the ILEC cannot profitably introduce service [Ackerman, 1999]. However, some rural areas of Florida may wait years to get particular enhanced services considered vital by urban business.
The enhanced telecommunication sub-market also overlaps both the private data networking and Internet markets. Some of the access services (both dedicated and circuit-switched) mentioned in this section are targeted towards the enhanced telecommunications market, though they can be used as well for data networking or to obtain Internet access. Similarly, certain WAN data networking and Internet offerings can be used for voice though their attention is focused upon the data and WWW side.
There is a subtle evolutionary chain leading from POTS up to enhanced services and then on to both data networking and the Internet that can only be seen through a sequential presentation. The evolutionary stages depend on the needs and sophistication of the agribusiness users. Agribusinesses must adopt POTS before they become customers for enhanced telecommunications. Additionally, private data networking adoption follows enhanced telecommunications as communications sophistication develops with each step beyond POTS the agribusiness takes. The Internet's place in this chain is harder to see because, through the traditional modem, it is associated with both POTS and data networking, though enhanced services may be used as combined voice and Internet access loops.
This evolutionary chain is economic as well as technical because as costs fall, choices rise, while reliability varies as communications use progresses from POTS upward. After the sequence of sub-markets has been laid out, the path to convergence is more easily seen by glancing backwards down the evolutionary chain. Furthermore while the overlap exists, the enhanced telecommunication sub-market focuses on common carrier communications, while the data networking sub-market focuses on private connections. Since it is a public data network, the Internet shares some points of similarity with both enhanced telecommunications and data networking. Additionally, the Internet is able to combine features of interpersonal and mass communications to offer lower prices, greater QOS complexity, and the enormous business potential of hypercommunications convergence.
The continuing existence of these separate sub-markets depends importantly on what is an increasingly artificial distinction between voice and data. Enhanced telecommunications offerings put voice first and data second, while private data networking puts data first and (if the network manager looks at voice) it may be put third, behind Internet. However, recent developments in enhanced telecommunications offer a way for agribusinesses to migrate from separate voice and data networks to hypercommunications convergence.
It is important to realize again that line consolidation means that a business will have more telephones than telephone lines. It may also choose to have more telephone numbers than telephone lines or telephones, so that employees may have their own private number without requiring their own private line, or for other business reasons. The main idea behind advanced telecommunications services is that line consolidation allows businesses with multiple telephone lines to purchase trunks rather than lines, leading to significant savings on monthly telephone bills. Some trunks can also be used to access private data networks such as WANs or the Internet while carrying telephone calls. Separate trunks may be needed for long distance, WATS, DID, FX, data, and Internet, depending on business needs.
Table 4-17 concentrates on voice only or combination voice-data offerings as covered in this section. Some of the services in Table 4-17 will reappear in slightly different contexts in the private data networking section (4.8) and the Internet section (4.9).
Since BellSouth and other ILECs offer thousands of specific services, this section can only touch generally on a few of the more important enhanced services available to agribusinesses. Importantly, the cost of equipment and services has become more competitive in the new de-regulated environment so that even smaller businesses can now afford enhanced services.
4.7.1 Enhanced Telecommunications CPE
A variety of CPE is available to support enhanced telecommunications services. Table 4-18 shows some of the most important examples. Enhanced telecommunications CPE ranges from PBXs that replace traditional PBXs to DTE and DCE needed to use dedicated or circuit-switched services to fully convergent IP PBXs that merge data and voice together throughout the business.
Because of the sheer volume of new technology, Table 4-18 is incomplete and some entries that have been described elsewhere get no additional treatment here. Since many enhanced services still have a voice focus, the most important piece of equipment is the PBX. As was mentioned in traditional telephony, a PBX is a collection of cards, computers, wiring, and other hardware and software that control switching of telephone calls within a business. Even traditional key telephone systems have become hybrid key-PBXs, able to perform many of the functions that used to require a PBX.
Two new types of PBXs have been developed to support different emphases of enhanced telecommunications services. Before they are covered, it may help to see how the traditional PBX has developed into a LAN-based PBX such as the one shown in Figure 4-39. From the telephone network, trunk lines (switched T-1 or ISDN for local service) and possibly a dedicated LD T-1 or ISDN-PRI trunk lead into the business to a PBX switch, which in turn is attached to the LAN. A fax server and voicemail server can be directly connected to (or part of) the PBX, just as individual telephone sets are directly connected to the PBX only. IVR hardware (Interactive Voice Response), predictive dialing equipment, and an e-mail gateway connect to both the PBX and LAN in order to enable CTI applications.
Note that an ISDN-PRI, T-1, or T-3 connection to the PSTN (telephone network) is assumed in Figure 4-39. All voice and data communications flow through the telephone network, while data transfers are accomplished by modem only. Data rates over modem in such a configuration are typically limited to 19.2 kbps because the PBX performs analog and digital conversion slowly.
LAN-based PBXs such as the one in Figure 4-39 use parallel activity architecture [ECTF, 1997, p. 15]. Also called third-party connection architecture, parallel activity LAN PBXs switch only telephone traffic to terminal devices (DTE). As calls come in to an order or call center in the sales or customer service department, individual computer stations can provide information about callers and open order screens because of equipment that synchronizes telephone calls with computer ordering applications. Calls are answered by an automated attendant (a recording) and an ACD (Automated Call Distributor) routes them to the first available operator. Callers enter the extension or speak the name they want to reach. Often, another part of this system (called more generally IVR, for Interactive Voice Response) asks callers for their account number or gives them certain menu options such as the ability to check balances, order status, etc.
The IVR ACD system may have options asking what department or extension the caller needs as well. If callers know a specific extension, they may dial that extension directly, or automatically reach voice mail or a coworker's telephone in a hunt group if the extension is unanswered. Incoming order or information calls can be transferred automatically to a human operator or screened for further information by the IVR before transfer. Often, the caller follows a (possibly long) sequence of menu choices or must enter account numbers or other information through the IVR. However, when callers leave the IVR system (by requesting a transfer to a "real person"), the information entered through the IVR is not on the computer screen of the "real person" who gets the call. This is because there is no API (Application Programming Interface) to send a record of the IVR session to the answering operator.
More recent LAN-based PBXs have a telephony server-PBX link, with client computers linked to the PBX (via the LAN) and individual telephones linked to the PBX via traditional lines, as Figure 4-40 [ECTF, 1997, p. 15] shows. While the server provides the brains for the telephone system and allows some CTI interaction with individual client computers, individual computers are still not directly linked to individual telephones. The PBX does not carry much (if any) data or Internet traffic, since separate circuits (or sets of circuits) are used for data, Internet, and voice.
The ECTF-compatible system shown in Figure 4-40 assumes businesses maintain separate voice and data circuits, but allows them to keep their existing PBX (if it is compatible with ECTF standards). IVR (and other operations) are better synchronized with telephone callers' actions. Incoming PSTN calls come through the PBX where they are transferred to a particular telephone extension or are sent to the telephony server. APIs (Application Program Interfaces) are used to interconnect various kinds of telephones, computers, and servers.
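The practical difference an API makes can be sketched abstractly. The class and method names below are invented for illustration and do not correspond to TAPI, TSAPI, or any real product; the point is only that the IVR session data travels with the call, so the answering agent's screen can "pop" with what the caller already entered:

```python
# Illustrative sketch of API-mediated CTI: the telephony server keeps the
# IVR-collected data keyed by call, and hands it to the agent's application
# at transfer time. All names here are hypothetical, not a real API.
class TelephonyServer:
    def __init__(self):
        self.sessions = {}   # call_id -> data gathered during the IVR session

    def ivr_collect(self, call_id, account, choice):
        self.sessions[call_id] = {"account": account, "menu_choice": choice}

    def transfer_to_agent(self, call_id):
        # Without an API, only the call transfers and this data is lost;
        # with one, the agent receives the call and the data together.
        return self.sessions.pop(call_id, {})

server = TelephonyServer()
server.ivr_collect("call-42", account="A-1001", choice="order status")
print(server.transfer_to_agent("call-42"))
# {'account': 'A-1001', 'menu_choice': 'order status'}
```

This is the mechanism that spares callers from repeating account numbers after leaving the IVR, a benefit discussed further below.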
Microsoft's TAPI (Telephony Application Programming Interface) and Novell's TSAPI (Telephony Services Application Programming Interface) are two common de facto API standards. TAPI provides first-party connections (individual telephones are hardwired to an associated PC) while TSAPI provides third party connections (any telephone may be associated with any PC through parallel activity tracking) [Bezar, 1995]. Third party connections have more flexibility since each user has access to the PBX through the LAN.
Previously, such systems were available only to large businesses with mainframe computers or elaborate telephone networks, but new LAN-based PBXs are within the reach of smaller businesses and can be tailored to specific needs. Instead of being forced to use complicated proprietary hardware and software that operate only with equipment purchased from a single vendor, the ECTF standard lets businesses choose from many software manufacturers and PBX makers when they install and upgrade systems.
Newer LAN-based PBXs may be used with circuit-switched or dedicated circuits (such as the T-1 or ISDN-PRI connections as shown) so that call center and advanced telecommunication services are available to all suitably equipped stations [Bezar, 1995]. These services (detailed further in 4.7.2) include caller ID, voicemail, predictive outbound dialing, screen pop inbound call reception, and other features.
Another advantage of newer LAN-based PBXs (such as the ECTF standard shown in Figure 4-40) is that the IVR and other equipment are directly on the LAN so that caller information is more easily seen by employees. Hence, employee time is saved because information that customers have already provided does not have to be reentered. Customers may also be less annoyed, not having to give information twice. Another advantage of LAN-based PBXs is that changes to the telephone system, billing information, and other telemanagement functions can be available to managers throughout the agribusiness without having to be at a particular location. LAN-based PBXs are an intermediate step between traditional PBXs and fully converged CT or IP PBXs.
One step towards a converged voice-data network is to get rid of the PBX. Figure 4-41 shows a telephony server directly attached to the telephone network without a PBX [ECTF, 1997, p. 14].
This system is perfect for businesses wanting PBX CTI functionality without having to buy an expensive PBX that may become obsolete in just a few years. The ECTF telephony server standard allows software upgrades in the server to support new services or changes in circuits. In this way, the business has more flexibility than it would with a closed, proprietary PBX since PBXs are compatible only with certain telephones, software, hardware, and particular kinds of signaling for access connections to the telephone network, etc. This solution is ideal for a start-up business, for a business that has experienced such rapid growth that it has outgrown its PBX, or for a business that has never used a PBX.
Note from Figure 4-41 that the telephony server has three essential elements: control, switch, and media. In such a distributed architecture, the application servers function as clients of the telephony server. The telephony server controls calls and the applications that process calls, making the call rather than the application the center of interest. The server is able to switch traffic within the business and may serve as a data switch (able to function as a data edge device as well). The telephony server is able to store, buffer, and switch various message types (media), ranging from telephone calls and voice mail to faxes, e-mail, and video conferencing.
Another kind of PBX (which is not actually a PBX at all, but an all-in-one communications server) is the CT or IP PBX (Figure 4-42). These PBXs, also called integrated communications servers, handle more than the telephone switchboard traffic of the telephony server. Voice, fax, video, Internet, and data travel through the same conduit in the business (there is no difference between LAN and telephone wire) and over the same access connection. The communications server is an edge device, router, switch, and other DCE in one unit composed of several modules, such as the applications module shown in the figure.
The CT or IP PBX can represent a tremendous savings in wiring and equipment costs, and allows the agribusiness to pay for only one connection rather than pay for voice, WAN, and Internet access separately. The agribusiness need not have a separate IVR, e-mail gateway, video server, predictive dialer, and voice mail and fax server. Instead these functions are performed by applications on the communications server or hosted elsewhere by an Application Service Provider and accessed via the Internet.
The communication server approach uses telephony functions embedded into client computers equipped with voice-data consolidation technologies (4.5.4). Since the telephone operates as a unit with the computer, the complexities of dealing with first-party (standalone) connections and third-party (networked parallel activity) connections are in the past. The DTE client is on a LAN governed by a communications server that controls all incoming, outgoing, and internal voice, data, and Internet traffic. Existing LAN host computers and database servers continue as before, but the LAN host has no communications traffic weighing it down.
CT and IP communication servers are converged voice-data networks with full CTI and unified messaging capabilities. Unified messaging offers the ability to control e-mail, voice mail, and faxes through web browsers, PCS and wireless messaging devices, office telephones, and telephone-computer stations [Riggs, 1999]. Not only can CT and IP PBXs be used with circuit-switched (4.7.4) and dedicated circuits (4.7.3), they may be used over packet-switched (4.8.2) or cell-switched (4.8.3) networks, and even in conjunction with Internet (4.9) connections. The difference between CT and IP PBXs is mainly that IP PBXs can place and receive calls through the Internet, Intranets, extranets, and the PSTN. Users may click on an icon to place telephone calls, get voice mail, and screen both PSTN and IP telephone calls.
Of particular importance, home-based cybercommuters or traveling employees may receive calls made to their office number forwarded anywhere in the world via Internet or Intranet connections, completely avoiding long-distance charges. IP communication server market growth will be driven by long-distance savings of up to 90%. However, even with superior telemanagement, and lower administrative costs, the promise of communication servers depends on whether manufacturers can launch reliable and user-friendly applications [Frost and Sullivan, 2000].
The new convergent PBXs reduce accounting and time costs of businesses in many ways. First, employees need not check for voice mail, e-mail, fax traffic, and internal office memos in four separate systems; everything is in one mailbox. Second, access to these four kinds of message traffic does not require any particular device or even an employee's presence in the office. Third, the fixed costs of purchasing separate DTE (computers and telephones), separate DCE (data switches, PBXs, Internet routers), and expensive stand-alone servers (such as IVR, e-mail gateways, voice mail servers, etc.) fall, replaced only by an investment in a communications server. Finally, instead of paying for separate voice, data, and Internet connections each month, agribusinesses have one monthly communications pipeline charge.
To complete the discussion, a return to Table 4-18 shows that the rest of the list would be unnecessary if all businesses had communications servers. However, the communication and telephony server models are not yet widely available or appropriate for most agribusinesses at this stage of convergence. The rest of the entries in the table include CPE that is still being purchased and probably will be for some time.
DSU/CSUs (Data Service Unit and Channel Service Unit), NICs and NTs are edge devices (DCE) used to terminate dedicated or switched circuits at the demarcation point on the customer premises. Typically, CPE are purchased by the agribusiness or leased from the carrier. Often, CSU/DSU functions are integrated into routers so that convergence is simplified. Other DCE include such items as multiplexors, demultiplexors, transcoders, channel banks, routers, gateways, and repeaters. Many devices may only be used with a particular service, carrier, or technology.
Multiplexors allow agribusinesses to change service channeling, often allowing a more efficient use of bandwidth. For example, a T-1 carries 24 voice channels of 64 kbps each. To carry them over a T-1 carrier, a D4 channel bank is needed to split the DS-1 signal into 24 separate channels. In conjunction with other technologies, multiplexors can lead to greater efficiencies. For instance, an agribusiness could double that number to 48 voice channels by using a voice compression technology other than standard PCM in telephone sets and multiplexing these 48 channels onto the T-1. A demultiplexor would be needed to reverse the process. Other devices such as DACS allow customers to automatically reconfigure how ISDN-PRI or T-1 trunk channels are allocated so as to allow variations in traffic to accommodate video conferencing, Internet access, or other traffic needing more than one channel.
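The channel arithmetic described above can be sketched briefly. This is a minimal illustration; the 32 kbps ADPCM rate is an assumption standing in for "a voice compression technology other than standard PCM":

```python
# T-1 payload: 24 channels at 64 kbps each.
T1_PAYLOAD_BPS = 24 * 64_000

def voice_channels(codec_bps):
    """Channels a T-1 payload can carry at a given per-call codec rate."""
    return T1_PAYLOAD_BPS // codec_bps

print(voice_channels(64_000))  # 24 channels with standard 64 kbps PCM
print(voice_channels(32_000))  # 48 channels at an assumed 32 kbps ADPCM rate
```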
Digital telephones and other DTE are the devices that interact with users. Digital telephones allow integration of telephone calls with other applications and better call processing, making it easier to migrate towards a unified voice and data network. Digital telephones are DTE and DCE together. Digital phones perform ADC inside the telephone set, directly converting the message from an analog microphone (when the local party speaks) into a digital signal that is sent over an end-to-end digital path. Other "digital" telephones are analog devices that rely on intermediate DCE to perform conversions needed for them to operate over digital PBXs or digital connections. The ultimate digital telephone is a PC or thin client with voice capability.
Fax servers automate and manage inbound and outbound fax traffic. Faxes may be sent or received through the PSTN, corporate data network (WAN or Intranet), and the Internet. Fax servers store incoming and outgoing faxes when the network is congested until conditions allow the faxes to be sent. Some fax servers can convert faxes into e-mails, or convert e-mails into faxes and send them to any fax machine on the PSTN.
Other fixed wireless CPE includes DCE and DTE needed to establish wireless access paths. In addition to DCE and DTE, antennas are needed for wireless communications. Most PBXs are not yet compatible with fixed wireless access to the PSTN. However, carriers offering fixed wireless technologies such as WLL, DEMS, MMDS, LMDS, and broadband LEO are expected to provide access to the PSTN with the necessary CPE, as well as data networking and Internet access over the same wireless path. The wireless industry is developing the necessary CPE to offer support of enhanced services. More information may be found in 4.8.5 (Wireless WANs) and 4.4. Mobile wireless CPE includes combination DTE/DCE devices for mobile users. Services supported include cellular, PCS, and paging or wireless messaging. More information on mobile wireless CPE is given in 4.7.6.
4.7.2 AIN CO Technologies, Call Center Services
This section briefly covers the enhanced telephony carrier services and technologies necessary to support call centers, CTI, and the various levels of business PBX and applications. AIN (Advanced Intelligent Network) CO technologies make possible a variety of enhanced services ranging from caller ID and call waiting to ISDN, DSL, and dedicated digital circuits. It may help to begin by defining call centers and relating them to agribusiness.
Call centers are big business in America. Estimates are that over 60,000 call centers existed in 1998, employing some 3.5 million people and accounting for as much as forty percent of all telephone calls. Over seventy percent of customer-business interactions and $840 billion in sales went through US call centers in 1998 [Bernett and Gharakhanian, 1999, p. 107]. In the business-to-business (B2B) arena, call centers accounted for $244 billion or 45 percent of B2B sales in 1996 [Sevcik and Forbath, 1999, p. 2]. Telemarketing, inbound catalog ordering, voice interactive websites, and video interactive websites are some examples of call center services.
Call centers previously required a certain scale and type of business to justify their expense. Until very recently, telemarketing firms, catalog sales companies, and large customer service departments at banks, airlines, cablecos, and telcos have been the largest users of call center technologies. Indeed, these kinds of large retail businesses are starting to use increasingly sophisticated call center services at the high end of this market. However, the cost of software and equipment has fallen so that even small businesses can take advantage of the potential time and cost savings as well as the opportunity to serve customers better and faster than before.
To agribusinesses, call centers can mean the ability to avoid hiring extra employees to cover occasional busy spurts in average weeks or busy seasons. Call center services allow scarce personnel to avoid answering calls over and over regarding business hours, location, account balance, status of shipments, or availability of sale merchandise or inventory. Phone system programming can provide premium customer service to large customers and attractive prospects, transfer delinquent accounts to collections, track employee telephone productivity, and display a customer's name and account information via computer the moment he or she calls (screen pop). Call centers can be particularly valuable in small businesses where employees are hard-pressed to keep up with the workload, especially at peak hours. Customers also have access to account information, order status, hours, special offers and other information twenty-four hours a day, 365 days a year.
Table 4-19 shows some of the most important AIN CO technologies along with the call center services and PBX features those technologies support. Not all areas of Florida have access to these technologies because their availability depends on whether the ILEC has equipped the local exchange to support them. However, deployment does not depend solely on the ILEC as cablecos, electric utilities, ALECs, and wireless carriers are often able to provide enhanced services at a far lower cost than the ILEC can.
Businesses that replace multi-line or Centrex systems with their own PBX can save fifty percent per month on traditional services and, with many enhanced services available for free, can see telephone bills up to seventy percent below ILEC rates. Additionally, the cost of a live telephone transaction ($25 to $35) can be halved using enhanced services-based call centers and cut to $3 to $5 on web-based call centers [Sevcik and Forbath, 1999]. However, an agribusiness may need to have eight to twelve telephone lines before those savings can be achieved. Furthermore, some customers are likely to resent systems that make it too hard to reach a "real person", while others will prefer the convenience of not having to wait and the ability to call at any hour.
What follows is a thumbnail sketch of the technologies in Table 4-19. AIN is a term that applies to technologies used in the ILEC local exchange, CO switch, and transport network, including ALEC and cableco switch networks. Network intelligence is distributed under AIN. Therefore, new services can be quickly introduced, service customization is made easier, vendor independence and competition are aided, and open software and hardware interfaces are created [Telcordia, 2000]. AIN means these facilities are equipped with the software and hardware needed to support enhanced telecommunications services, LAN-based or CTI IP PBXs, and call center services.
SS7 is a protocol that enables advanced telephony features (see 4.5.2) through the access and transport levels. ACD and IVR systems were discussed in 4.7.1, as were APIs. Voice synthesis and recognition technologies perform text-to-voice and voice-to-text conversions when voice mail must be converted to e-mail or text e-mail converted to voice mail; they also allow callers to speak responses to IVR or ACD prompts.
PBX system services have global inbound and outbound capabilities, meaning that they are based on the technology of the access trunks and ALEs (Access Line Equivalents). System services make it easier to perform telemanagement (manage costs, numbers, stations, and users). DID and FX DID are virtual telephone lines that are routed to specific extensions or departments within a business. Hunt groups transfer calls from unanswered or busy telephone lines to the telephone set of the next available group member. CDR (Call Detail Records) and SMDR (Station Message Detail Records) record call information such as length, cost, and extension so telemanagement tracking and fraud prevention can be performed.
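The CDR and SMDR records just described lend themselves to simple telemanagement summaries. The sketch below uses a hypothetical record layout (field names are illustrative, not an actual vendor format) to total call costs per extension:

```python
# Hypothetical CDR layout and a per-extension cost summary of the
# kind a telemanagement report would contain.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CallDetailRecord:
    extension: str   # originating PBX extension
    dialed: str      # number dialed
    seconds: int     # call duration
    cost_cents: int  # billed cost, in cents

def cost_by_extension(records):
    """Total call cost (in cents) per originating extension."""
    totals = defaultdict(int)
    for r in records:
        totals[r.extension] += r.cost_cents
    return dict(totals)

cdrs = [CallDetailRecord("101", "18005550100", 340, 85),
        CallDetailRecord("101", "19045550123", 60, 12),
        CallDetailRecord("102", "13055550199", 900, 210)]
print(cost_by_extension(cdrs))  # {'101': 97, '102': 210}
```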
Telemanagement reports can be prepared automatically to show long-distance charges, traffic to and from certain numbers or extensions, time on telephone per customer, etc. Least-cost long-distance routing automatically selects the cheapest long-distance carrier for a particular call at a particular time. Pickup groups are sets of extensions that may be answered by anyone in a certain area. Predictive dialing is an outbound telemarketing service that allows computers to dial calls automatically and connect them to an operator once the call is answered by a human being.
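Least-cost routing reduces to a lookup over carrier rate tables. A minimal sketch, with made-up carrier names and per-minute rates:

```python
# Hypothetical rate tables; a real PBX would also factor in time of
# day, destination prefix, and contract minimums.
RATE_TABLES = {
    "CarrierA": {"domestic": 0.07, "international": 0.55},
    "CarrierB": {"domestic": 0.05, "international": 0.80},
}

def least_cost_carrier(call_class):
    """Pick the carrier with the lowest per-minute rate for this call class."""
    return min(RATE_TABLES, key=lambda c: RATE_TABLES[c][call_class])

print(least_cost_carrier("domestic"))       # CarrierB
print(least_cost_carrier("international"))  # CarrierA
```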
PBX station services are advanced telephony features compatible with the PBX. These include such things as caller ID, distinctive ringing, DID, FX, call blocking, call waiting, call transfer, do not disturb, speed dialing, and three-way and conference calling. When the services are purchased on a per trunk basis instead of a per line basis, costs can drop dramatically. Many ALECs and cablecos offer advanced telephony features at no additional charge to local customers in an attempt to woo customers from ILECs. The trunks used by modern PBXs may use dedicated and/or circuit-switched circuits to access the PSTN. The next sections cover access or the OSI physical layer used by enhanced telecommunications services.
4.7.3 Dedicated Circuits
Digital T-1 and ISDN are often both thought of as services and technologies. In reality, they both can function as physical layer (as defined in the OSI model, Figure 3-6) carrier connections that allow agribusinesses access to the PSTN, Internet, or their own private data network. Another dedicated circuit technology, SONET, is a special case since it can be more than simply a physical carrier. As will be shown in 4.7.4, ISDN has such a broad definition that it can be hard to pin down an exact meaning except in context. The main difference between the dedicated circuits covered in this sub-section and the circuit-switched circuits of 4.7.4 (such as ISDN) is that dedicated circuits are not circuit-switched through telco voice switches.
The dedicated circuits shown in Table 4-20 have several things in common with respect to enhanced telecommunications. First, dedicated circuits are leased from an ILEC or ALEC for the sole use of an agribusiness and are available around the clock 365 days each year. Second, they are point-to-point circuits that travel between agribusiness locations (or from an agribusiness to a provider's POP). Typically, dedicated circuits use telco access and transport networks, but do not travel through CO switches.
Hence, in addition to avoiding congestion with PSTN traffic, dedicated circuits avoid the chance of poor connections and other vagaries connected with POTS switches. Most dedicated circuits are "always-on" so there is no connection establishment delay or connection establishment failure. Other QOS variables can be controlled better by carriers since dedicated circuits are engineered to be resilient enough that failure probabilities are extremely low. Charges are a flat monthly fee based on constant use of the entire circuit capacity, whether it is actually used or not. Dedicated circuits that cross LATA boundaries are more costly than intra-LATA dedicated circuits if an ILEC provides them.
There can be significant costs if an agribusiness switches from one carrier to another (but keeps the same type of circuit) or if it switches from one dedicated circuit to another. Such switching costs occur because contracts typically offer lower rates in exchange for longer obligations. Terms of one to five years with a particular carrier at a set price are common. Even if prices fall or new lower-priced carriers begin to serve an area, the agribusiness is bound by the contract. Furthermore, in many cases, the CPE purchased for use with one carrier's identical service offering will be incompatible with that of another carrier for technical reasons or because carriers make alliances with CPE manufacturers.
Changing from one type of dedicated service to another or adding to existing service (because of business expansion or new needs, etc.) can mean costly rewiring and installation charges and the purchase of new CPE. Service switching costs can even include installation charges for access loops and DCE in the loop such as repeaters or amplifiers, a fact remotely located businesses are already aware of. Thus, networks must be planned carefully with dedicated circuits because capacities must be correctly estimated.
| Circuit | Traffic | Voice capacity | Bandwidth (Code) | Data capacity |
| --- | --- | --- | --- | --- |
| Dedicated analog voice grade line | Analog voice or modem data | Non-switched leased line from one point to another | 4 kHz (NA) | Modem capacity up to 33.6/56 kbps |
| DS-0 dedicated digital line or lines (fractional T) | Voice trunk, voice-data mix possible | Varies by compression, overhead; typically 1 digital circuit (voice or data) per 56-64 kbps | 8-40 kHz (PCM) | 56-64 kbps per circuit, symmetric |
| DS-1, T-1 digital carrier | Voice trunk, Internet, data | 24 channels @ 56 or 64 kbps per digital line | 1.544 MHz (AMI), 2.316 MHz (B8ZS) | 1.536 Mbps, symmetric |
| DS-1, T-1 (HDSL feeder) | Voice trunk or voice-data mix (smart T) | Fewer than 24 channels when shared with data | 420 kHz (2B1Q) | 1.536 Mbps, symmetric |
| DS-3, T-3 | Multiple voice trunks or voice-data mix | 672 voice channels or mix of voice, data | 67.145 MHz (B3ZS) | 44.736 Mbps, symmetric |
| DSL | Voice-data mix or data only | 0 to many, depending on variety (see Table 4-21) | 1110 kHz (ADSL) | Differs by DSL variety |
| Cableco modem & phone | Internet, enhanced telephony, VPN | 1-3 lines; new technologies may permit more | 360 MHz (QAM, QPSK) | 30 Mbps downstream (shared), 768 kbps upstream |
| SONET self-healing ring | Fiber optic data & multiple voice trunks | Varies by OC level & compression: 1,000 (OC-1) to 150,000 (OC-192) | To 1 GHz (WDM, DWDM) | Varies by OC level; OC-1 is 51.84 Mbps |
| Wireless dedicated Ts | Voice, data, or mix | Varies by service, carrier, and distance | Varies | Varies |

Sources: Tower, 1999; FitzGerald and Dennis, 1999; Hill Associates, 1998.
Analog voice grade dedicated circuits (the first item in Table 4-20) are traditional POTS lines that carry analog voice or data via modem from one point to another over a non-switched route dedicated to the subscriber. Analog leased lines are charged monthly on a per line basis with one voice grade line the usual unit. They may be used for voice intercoms, telephones, or to connect computers at two locations using ordinary modems. Dedicated analog voice grade circuits are also used to connect agribusinesses with ISPs for dedicated Internet connections, links for POS devices, or for remote sensing. The circuit may be open at all times or require connection establishment depending on CPE. Channel bonding is a technology that may be used to combine the bandwidth of two (or more) dedicated analog circuits to create faster data rates and better throughput. However, each end of the connection must have a channel bonding capable modem.
The remaining dedicated circuits are digital. The DS (Digital Signal) hierarchy governs how the next four dedicated circuits in Table 4-20, DS-0, fractional T, T-1 (DS-1), and T-3 (DS-3), are sold. The first level of the DS hierarchy, DS-0 or fractional T circuits, is sold in increments of 64 kbps by ILECs and ALECs. DDS (Digital Data Service) is the name of telco offerings for single 64 kbps circuits. Voice, data, or a voice-data mix may be carried on a single DDS point-to-point line. However, DDS (as the name implies) circuits normally carry data networking or Internet traffic. A CSU/DSU is needed at the customer premises.
T-1 and T-3 dedicated circuits are called T carriers to emphasize the fact that, technically, they are technologies used to carry services. Services such as ISDN-PRI, frame relay, SMDS, ATM, and the Internet may be carried via Ts. However, they are also sold as point-to-point connections for other services such as links in private data networks or for voice telephony use. Typically, voice Ts are switched ISDN-PRI circuits (covered in 4.7.4), but it can often be hard (and sometimes unnecessary) to distinguish T-1s from the voice or data services they carry.
T-1s previously were so expensive that only large corporations could afford them. Hence, fractional Ts (groups of 64 kbps channels such as 128 kbps, 256 kbps, etc.) were introduced for the small business market. Now prices are as low as $100 per month for fractional Ts and even full T-1s are priced from $500 to $1200 per month. Therefore, smaller firms can often save the full monthly cost of the T when they switch from multiple single lines to fractional T-1 or T-1 trunks.
Some IXCs offer dedicated fractional Ts from the agribusiness to the long-distance POP for as few as eight lines, avoiding local service line charges as a full T-1 does, but for much smaller businesses. With a long-distance fractional or full T, long-distance calls are routed directly to the long-distance carrier without having to pay for local loop charges on these lines. Fractional Ts can be used to connect frame relay equipment to other locations in the company WAN or to ISPs. Typically, only one telephone conversation per 56-64 kbps channel can be carried. However, if a voice compression technology other than standard PCM is used, more conversations can be handled in a single channel. Upgrades to full T-1 leased circuits usually involve changing edge devices, but are not likely to require installation fees from the ILEC or ALEC if fractional service has been established.
T-1s transmit point-to-point DS-1 (1.544 Mbps) signals (which carry voice or data or combination traffic) from one point to another. Typically, separate T-1 carrier lines are used for local telephony trunk lines, intra-LATA long-distance trunk lines, inter-LATA long-distance trunk lines, and DID or DID FX trunk lines. Since T-1s feature in-band signaling, each T-1 trunk has 24 channels for individual telephone conversations. Voice-only T-1s are access level connections to local telephone switches of ILECs or ALECs, or to long-distance IXC POPs. Data-only T-1s serve as dedicated links in WANs or as dedicated links to ISPs from an agribusiness location with data rates of 1.536 Mbps. Some Ts (smart Ts) can be used to combine various voice, data, and Internet services in one T. Local T-1 service is available in some areas from cablecos, wireless carriers, and directly on optical fiber networks of IXCs or ALECs. Voice T-1 service differs from ISDN-PRI voice service (carried at T-1 speeds) because switching is not done through the ILEC's voice switch. Often, what is sold as a voice T-1 is technically an ISDN-PRI.
There are several T-1 carrier technologies: AMI/B8ZS, 2B1Q/HDSL T-1, and CAP/HDSL T-1. Before considering what these acronyms stand for and why the type of T-1 carrier technology is important to agribusinesses, realize there are differences that go beyond the three varieties shown in Table 4-20. T carriers differ according to framing, signaling, timing, and whether clear channel capability and customer reconfiguration control (smart Ts) are offered.
Framing concerns the order in which bits and overhead information (signaling) are sent [Hill Associates, 1998, p. 303.2.6]. Framing reduces the usable part of a T-1 circuit to 1.536 Mbps. Two different kinds of framing, SF (Superframe) and ESF (Extended Superframe), are available. ESF gives both the agribusiness and the telephone company a superior ability to diagnose problems with T-1 lines that otherwise can go undiagnosed for weeks.
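The 1.544/1.536 Mbps figures follow directly from the T-1 frame structure: each 125-microsecond frame carries one 8-bit sample for each of 24 channels plus a single framing bit, at 8,000 frames per second.

```python
# T-1 framing arithmetic behind the line rate and payload figures.
CHANNELS, BITS_PER_SAMPLE, FRAMING_BITS, FRAMES_PER_SEC = 24, 8, 1, 8000

frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS  # 193 bits per frame
line_rate = frame_bits * FRAMES_PER_SEC                 # 1,544,000 bps
payload = CHANNELS * BITS_PER_SAMPLE * FRAMES_PER_SEC   # 1,536,000 bps usable
overhead = line_rate - payload                          # 8,000 bps of framing
print(line_rate, payload, overhead)  # 1544000 1536000 8000
```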
The kind of signaling technology affects the circuit's capacity and may determine whether CPE will be compatible with the carrier's hardware and software. Out-of-band or ZCS (Zero Code Suppression) signaling is used with ISDN-PRI (4.7.4), so one of the 24 channels in those circuits carries information about the status and operation of each of the 23 remaining user channels. In-band T-1 signaling lets all 24 voice channels be available to subscribers. Robbed-bit in-band signaling (used with voice Ts) allows signaling information to travel over the same channels voice calls do by "robbing bits" from the voice call without affecting call quality [Hill Associates, 1998, p. 303.2.11]. Dial tone, ringing, caller ID, and toll and billing data are examples of voice signaling data.
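At the bit level, robbing a bit simply overwrites the least significant bit of an 8-bit voice sample with a signaling bit (done only in every sixth frame, which is why call quality is unaffected). A minimal sketch of that single-sample operation:

```python
# Replace the least significant bit of an 8-bit PCM sample with a
# signaling bit, as robbed-bit signaling does in every sixth frame.
def rob_bit(sample, signaling_bit):
    """Overwrite the LSB of an 8-bit sample with a signaling bit (0 or 1)."""
    return (sample & 0b11111110) | (signaling_bit & 1)

sample = 0b10110101
print(bin(rob_bit(sample, 0)))  # 0b10110100
print(bin(rob_bit(sample, 1)))  # 0b10110101
```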
Timing technologies synchronize bits, maintaining sending and receiving flow control. These background technologies matter to agribusinesses because they determine the distance from CO (or RAV) to customer premises that a T-1 circuit is capable of reaching in addition to the cost of providing the line.
Each T-1 technology has other important differences such as differences in line coding or modulation. The next two figures show the frequency ranges of four different T-1 technologies (AMI, B8ZS, 2B1Q HDSL, and CAP T-1 HDSL). Since each technology has different bandwidth requirements, their ranges vary. Figure 4-43 shows the first part of this relationship, the bandwidth requirement.
The first T-1 technology, AMI (Alternate Mark Inversion), requires expensive repeaters spaced 2,000 to 6,000 feet apart over the full length of the loop. Additionally, AMI lines are so noisy that only one may be carried per bundle of 50-100 telephone lines to any one area, making installation difficult in areas where multiple T-1s are to be deployed. B8ZS (Bipolar with Eight-Zero Substitution) supplanted AMI as the most popular line code for T-1 transmission in many areas by the 1990s. B8ZS is AMI with ZBTSI (Zero-Byte Time Slot Interchange), a method that provides a clearer channel and better timing than AMI. However, standard AMI requires 1.544 MHz of bandwidth (and B8ZS needs 2.316 MHz), which can be a problem because attenuation is greater at higher frequencies.
As can be seen in Figure 4-44 [Adapted from Paradyne, 1999, p. 15], B8ZS improved on AMI's range without repeaters. While AMI could reach 6,000 feet, B8ZS could go just past 13,000 feet. However, note that 2B1Q and CAP (as used in less-costly DSL) transmit 1.536 Mbps for 12,000 and 18,000 feet (respectively) without the expensive repeaters or special DCE required with B8ZS. This is primarily due to their efficiency in bandwidth use (as shown in Figure 4-43).
Repeaterless HDSL feeder T-1s such as 2B1Q and CAP are better able to support smart T technology, and less expensive to deploy because repeaters are not needed within 12,000-18,000 feet of the CO. Over 70% of digital Ts now deployed are actually another type of dedicated circuit with superior range, a form of DSL called HDSL [Orckit, 2000] to be covered after T-3 dedicated circuits are mentioned.
T-3 carrier circuits have 28 times the capacity of T-1s. They are generally less expensive than buying from 10 to 15 T-1s separately. T-3 dedicated circuits operate in much the same way as T-1s, though they are mainly deployed over fiber optic lines rather than quad copper unless service is to be provided near the CO or special high-powered repeaters are used. The only way around installation of fiber optic cable for T-3 connections is via a point-to-point microwave path.
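The T-3 capacity figures follow from the DS hierarchy: a T-3 multiplexes 28 T-1s, each carrying 24 voice channels.

```python
# T-3 capacity arithmetic: 28 T-1s x 24 channels = 672 voice channels.
T1_PER_T3 = 28
CHANNELS_PER_T1 = 24

t3_voice_channels = T1_PER_T3 * CHANNELS_PER_T1
print(t3_voice_channels)  # 672, the figure quoted for a voice-only T-3
```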
Because of the size and capacity of T-3s, there are few agribusinesses with the communications scale needed to demand a full T-3. However, an agribusiness that hosts its own website, owns over a hundred computers (requiring Internet and data network access), and requires as few as one hundred telephone lines could find that it would save money with a converged T-3 connection. By purchasing a single connection, the business would save compared to the cost of paying for separate local trunks, long-distance trunks, and Internet and data connections. Fractional T-3 carriers are also offered.
T-3s with customer reconfiguration control (smart Ts) are perhaps the only current possibility for most agribusinesses other than fractional T-3 connections. A voice-only full T-3 carries 672 voice channels, far beyond the needs of all but large corporations. DCSs (Digital Cross-connect Systems) allow multiple services (various voice trunks, data, and Internet) to be carried over a T-3 between multiple agribusiness locations, sometimes using more than one carrier. However, a smart T-3 connection can take expensive equipment and considerable design and planning. In many cases, telcos design smart T-3s that are somehow not smart enough to be cross-connected with competitors, forcing one-stop shopping for multiple services.
Because they rely on existing copper wire, DSL (Digital Subscriber Line) technologies may be an excellent way for rural Florida residences and small Florida agribusinesses to acquire access to high-speed hypercommunication services. The ILEC's own facilities and existing copper wires are used to deliver voice, data, and Internet service using several DSL technologies. However, the provision of DSL services is highly sensitive to the distance between the subscriber and the serving CO. In some cases, wire centers and RAVs (Remote Access Vehicles) have been implemented that bring DSL closer for customers who are farther than 18,000 feet from a CO. However, DSL faces several technical barriers based on the ILEC wire plant. These barriers include loading coils (used often on longer local loops), bridged taps, and DSL incompatible RAVs. These were discussed in 4.3.2.
Table 4-21 lists twelve varieties of DSL. The full translation of each acronym, bandwidth in Hz (where available), maximum data rate, and maximum range are also shown. For several reasons, the sources used to prepare Table 4-21 showed enormous variability concerning the capabilities of DSL varieties. First, information changes from month to month as new technologies are rolled out and from region to region depending on how local carriers deploy DSL. Second, carriers offer levels of service that include hierarchies of bandwidth for most forms of DSL. For example, BellSouth's FastAccess™ ADSL is 1.5 Mbps upstream and 256 kbps downstream, while GTE's ADSL goes from a bronze level (256 kbps/64 kbps) to a platinum level (1.5 Mbps/768 Kbps). While most DSL CPE operates at several speeds, there can be CPE incompatibilities among line codes or modulation methods. Finally, differences in DSL technologies are responsible for divergent claims regarding the superiority of standards.
Hence, before discussing the varieties of DSL shown in Table 4-21, the problem of comparing the different standards within each DSL variety must be briefly addressed. For example, the ANSI and ITU ADSL standard modulation is the multi-carrier DMT (Discrete Multi-Tone), while Paradyne (formerly part of AT&T) uses a single-carrier CAP (Carrierless Amplitude and Phase modulation) scheme licensed from GlobeSpan Technologies, Inc. The debate among supporters of these two standards has been particularly contentious.
Both CAP and DMT modulate the upstream and downstream signals into frequency bands using passband modulation techniques. CAP is a DSL modulation based on QAM (Quadrature Amplitude Modulation). As shown on the right of Figure 4-45, CAP lets the DSL signal occupy the full bandwidth as a single carrier, separating transmissions in time, similar to TDM. DMT is a DSL technology that uses DSPs to divide the signal into 256 sub-channels of 4 kHz each, as shown on the left of Figure 4-45 [Schneider, 1999].
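The way DMT carves the loop's spectrum into sub-channels can be illustrated with a short numerical sketch. The 256 sub-channels of 4 kHz come from the text; the function and variable names are invented for illustration.

```python
# Illustrative sketch of DMT channelization, using the 256 sub-channels of
# 4 kHz cited in the text [Schneider, 1999]. Names are invented.
SUBCHANNELS = 256
SUBCHANNEL_WIDTH_HZ = 4_000

def dmt_subchannel_band(n):
    """Return the (low, high) frequency edges in Hz of sub-channel n (0-based)."""
    low = n * SUBCHANNEL_WIDTH_HZ
    return (low, low + SUBCHANNEL_WIDTH_HZ)

# Together the sub-channels tile 0 to 1024 kHz, matching the DMT operating
# range quoted elsewhere in this section.
total_bandwidth_hz = SUBCHANNELS * SUBCHANNEL_WIDTH_HZ
```

Because each sub-channel is modulated independently, a DMT transceiver can simply skip sub-channels that measurement shows to be noisy, which is the practical advantage claimed for multi-carrier schemes.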
However, CAP is a de facto but not a de jure standard. Regardless of CAP's standards status, it is used by many telcos for ADSL [Schneider, 1999]. DMT (Discrete Multi Tone) is the de jure ANSI/ITU ADSL standard in competition with CAP. DWMT (Discrete Wavelet Multi-Tone) is Aware Inc.'s de facto standard designed to be superior to regular DMT. According to Aware, "DWMT is able to maintain near optimum throughput in the narrow band noise environments typical of ADSL, VDSL, and Hybrid Fiber Coax, while DMT systems may be catastrophically impaired" [Aware, 1999, p. 8]. DWMT is used for VeDSL.
Several line codes are used for specific DSL varieties. Baseband line coding schemes such as 2B1Q (Two Binary, One Quaternary: a four-level code that compresses two binary bits into one symbol period) are used to provide IDSL and HDSL. OPTIS is used for HDSL2. Some DSL transceivers (also called DSL modems) can use either CAP or DMT, while others can only be set for the standard supported by the carrier. In addition to a DSL modem, an NIU (Network Interface Unit, or splitter) is needed as an edge device to connect to the local copper access loop.
All kinds of DSL (x-DSL is used to describe DSL service of an unknown or generic type) operate in several distinct frequency ranges in order to separate upstream DSL, downstream DSL, and voice signals. Figure 4-46 shows how frequencies are used in ADSL.
The voice channel for a single telephone line operates from 0 to 4 kHz. A guardband (4 to 30 kHz) separates the voice band from the upstream DSL signal, which occupies 30 to 138 kHz. Another guardband (138 to 160 kHz) separates the upstream band from the downstream band, which begins at 160 kHz and runs to 1104-1110 kHz depending on the distance to the subscriber and the service level offered. The precise DSL frequencies used vary by line code, distance, and the spectral compatibility of other services (such as ISDN and HDSL T-1) that may share access distribution cable. Guardbands prevent interference between voice, upstream, and downstream DSL signals.
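The frequency plan just described can be captured as a small table. A minimal sketch, with band edges in kHz taken from the text and names of our own choosing:

```python
# The ADSL band plan described above, with edges in kHz from the text.
# The downstream upper edge varies with distance and service level.
BANDS_KHZ = [
    ("voice (POTS)",     0,    4),
    ("guardband",        4,   30),
    ("upstream DSL",    30,  138),
    ("guardband",      138,  160),
    ("downstream DSL", 160, 1104),
]

def band_for(freq_khz):
    """Name of the band a frequency (kHz) falls into, or None if out of plan."""
    for name, low, high in BANDS_KHZ:
        if low <= freq_khz < high:
            return name
    return None
```

Keeping the voice band at the bottom of the plan is what allows an ordinary telephone and an ADSL modem to share one copper pair, with the splitter acting as the frequency filter between them.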
Just as was true for T-1s, higher DSL frequencies bring higher interference levels, especially if wires run above ground on telephone poles between the customer and the CO. However, increasing the data rate in either direction requires a larger slice of spectrum. Since DSL is a repeaterless technology (using existing copper access loops), it works better the shorter the distance from subscriber to CO. The same relationship between bandwidth, maximum frequency, and interference levels that governs T-1s is an important reason distance plays such a large role in DSL availability. Symmetry depends on the bandwidth of each direction's slice of spectrum, but lower frequencies suffer far less attenuation and can therefore carry far more data than higher ones. This is why the analog voice line is absent from some DSL varieties such as G.Lite, IDSL, and HDSL2, which use the low frequencies for data instead.
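The interplay of frequency, distance, and usable bandwidth can be sketched with a toy loss model. The square-root frequency dependence reflects skin effect on copper; the constant is arbitrary and the function is illustrative only, not a real cable model.

```python
import math

# Toy model of copper-loop loss: loss grows roughly with the square root of
# frequency (skin effect) and linearly with distance. The constant k is
# arbitrary, chosen only to show the shape of the trade-off.
def loop_loss_db(freq_khz, distance_kft, k=0.5):
    return k * math.sqrt(freq_khz) * distance_kft

# A fixed loss budget that a low-frequency signal meets over a long loop is
# exhausted by a high-frequency signal over a much shorter one, which is why
# high-rate DSL varieties have short ranges.
```

Under any model of this shape, doubling the loop length or quadrupling the top frequency costs the same loss, which is the qualitative reason the fastest DSL varieties (such as VDSL) are confined to the shortest loops.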
ADSL (Asymmetric DSL), the first entry in Table 4-21, is the variety of DSL most frequently deployed by ILECs in Florida. Indeed, there is an entire family of asymmetric DSL services including ADSL, MDSL, G.Lite (UDSL), CDSL, and RADSL. All five share asymmetric bandwidth, and most can carry a single telephone line. Developed at Bellcore beginning in 1989, ADSL offers maximum upstream rates of 640 kbps and downstream rates varying from 1.5-8 Mbps depending on distance, the condition of the copper loop, and local CO capacities. ADSL (DMT) operates from 0 to 1024 kHz, or 0 to 1110 kHz with POTS voice.
In 1999, the ITU formally approved a de jure G.Lite (or splitterless DSL) standard. G.Lite is like ADSL except that it does not require a visit to the customer premises to install a splitter that separates the voice and data circuits. Splitterless DSL is more convenient for the carrier since a service visit to the customer location (truck roll) to install a splitter (NIU) is not required. However, splitterless DSL sacrifices some speed, and only one telephone jack is DSL equipped.
CDSL (Consumer DSL) is a splitterless proprietary standard developed by Rockwell that uses a proprietary transmission technology instead of DMT or CAP ADSL technology. Like G.Lite, CDSL is splitterless (neither variety allows an analog POTS line), but CDSL has a shorter range and a slower data rate than G.Lite. By 2001, four times as many splitterless DSL circuits are expected to be deployed in the U.S. residential market as all DSL circuits with splitters. RADSL is a rate-adaptive form of DSL that adjusts the data rate to the subscriber's distance from the CO rather than supporting a uniform rate throughout the serving area.
Sources: FCC, 1999, p. 21; Schlegel, 1999; Tower, 1999, p. 9-6; Paradyne, 1999, p. 24; Weinhaus, 1998; Sheldon, 1998, pp. 312-314; Aware, 1999, 2000; Orckit, 1999, p. 5; Rhythms, 1999, p. 2; Zimmerman, 1998, p. 33.
The next two versions of DSL allow multiple voice lines over the same copper wire that carries DSL into the customer premises. Some DSL circuits (such as HDSL2) do not carry PSTN telephone conversations. Those DSL varieties that do carry PSTN calls are usually designed for a single telephone line, though future IDSL implementations are expected to carry two lines. VeDSL is a new technology (developed in 1999) that allows two or more telephone lines to work in conjunction with DSL, using voice compression and other techniques. The symmetric HDSL family (covered next) will soon join the multiple-voice-line category. When that occurs, trunk line prices may fall to one-tenth of their T-1 levels and one-third of their ISDN-PRI levels. Additionally, if special voice compression CPE is installed, from 44 to 148 telephone calls may travel simultaneously over one HDSL connection, even while data traffic obtains rates from 325 to 1181 kbps [Henderson and Lipp, 2000, p. 13]. Currently, other DSL varieties support one analog voice line or no voice service at all.
As Table 4-21 shows, the symmetric DSLs in the HDSL family include HDSL, HDSL2, SDSL, and MSDSL. HDSL (developed by Bellcore and Bell Labs in 1986) uses quad copper and the 2B1Q (4-PAM) line code that ISDN and IDSL use. HDSL can transmit symmetric T-1 signals without the need for repeaters every two to six thousand feet as with AMI, reducing costs and crosstalk in the process. HDSL2 requires only a single wire pair rather than quad, allowing repeaterless T carrier signals to travel over one pair for up to 18,000 feet. HDSL2 does not offer a voice line as HDSL does, but is used as a replacement technology for AMI and B8ZS T-1 dedicated circuits. HDSL2 uses a high-efficiency line code (OPTIS or 16-PAM). T-1 implementations using repeaterless HDSL need to be within 12,000 feet of the CO with 2B1Q or 18,000 feet with CAP. However, installation of a fractional HDSL T-1 (with 768 kbps speeds) is possible within 36,000 feet of a CO if doublers (special DSL repeaters) are used [Zimmerman, 1998, p. 46]. Currently, HDSL is used mainly as a separate T-1 connection for voice, Internet, or data networking. However, a powerful and flexible converged version (able to combine traffic types) will shortly be available [Henderson and Lipp, 2000].
SDSL was developed before HDSL2 as a single-wire-pair solution with symmetric data rates. SDSL allows a single voice line, while most HDSL implementations do not allow voice transmissions. SDSL allows static IP addresses (important to Internet applications), while ADSL usually uses dynamic IP addresses. MSDSL is a proprietary, multi-rate form of SDSL that allows rates of 128 to 1024 kbps within 11,500 feet [Schneider, 1999].
IDSL is an ANSI DSL standard developed in the 1980's mainly so modem calls could be removed from the switched PSTN. The idea was to prevent congestion of the telephone network by data calls (which are on average longer), so that the RBOCs could avoid increasing the number of simultaneous switched PSTN connections. IDSL does not require expensive circuit-switching gear to be deployed at the telco CO because, unlike ISDN-BRI, it does not connect through a circuit-switched voice switch. IDSL can be deployed over all kinds of RAVs and DLCs, the only DSL with such flexibility.
Moreover, in areas where ISDN is already deployed, IDSL may be offered at a very low cost regardless of the type of fiber-copper RAV that may be present. Other forms of DSL cannot be implemented over certain SLC-RAV combinations. IDSL uses the 2B1Q line code as its modulation scheme. Current versions of IDSL support data rates of around 144 kbps, similar to ISDN-BRI. IDSL may become more widely available with ISDN-PRI variants of HDSL or HDSL2 that provide a fully symmetric 1.544 Mbps ISDN-PRI connection supporting ten to twenty voice lines [Schneider, 1999].
The last variety of DSL is VDSL. VDSL (developed in 1995) can use QAM to achieve data rates of over 50 Mbps downstream and up to 2 Mbps upstream within a few thousand feet of a CO. Because of its short range, VDSL deployments are limited to areas near COs or to areas with hybrid copper-fiber infrastructures such as FTTC or FTTN. VDSL has by far the fastest data rates of any DSL variety, but also the shortest range. VDSL is not yet widely deployed.
The monthly cost of DSL ranges from one-third to one-twentieth the cost of fractional T or T-1 circuits with similar bandwidth. Often, however, as Table 4-21 shows, DSL will have asymmetric upstream and downstream rates while other dedicated circuits (such as T-1s) are symmetric. Asymmetric services (ADSL, G.Lite, IDSL, VDSL, and CDSL) are mainly intended to allow simultaneous PSTN and Internet access. Hence, DSL is often used for data (VPN, Virtual Private Networks, 4.9.7) or Internet access (4.9.1) for SOHO or small business applications.
Figure 4-47 shows how DSL is deployed [BellSouth, 1998]. An Ethernet card and cable connect the CPE DTE (computer) to a DSL modem, which attaches to a splitter or NIU (Network Interface Unit) at the demarcation point on the customer premises. The splitter separates telephone and data traffic at the agribusiness. In the BellSouth model, the NIU is connected to the CO via a conditioned copper loop. Upon arrival at the CO, POTS voice traffic travels through a voice switch and on to the PSTN, while data traffic passes through a DSLAM (Digital Subscriber Line Access Multiplexor). The DSLAM combines signals from all DSL connections in an area and forwards them to an ATM switch, which routes them to the Internet (via an NSP/ISP) or a private WAN.
To a certain degree, DSL is an evolving technology that is still seeking a market. The ADSL family is suited to SOHO or small farm applications, especially for the Internet and voice line combination. DSL can be provided at distances of up to 25,000 feet from COs either by ILECs or by specialized DSL ISP/ALECs such as Rhythms and Covad. ILECs allow such DSL competitors to provide service in some areas of Florida because DSLAMs and line conditioning are expensive and cannot be deployed everywhere in Florida simultaneously. The ability of alternative DSL carriers to serve an area depends on the ILEC's infrastructure design and co-location policies, since the competitor has to install a DSLAM at the CO to provide service.
VeDSL has great market potential, but is likely to be eclipsed by HDSL (or new IDSL PRI-style) circuits capable of serving as converged networks. At present, HDSL is mainly being sold as a technologically superior version of a T-1 carrier. HDSL2 is expected to replace data-only HDSL and to be more widely deployed, since only one wire pair is needed. Meanwhile, advanced HDSL circuits will offer businesses the data networking benefits of T-1 dedicated circuits and the ability to use the office PBX to support enhanced voice as ISDN-PRI currently does.
Before moving on to circuit-switched digital connections in 4.7.4, three other dedicated circuits need to be mentioned. The first of these is cable modem and telephony service. Because they use broadband signaling (and are shared with as many as 5,000 other users on a node), cable modem connections are not dedicated in the same way a T-1 or DSL is. However, data connections over cable share the lack of connection establishment delay and the high speeds of other dedicated connections, while cable prices are dramatically lower than those of T-1s or many kinds of DSL. Cablecos throughout Florida are rushing to deploy hybrid coax-fiber infrastructures (see 4.3.3) capable of supporting digital cable TV, two-way Internet access, and enhanced telephony. Cableco offerings are aimed primarily at the residential market in cities, towns, and suburban unincorporated areas for several reasons.
The first reason cableco dedicated circuits are aimed at the residential market is that demand for cable TV is found mainly in the home rather than the office. Second, while cable modems offer data rates that rival T-1 speeds, cable modems are shared connections, so speeds can decrease as more subscribers use a node. Furthermore, there are security problems inherent in creating business data networks over a shared broadband pipe, along with a shortage of CPE that would support more than two or three computers at a location served by cable modem service. The telephony offerings of cablecos, while price competitive (most enhanced services are far cheaper than when purchased from ILECs or ALECs), are hobbled by the asymmetric nature of the cable network. Most systems do not have the upstream capacity necessary to support more than one to four telephones per location.
Small agribusinesses located in small towns or suburban areas may find cable-provided voice to be a better buy than telco offerings. Over the same connection, high-speed cable Internet is a bargain when compared to other always-on dedicated connections. The cablecos are busy rolling out residential service. Once the cable industry begins to focus on business (currently a secondary market), expect to see new varieties of CPE, more truly dedicated circuits, and the ability to support many telephones over a single trunk.
SONET (Synchronous Optical Network) is another example of a dedicated circuit. SONET is unique among dedicated circuits because it often features self-healing rings (route diversity), so that service is not interrupted even if a fiber is cut. SONET requires a physical connection to a fiber optic network. Like a T-3 carrier, SONET is a carrier circuit able to transmit data and multiple voice trunks. The bandwidth of a fiber optic cable (up to 1 GHz) can support numerous voice circuits. The number of telephone lines varies from 1,000 voice channels on an OC-1 to over 150,000 on an OC-192, according to the OC level and the compression technology. Similarly, data rates vary by OC level: an OC-1 SONET carrier circuit carries 51.84 Mbps, while an OC-192 carries 9.953 Gbps. SONET is used to carry services ranging from enhanced voice telecommunications to data networking and Internet access. Currently, SONET requires a large business scale to take advantage of its speed and versatility, and a business location on or near a carrier's fiber backbone. Since SONET supports data networking applications and services in addition to enhanced telecommunications services, it is covered in more detail in 4.8.4.
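The OC data rates quoted above scale linearly from the OC-1 base rate, which the following sketch reproduces (names are invented for illustration):

```python
# SONET OC-n line rates scale linearly from the OC-1 base rate of 51.84 Mbps.
OC1_MBPS = 51.84

def oc_rate_mbps(n):
    """Line rate of an OC-n SONET carrier in Mbps."""
    return n * OC1_MBPS
```

For example, `oc_rate_mbps(192)` gives 9953.28 Mbps, the 9.953 Gbps figure cited in the text; intermediate levels such as OC-3 (155.52 Mbps) and OC-12 (622.08 Mbps) follow the same rule.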
Wireless technologies are also capable of carrying most of the dedicated circuits mentioned here. Already, some remote locations have T-1 or T-3 connections via microwave links. As mentioned in 4.4, a variety of wireless technologies (such as LMDS, MMDS, DEMS, and WLL) are capable, or soon will be, of providing fixed dedicated circuits at up to T-3 speeds.
4.7.4 Circuit-Switched Digital Connections
"Circuit-switched digital connections" refers to digital traffic that is circuit switched through the ILEC CO (or ALEC switch) rather than carried over a point-to-point dedicated circuit. The most important circuit-switched digital circuit is ISDN-PRI, used to carry 23 voice channels between a business PBX and the PSTN, though ISDN-PRI can also support switched data circuits.
Circuit-switched services have several advantages over dedicated circuits. First, firms pay only for the bandwidth they need, for the time they need it. Unlike dedicated circuits, circuit-switched services often allow bandwidth on demand, giving the customer the ability to tailor capacity to changing needs rather than fit business needs to a static capacity. However, circuit-switched connections involve connection establishment delay and the possibility of connection establishment failure, both unheard of with dedicated circuits.
Table 4-22 lists the main circuit-switched digital services. ISDN dominates the list. Used here, ISDN has a specific definition: an end-to-end circuit-switched digital connection (with out-of-band signaling) capable of carrying voice or data in multiple channels. ISDN uses a CSN (Circuit-Switched Network) that creates on-demand (rather than dedicated) physical circuits. When a digital ISDN telephone is taken off hook, a circuit is reserved for the user (if one is available) until the telephone is hung up. ISDN is available in two forms: BRI (Basic Rate Interface) and PRI (Primary Rate Interface).
ISDN-BRI offers two 64 kbps B channels usable either as telephone lines or as Internet or data connections. ISDN-BRI had at one time been expected to surpass the modem in popularity, but because of slow ILEC deployment and the tendency for circuit-switched services to carry per-minute charges, ISDN-BRI never became a popular method of Internet access. An NT1 is an edge device needed on the customer's premises to connect to the digital telephone line used to carry ISDN signals. Up to eight ISDN-compatible devices, including fax machines and digital telephones, can be connected to the BRI circuit. While the two B channels can support a maximum of two telephone calls at once, multiple calls can be put on hold using D-channel signaling (multiple call appearances). Analog telephones require special adapters to be used with either form of ISDN, though a PBX can serve as both the adapter and the NT edge device (NT2) with ISDN-PRI. To deploy ISDN, ILECs use standard twisted copper pair, but must use ISDN signaling, install special CO equipment, buy DMS-100 or 5ESS switches, and condition lines longer than 15,000 feet [Eichon, 1996].
Sources: Paradyne, 1999; Eichon, 1996; Rhythms, Inc., 1999.
ISDN-PRI is the main ISDN service deployed today in Florida, featuring 23 B (Bearer) channels of 64 kbps and one 64 kbps D channel used for signaling. Usually, only one DCE device (such as a PBX) can be directly connected to the ISDN-PRI circuit. Hence, ISDN-PRI is often used for offices equipped with PBXs, since a PBX can switch calls made over the 23 B channels onto separate telephone connections in the office. Some ISDN implementations (depending on carrier and location) allow specialized joint data-telephony use of the circuit, where bandwidth can be dynamically allocated between data and telephone calls. Once a telephone call is finished, that channel's bandwidth can be added to a data transmission or Internet connection. However, when an office telephone rings or a call needs to be placed, data transmission is automatically reduced so the B channel can carry the call. While ISDN-PRI can be used for WAN connections and limited packet-switched data applications, the primary business use is enhanced telephony.
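The dynamic trade between voice calls and data described above can be sketched simply: each active call claims one 64 kbps B channel, and the remainder can be pooled for data. A minimal sketch, with names of our own choosing:

```python
# Dynamic bandwidth allocation on an ISDN-PRI: 23 B channels of 64 kbps,
# shared between telephone calls and data. Each active call occupies one
# B channel; the rest can be pooled for data or Internet traffic.
TOTAL_B_CHANNELS = 23
B_CHANNEL_KBPS = 64

def data_bandwidth_kbps(active_calls):
    """B-channel bandwidth left over for data while calls are in progress."""
    if not 0 <= active_calls <= TOTAL_B_CHANNELS:
        raise ValueError("a PRI carries at most 23 simultaneous calls")
    return (TOTAL_B_CHANNELS - active_calls) * B_CHANNEL_KBPS
```

With no calls up, all 1472 kbps of bearer capacity is available to data; each call placed or received reduces data throughput by exactly one 64 kbps channel.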
Voice use predominates because ISDN is compatible with SS7 signaling and AIN CO technologies so that it is often used for call center and CTI applications. ISDN-PRI supports the full range of enhanced telephony services so that business PBXs have system and station features such as caller ID, three-way calling, etc. without having to pay for them on multiple lines. ISDN-PRI is also used for video conferencing and Internet access. Using an inverse multiplexor, B channels can be combined, and all the bandwidth allocated to one application.
The delivery capabilities for rural Florida vary by ILEC and type of ISDN service. Because ISDN tariffs include monthly fixed charges and metered or measured tolls, ISDN can be an expensive way to obtain a mix of Internet access, voice, and data communications, especially for rural agribusinesses. However, so-called ISDN Ts (actually ISDN-PRI connections dedicated to telephony) are popular ways for medium sized businesses to get T-1 speeds without having to purchase separate dedicated circuits for long-distance and local telephone service.
Another kind of circuit-switched digital service is Switched 56, a CSD (Circuit-Switched Data) technology in which a 56 kbps circuit is established between two users for as long as desired at a constant rate. Unlike a 56 kbps modem, Switched 56 service offers symmetric 56 kbps bandwidth for data connections between parties with compatible equipment. Switched 56's main advantage is that it is usable on demand, without paying for a dedicated line. Switched 64 circuits are also available.
4.7.5 Enhanced Mobile Telecommunications: Digital Cellular and PCS
As discussed in 4.4 (the wireless technologies section), cellular service serves mobile customers. Digital cellular technologies provide voice and limited data services. Other wireless mobile telephony services, such as narrowband and broadband PCS and SMR (iDEN), are late second- and early third-generation wireless technologies that support many services beyond enhanced telephony. For this reason, enhanced mobile telephony includes various technologies that support telephone service and enhanced telephone services (such as caller ID), along with limited data and Internet communications.
Mobile telephony evolved from the early radiotelephone into analog cellular (AMPS). The genesis of the wireless telephony market can be traced to GIs returning from World War II who, accustomed to military walkie-talkies, demanded wireless radiotelephone service in civilian life. Radiotelephone is an imprecise term because many wireless devices can now be used as telephones. Historically, radiotelephones were the technology used by police vehicles in the 1920's. Shortages of spectrum and cumbersome, expensive equipment with high power consumption were the rule until AT&T (benefiting from its wartime technological breakthroughs) launched mobile radiotelephone service in 1946 [Stone, 1997, p. 144].
Yet, almost forty years later the postwar radiotelephone market was extremely limited. According to Fortune magazine,
At the end of 1983, this service was so limited that even in New York City, only twelve subscribers at once could be engaged in conversation with a total subscribership of 730 and a waiting list of two thousand. [Colin Leinster, Fortune August 6, 1984, p. 108]
Even when a call could be placed, mobile radiophone connections had poor sound, frequent static, interference, and other noise in addition to being extremely expensive and bulky.
Cellular was proposed internally at Bell Labs in 1947, and the basic ideas were published in 1960. Out of this work came AT&T's 1971 proposal for analog cellular service, or AMPS (Advanced Mobile Phone Service). In 1974, the FCC ordered that only wireline telephone companies could be cellular carriers. Because service required blocks of 20 MHz, the 40 MHz cellular radio spectrum allocation could go to at most two companies per CMA (Cellular Market Area). From 1974 to 1979, the FCC adopted a cautious attitude, granting construction licenses conservatively. Japan and Sweden moved faster than the U.S., whose standards were incompatible with the foreign systems. By 1981, the FCC allowed one wireline ILEC and one non-wireline carrier (determined by "competitive" application) per market [Stone, 1997, p. 147].
The AT&T breakup raised the issue of whether the RBOCs or AT&T would get custody of the cellular market. The strict regulatory distinction between local and non-local services meant that, instead of AT&T, only the RBOCs (defined as local) were awarded "wireline" cellular franchises. However, RBOC boundaries followed state lines, while cellular service followed the curved contours of radio coverage areas and FCC-defined CMAs. Non-wireline applicants engaged in lengthy rent-seeking lobbying in order to be declared the "competitive" applicant.
McCaw developed a mobile digital cellular technology separate from AT&T's AMPS. D-AMPS (Digital AMPS) was brought into service in 1992 to relieve existing cellular network congestion. McCaw devised a Cellular One system in which national coverage could be handled by one carrier, eliminating expensive roaming charges and differences in service variability and availability. Sweden's Ericsson developed technology to install D-AMPS onto an existing AMPS network without requiring any additional radio frequencies. McCaw rolled out the new digital service in south Florida. Digital technologies allowed less distorted calls and value-added services such as paging through the same customer equipment. McCaw also pioneered the CDPD (Cellular Digital Packet Data) system, in which speech and data compression technologies allowed data packets to be transmitted simultaneously with voice conversations [Stone, 1997]. AT&T purchased McCaw Cellular in a 1994 deal worth $11.5 billion to create the first nationwide digital cellular carrier.
Nationwide, the 100,000 cellular subscribers of 1984 grew to 3.5 million by 1989, 5.3 million in 1990, and 23.2 million in 1994. In 1995, the expectation for the year 2000 was 46.9 million cellular customers [Stone, 1997, p. 145]. However, by 1998, over 69 million mobile telephony subscribers produced over $30 billion in service revenues nationwide [FCC 99-136, 1999, p. B-2]. At the end of 1998, 49.2 million subscribers were analog cellular customers. There were some 20 million digital wireless subscribers using four technologies: GSM (2.7 million, all broadband PCS), TDMA (8 million, broadband PCS and digital cellular), CDMA (6.4 million, broadband PCS and digital cellular), and iDEN (2.9 million, digital SMR) [FCC 99-136, 1999, p. B-10].
PCS is an acronym for Personal Communications System. PCS was discussed as a wireless mobile technology in 4.4. However, the PCS market is harder to define than the PCS technology. According to the NTIA, "PCS represents a considerable technological improvement over early analog cellular services, incorporating digital voice compression, complex CDMA or TDMA access protocols, and a fast-growing variety of advanced services" [Vanderau, Matheson, and Haakinson, 1998, p. 19].
Unfortunately, multiple definitions have sprung up for the term PCS. The concept is often broad, as in attorney Thomas A. Monheim's conceptualization of PCS:
Imagine having one, permanent telephone number and a small, wireless telephone that you could carry and use everywhere--at home, at the office, or in the car. Imagine having a laptop computer that had built-in radio functions and was connected to a wireless local network. Imagine having a sensor in your car that was part of an intelligent vehicle highway system that kept you appraised of traffic conditions and ideal commuting routes. These are just a few of the scenarios that may become everyday reality with the development of advanced PCS. [Monheim, 1992, p. 336]
Other authors say PCS is distinct from other services because PCS uses frequency reuse technology in the 1900 MHz frequency bands, offered by FCC "common carriers" with spectrum obtained at FCC auctions. PCS telephones may allow one and two-way paging, messaging, and even Internet access in addition to voice calls. Fax and video delivery are possible, as is some level of location monitoring (911 capability). Harte et al. (1997) give three definitions of PCS:
1) Any type of wireless technology, 2) A wireless system operating on the North American 1.9 GHz band, in distinction with a system operating on the 800 MHz 'cellular' band, 3) PCS-1900 technology in distinction to other access technologies. [Harte et al., 1997, p. 404]
It may be easier to use Stone's argument urging operational conceptualizations "to which all interested parties subscribe, rather than definitions" [Stone, 1997, p. 154]. PCS cannot match cellular's range of about twenty miles (with 1997 technology) from transmitter to base station, and cellular operates better in open spaces. PCS operates at higher frequencies and works indoors, outdoors, in tunnels, and behind mountains, but had only a one-thousand-foot range from base station to wireless set in 1997 [Stone, 1997]. By the year 2000, Lucent Technologies' Flexent CDMA technology was able to extend CDMA base station ranges up to 70 miles, while GSM-based systems are able to reach from 20 to 40 miles.
In 1990, the FCC allocated 220 MHz of spectrum in the 1850 to 2200 MHz band to broadband PCS. Additional allocations were made for narrowband PCS in the 900 MHz band [Stone, 1997]. The U.S. government auctioned additional spectra, receiving over $1 billion in a two-stage auction process administered by the FCC. Providing PCS involves infrastructure costs in the tens of billions of dollars nationwide, along with environmental and aesthetic issues surrounding tower construction.
PCS technology and standards are quickly developing around TDMA (E-TDMA), CDMA (W-CDMA and CDMA-2000), GSM, and SMR technologies. These four types correspond to the third generation of wireless technologies (see Figure 4-29 in 4.4) and are associated with PCS, but do not define it. Each began as a de facto proprietary standard sponsored by a different group of carriers and manufacturers. For example, W-CDMA (5 MHz channel) is sponsored by GSM operators, while CDMA-2000 (5 MHz channel) is sponsored by IS-95 operators. The UWC-136 standard (200 kHz or 1.6 MHz channels) is TDMA or E-TDMA based.
Common wireless mobile devices capable of supporting some form of PCS are shown in Table 4-23. Note that in the space of one year, pricing has fallen to between one-half and one-tenth of 1999 levels as carriers vie to capture an installed base, equipment becomes cheaper to make, and infrastructure costs fall.
Adapted from Zaatari, 1999, p. 136, 2000 prices from Alta Vista Shopping & Letstalk.com.
Another technological standard associated with the PCS market (shown in Table 4-23) is SMR (Specialized Mobile Radio). SMR devices are unique because they are both wireless telephones and private radios able to communicate with other radios in an agribusiness over a secure network. When a private Texas firm sold its Carterfone radios, which could also be interconnected to the PSTN, AT&T tried to prevent such interconnections. Carterfone prevailed, and that was the beginning of SMR service, where each device could communicate with other radios in a business or with any PSTN telephone. Nextel captured the SMR frequencies by buying up the so-called dispatch frequencies and then buying 2,500 frequencies from Motorola in a stock swap. Nextel also took over SMR frequencies through acquisitions and bankruptcies of other firms [FCC, 99-136, 1999]. SMR is neither cellular nor PCS, but it is a mobile wireless telephony technology with a market presence in Florida.
Most wireless telephony devices are sold with service plans so that manufacturers recover costs from carriers (who are paid on a recurring basis rather than entirely up front). The third generation mobile devices shown in Table 4-23 also offer features such as e-mail, fax, messaging, and Internet access. While displays are limited to small screens and web standards such as WAP and AP are limited, text e-mails, market information such as futures quotes, and limited shopping are available to agribusiness users. Additionally, service areas for two-way e-mail and Internet access are sometimes smaller than for telephony.
4.7.6 Paging and Wireless Messaging
Developed in 1949 by Charles Neergard, pagers are small devices that allow one-way communication or limited two-way communication of messages. Pagers have built-in address codes (cap codes) that enable them to receive signals from a ground or satellite radio transmitter. A message reaches a particular pager when that pager's specific cap code is received through broadcast of a radio signal.
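The cap-code addressing just described can be sketched in a few lines of code. This is a hypothetical illustration only: the cap code values and message format are invented, and real paging protocols (such as POCSAG or FLEX) are considerably more involved.

```python
# Illustrative sketch of cap-code addressing: every pager on the
# channel hears every broadcast, but only the unit whose built-in
# cap code matches the transmitted code responds to the message.
# Cap codes and message format here are invented for illustration.

def receive(broadcast, my_cap_code):
    """Return the message if addressed to this pager, else None."""
    cap_code, message = broadcast
    if cap_code == my_cap_code:
        return message          # beep / display the message
    return None                 # silently ignore other pagers' traffic

# Every pager on the channel sees the same broadcast.
broadcast = (1048576, "555-0142")      # (cap code, numeric message)
print(receive(broadcast, 1048576))     # matching pager displays message
print(receive(broadcast, 2097152))     # non-matching pager stays silent
```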
The pager market has seen six main types of pagers. The first type is the cap code pager (beeper). These units simply beep when the built-in address is picked up by the pager's radio receiver once the pager's number has been dialed by telephone. The recipient would have to know (in advance) who had called and where to return the call, because cap code pagers could not convey that information. Because all they could do was beep, cap code pagers resulted in the term beeper being associated with pagers. Tone voice pagers, the second type, originated in the United States in the 1970's. Tone voice pagers permit the sender to transmit a brief voice message to the pager-receiver. To page someone, a person dials a local, long distance, or toll-free number (using a touch tone telephone) and speaks a short message.
Digital display numeric pagers (the third type) were introduced in the early 1980's and are still the most commonly used type. These units can display numeric messages of up to twenty numerals such as the caller's telephone number. Paging is accomplished by calling the pager telephone number and then keying in DTMF (Touch-Tone) numerals. Simple alphanumeric pagers, the fourth type, allow numeric messages and extremely limited text messages to be shown on the pager's display. By the early 1990's, paging vendors had added features such as voice mail messaging so that numeric and alphanumeric units would signal when a voice message was left in a subscriber's voice mailbox. The subscriber could then dial the paging vendor's service line (or their pager number), enter a security code, and hear a short voice message from the caller.
By the mid-to-late 1990's, demand grew for more sophisticated pagers and paging services. The pager market became the "wireless messaging" market to emphasize the capabilities of new pagers and the enhanced services. The fifth and sixth kinds of pagers are advanced one-way word message pagers and advanced two-way word message pagers. Advanced one-way word messaging pagers range from units that display a single line of text and store a few hundred characters of messages to units that have a four-line display and 30,000-70,000 character memory capacities. Text messages may be sent from e-mail programs and special web pages.
Motorola's PageWriter 2000™ and the Motorola TimePort are examples of advanced two-way pagers, though the word pager has been replaced by "portable message center". The PageWriter 2000 has its own operating system, infrared PC link and PC communication software, QWERTY keyboard, and address and telephone number storage. The PageWriter features up to 4.5MB of flash memory and 256kB of RAM. Messages can be sent to other two-way pagers, one-way pagers, e-mail addresses, and fax machines. Messages can be received from other two-way pagers, from e-mail programs, and through special web pages. However, the two-way coverage area within which messages can be sent and received is smaller than the one-way area where messages can only be received. Add-on packages allow the pager to be used to display pictures, to play games, and to share software files.
Additional features that may be available on one- and two-way word message pagers include delivery assurance, information services (such as stock quotes, news, and weather), out-of-range indication, alarms, voice mail through the unit, multiple colors, message waiting indicators, etc. Some paging vendors are working on the ability to send text pages using ordinary touch tone telephone tones or to use speech-to-text conversion technologies to change voice messages into text.
Advanced two-way pagers may be enabled by several technologies including CVP (Cellular Voice Paging), narrowband PCS, and broadband PCS. Two-way voice paging with compression provides non real-time transmission of voice messages. "The difference between a standard wireless telephone and a two-way voice pager is that the voice pager stores messages in its internal memory" [Harte, Prokup, and Levine, 1996, p. 382].
Paging can be customer-owned or carrier-subscriber. Customer-owned paging can cover a building, farm, or entire state. Agribusinesses that operate a private paging system must be FCC licensed to operate as LMRS (Land Mobile Radio Services) operators if coverage exceeds localized on-site or in-building coverage. All types of paging service can be provided in a customer-owned system.
Pricing ranges from month-to-month pager rental and service to the fixed cost of purchasing the pager and a monthly service charge. The size of the service area (local, statewide, regional, or national) and the pager's telephone number (local, FX, or regional or national toll-free) are other factors in pricing. Most alphanumeric pagers are priced on a per month basis, with annual discounts. Depending on the paging company, there is a maximum number of messages available for a monthly charge, with a fee per message if the maximum is exceeded. Advanced two-way pagers require special coverage areas for the two-way feature to work. Pager coverage can be variable in rural areas, depending on terrain, vegetation, and sometimes weather or solar conditions. Some pagers also have sky-to-ground, marine-to-shore, and satellite (international) coverage.
Device-device convergence is blurring the line between wireless messaging, hand-held palm PCs, wearable computers, cellular telephones, and PCS telephones. The new generation of pagers can prove very handy to agribusiness employees who are often away from telephones. While digital telephone prices are dropping, the pricing for advanced portable two-way message centers (pagers) may be more attractive per text bit transmitted. Two-way pager rates are often more attractive because they are more likely to be fixed recurring monthly charges instead of the combination of recurring charges and usage-based tolls that predominates in the mobile telephony market.
4.8 The Private Data Networking Market
Private data networking has become a hypercommunication sub-market after arising as an offshoot from the circuit-switched voice network (PSTN). In addition to using dedicated point-to-point circuits such as T-1s (4.7.3) and circuit-switched connections such as ISDN (4.7.4), data networking now relies on packet and cell-switched connections and value-added services that go beyond a physical OSI layer connection. This section highlights several services and technologies that are used for private data networking. Many of these technologies can carry all kinds of hypercommunication traffic from computer data to voice, video, fax, and PSTN. The emphasis is on private computer networks, but many of the technologies mentioned here are fully adaptable to converged hypercommunication use.
Figure 4-48 shows how private data networking fits with the material covered thus far and with the Internet (to be covered in the next section). Together, these seemingly separate elements are converging into hypercommunications. The fact that the elements do not fit neatly into any classification should not cause the reader to worry. Instead, it must be realized that the artificial technical and regulatory separations will be swept away with convergence. However, many features of data communications differ from the voice-orientation of POTS and enhanced telephony. Once private data networking is added to the voice perspective (and both are put together with the Internet), the reader should have a good idea of what hypercommunication services and technologies are. Everything inside the dotted line will be covered in 4.8.
From the private data networking perspective shown in Figure 4-48, there are several levels leading up from the transmission technologies mentioned in 4.3 and 4.4. The first three correspond with the QOS reference levels presented in 4.2.3. First, on the physical equipment level (the QOS local level at the agribusiness), connections of CPE to access level services are specialized for data communications, though enhanced telecommunications CPE may still be used. Second, on the access services level, the only new section (SONET networking) was already touched on in 4.7.3 as a dedicated digital circuit. For this reason, those two arrows are within the dotted line in Figure 4-48, but some circuit-switched connections are used in data networking as well. Wireless technologies and access methods can be useful in private data networking, but since they fit in with the cell-switched, packet-switched, or PSTN categories of transport level services, they get brief coverage in 4.8.5.
The third and fourth levels in Figure 4-48 can be mapped to the OSI reference model with its seven layers of communication between two users of a network. However, application level services rely on every QOS level. Value-added application level services in Figure 4-48 may include the application, presentation, and session layers of OSI, while transport level services tend to include the OSI transport and network layers. Up until now Chapter 4 has focused on physical layer connections and lower data link layer links. The rest of Chapter 4 deals with more complicated networking issues as represented by higher OSI layers. Table 4-24 shows the sub-sections of 4.8.
Two tasks must be performed to illustrate the foundation of private data networking further before the topics in Table 4-24 can be intelligently presented. First, an evolutionary chain from the circuit-switched and point-to-point dedicated circuits (leased lines) to the internetworked world of packet and cell-switched virtual circuits has to be developed. Second, the relationship between physical layer connections (as discussed in 4.7) and higher layer technologies (covered in 4.8 and during 4.9) must be made apparent through three levels of the OSI reference model. These tasks can be done together by considering interactions among markets, generations, and OSI layers.
The evolutionary chain roughly follows the six generations of computer networking outlined in Figure 3-6, discussed in section 3.5. Three important interactions are at work.
The first interaction (shown in Figure 4-49) is among the network generations, their associated technologies, and three markets (advanced telecommunications, private data networking, and the Internet). Data communication needs for the first three generations (time-sharing, centralized, and early LAN peer-to-peer and client-server networks) were mainly met by switched analog and dial-up modem connections that used the traditional telephony market. By the mid 1980's, as peer-to-peer LANs and client-server LANs began to require part-time circuit-switched connections to remote facilities, switched analog services and dial-up modems were still the only technologies required.
Proceeding up the generational life cycle to later WAN & LAN client-server networks, the enhanced telecommunications market provided physical connections such as ISDN-BRI, DDS, and Switched 56 leased lines. Slow, packet-switched technologies such as X.25 (that could work under noisy analog lines) were early private data networking services used at this time. As LANs became more complex and more likely to connect and form WANs, slow-speed digital enhanced telecommunications services such as circuit-switched ISDN-BRI and dedicated DS-0 became popular.
Another packet-switched data networking market offering was SMDS, also accompanied by physical layer T-1, T-3, and ISDN-PRI enhanced telecommunications connections. SMDS was created by the RBOCs to provide packet switched data networking for WANs and early distributed networks. These four technologies are associated with later WANs and early distributed networks. The advent of distributed networks required still more complexity and speed so that fast enhanced telecommunications services such as dedicated T-1s and circuit-switched ISDN-PRI became more popular.
Later distributed data networks became increasingly sophisticated requiring the flexibility offered by enhanced telecommunications smart Ts and new genres of pure data networking packet-switched networks such as frame relay. By the mid 1990's, cheaper value added packet-switched offerings such as frame relay and Intranets began to replace expensive telco leased dedicated lines and point-to-point enhanced technologies.
The internetworking generation has seen the establishment of SONET, ATM, and Internet VPN technologies to take advantage of the lower costs of private cell-switched and public IP-based networks. With the exception of DSL, instead of serving as access methods alone, later private data networking and Internet market technologies were offered with many intermediate OSI layers that go beyond mere connectivity into the realm of value-added services.
The inter-network generation required the introduction of converged networking technologies such as SONET and ATM. The introduction of DSL allowed an inexpensive alternative to expensive dedicated enhanced telecommunication connections to be available. The capstone of the inter-networking generation are VPNs (Virtual Private Networks) that use Internet access and TCP/IP to create private network tunnels over the Internet, covered in more detail in 4.9.7.
It can be especially difficult to separate services from technologies in today's private data networks. Both data and voice travel over WANs, Intranets, and Extranets that use technology combinations such as frame relay over copper, ATM over fiber optics, and T-1 carrier over wireless. The private data networking market relies on physical layer enhanced telecommunications technologies to access a family of value-added services and technologies. Therefore, agribusinesses gain greater hypercommunications flexibility, lower costs, and less responsibility for day-to-day management of data networking.
As each generation progressed, new kinds of connections, DTE, and DCE became needed as data communications spanned ever-larger distances. The separation of data networking from the PSTN-based POTS and enhanced telecommunications came about in two ways. First, as local computer networks began to require connection to regional and international networks, the new WAN applications needed specialized data switches and other carrier DCE. However, high telco prices for the necessary dedicated and circuit-switched leased lines left many businesses unable to afford enhanced telecommunications solutions to their data communications and networking needs. Second, as computer networks became composed of DTE and DCE from many manufacturers and diverse software platforms, interconnection rather than telco standardization was demanded by the data communications market. The complexities of interconnecting different kinds of business networks were often made worse by the technical requirements telcos placed on enhanced telecommunication leased lines.
The second interaction, between the six economic generations of computer networks and the seven levels of the OSI model, underscores the differences between enhanced telecommunications, private data networking, and Internet-based networks. As generations unfold, middle (and upper) level carrier services grow in importance, as illustrated by Table 4-25.
While Table 4-25 represents an enormous simplification, some generalizations can be made. First, data networking services and technologies differ from enhanced telecommunications offerings because middle OSI-level functions were not economically and technically necessary until the advent of distributed WANs, inter-networking, and the Internet. Second, faster speeds and better carrier transport meant that as generations progressed, addressing, hierarchical networking levels, and other concerns required middle OSI-level protocols to provide higher levels of service.
A third and final interaction is between the OSI layers and the services themselves. This interaction is more complicated than Figure 4-50 shows, but is at the very core of the reason that enhanced telecommunications services (though they can be used for data networking) were covered in conjunction with voice communication. As Figure 4-49 related the economic generations of computer networks to data networking technologies, so Figure 4-50 relates the economic generations to the technical OSI layers. One way to begin to understand this is by comparing a T-1 point-to-point dedicated connection to ATM or frame relay.
Figure 4-50 shows the differences in data networking services and technologies within the context of the OSI model. First, notice that the enhanced telecommunication circuits mentioned in 4.7 are shown in the physical layer at the bottom of Figure 4-50 since their role in private data networking is below higher-layer services. At the second OSI layer (the data link layer) packet-switched networking services and technologies such as frame relay are shown. While packet switching relies on physical layer connections for transmission, higher level services (such as ATM) and protocols for error checking and other purposes have become even more necessary as networks get more complex.
Most recently, the private data networking sub-market has become increasingly based on the Internet. Traditional LANs and WANs still make up the bulk of data networks, but thin clients, Intranets, and Extranets are becoming increasingly popular [Koehler et al., 1998]. Some such networks are "power" networks that add voice to the equation, as well as facsimile, paging, and secure Internet access through a firewall.
Both Microsoft and the US government (GSA, FED-STD-1037C, p. I-15) make a distinction between an internet and the Internet. Small i internet is given the familiar definition of a "network of networks" which includes but is not limited to the Internet (capitalized). An internet, under this view, includes any network of networks --from two interconnected LANs in a small business to the Pentagon's worldwide WAN or AOL's content network. Networks, covered in detail in Chapter 3, are "a series of points or nodes interconnected by communications paths. Networks can interconnect with other networks and contain subnetworks." [http://www.whatis.com/network.htm, December 1997]
Intranets are corporate networks that use the TCP/IP protocol stack to operate. Extranets are like Intranets in that they have a corporate scope, but Extranets are private TCP/IP networks that connect a firm to suppliers, major customers (such as wholesalers), and to investors and other stakeholders. Such networks (discussed in 4.9) typically have "closed" rather than "open" architectures.
4.8.1 Networking Equipment
At first glance, private networking seems to be an environment occupied by computers, data transmission, and internal e-mails. In fact, by using private networks, far-flung agribusinesses may actually benefit more than their city cousins because of the cost savings and increased communication that come from seamlessly merging voice, data, and Internet traffic into one unified whole. That whole, in turn, is able to do everything the telephone call and fax of old did for a small fraction of the cost. Additionally, data networking applications support new features such as real-time video, voice and video conferencing, GIS monitoring, real-time weather, and emergency services.
Not every kind of device needed for private data networking can be described here. However, Table 4-26 sketches several important examples that are followed up by the examples of Figures 4-51 and 4-52.
The terminology of private data networking equipment is seldom as clear as Table 4-26 suggests since manufacturers, carriers, and network protocols may define the same device using different terms. WAN switches, routers, and gateways are the only entries not covered previously. Routers are used to interconnect similar parts of a network. Routers may be used in LANs, WANs, and always in IP-based networks such as the Internet. Each router routes packets from one location to another using special routing tables that recognize where a packet needs to go based upon specific network addresses expressed in dotted quad notation such as 10.245.15.9. WAN switches can be physical devices or layer 3 switches (software protocols) that offer high-speed routing from one location to another as part of a WAN. Gateways are specialized server-switch combinations that translate data communications between two or more diverse systems.
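The routing-table lookup just described can be sketched briefly. The prefixes and next-hop names below are invented for illustration; real routers use the same longest-prefix-match rule, where the most specific matching route wins.

```python
import ipaddress

# Toy routing table: (destination network, next hop). A router
# forwards each packet via the most specific (longest-prefix)
# route that matches its destination address. Entries are
# hypothetical, not drawn from the state's RTS network.
routing_table = [
    (ipaddress.ip_network("10.245.0.0/16"), "WAN link to Tallahassee"),
    (ipaddress.ip_network("10.245.15.0/24"), "local LAN segment"),
    (ipaddress.ip_network("0.0.0.0/0"), "default gateway"),
]

def route(destination):
    """Pick the next hop for a destination address (dotted quad)."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if addr in net]
    # Longest prefix (most specific network) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(route("10.245.15.9"))   # most specific /24 route
print(route("10.245.99.1"))   # falls back to the /16 route
print(route("192.0.2.7"))     # no specific route: default gateway
```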
One way to explore the revolutionary new market in networking services and technologies available to Florida agribusiness is by looking at the State of Florida's private data network. Within this system are found many of the typical problems and opportunities faced by private data networks, along with critical security concerns and life and death reliability requirements for law enforcement, emergency, and medical communication.
Florida's RTS (Router Transport Service) network is a state Division of Communications network that offers multi-protocol, routed data communications. It is the State of Florida's private data network. The Division of Communications contracted with BellSouth, GTE, and Sprint to run the network end-to-end (see support and managed services in 4.8.6). This outsourcing of the management of the WAN frees state agencies to avoid worrying about maintaining the data networking equipment that connects the local network of a particular agency to the statewide WAN that carries traffic for all state agencies.
Figure 4-51 shows the nodes in the state's RTS private data network [State of Florida, 1995]. Each node has a collection of sub-networks that serve a specific geographic area. Node locations have been chosen because of their definition as LATA nodes in the PSTN.
It might seem as though the state's enormous network would not be an example that would aid in understanding private data networks for agribusinesses. However, now, even small companies can benefit from the savings and network designs once only affordable for an enormous organization such as the State of Florida. Before the 1986 Florida High Technology and Industry Council report [FHTIC, 1986], state agencies relied on un-integrated data and voice networks. By the early to mid 1990's, migration into an integrated data network was progressing, but voice transport lagged behind. Certainly, the state has the need for a more complex network than any single agribusiness, but one cornerstone of technological changes in hypercommunications is that they are scalable. Small organizations end up being able to afford solutions previously available to large organizations as technology changes rapidly.
Details of the Pensacola node of the state's RTS WAN show routers, hubs, and frame relay devices that might be typical data networking equipment in a business network. Several features are noteworthy about the Pensacola example as shown in Figure 4-52 [State of Florida, 1995]. First, the Pensacola hub has a dedicated point-to-point connection with Tallahassee (the headquarters location). The Pensacola CPE would include a DSU/CSU as an edge device for the T-1 (dedicated leased circuit) to Tallahassee. Some traffic from connections to the Pensacola hub to destinations outside of Pensacola would travel through that T-1 to Tallahassee and (if continuing beyond Tallahassee) to other points as shown in Figure 4-52.
However, frame relay traffic would go to the Pensacola frame relay cloud on the left of Figure 4-52 and then be switched at a higher OSI layer to destinations elsewhere in the state. The Pensacola hub is not only a node on the state WAN, it is a gateway, a WAN switch, and the hub of a MAN (Metropolitan Area Network) serving users in and near Pensacola.
Traffic within the Pensacola area is handled by routers such as DEF5974 which is the edge device for a circuit that goes to the Walton County HRS office. That connection is also a dedicated connection, in this case a 64 kbps DDS circuit. Other routers are local to the state office where the Pensacola hub is located and simply switch LAN traffic from one part of the building to another.
The difference between the T-1 and DDS dedicated connections (which require leasing of expensive point-to-point links between Pensacola and other locations) and the frame relay connection (which uses virtual circuits) is important to understanding private data networking. The point-to-point connections rely on simple physical layer connections, while frame relay is an example of a data link layer networking protocol.
Data link protocols provide four essential services (framing, error detection, retransmission, and media access) for private data networks. Table 4-27 summarizes several current and historical data link protocols that control physical layer data communications.
Most of the private data networking technologies and services to be covered are based on these variations. In the second column, next to the name of each protocol, is the size of the packet, frame, or cell in which data is transmitted. The next column lists the error detection scheme, typically CRC (Cyclic Redundancy Check). The retransmission mechanism shown in the next column refers to how the network controls the information flow so that the buffer of the receiving device does not overflow [Sheldon, 1999].
Media access (also known as Medium Access Control) refers to the rules that govern when devices sharing a resource can transmit so as to avoid collisions (interruptions) that would interfere with communication. Note that the last three types are all routed, which is the hallmark of modern data networking. Finally, efficiency refers to the percent of each transmission that is available for user information instead of the network headers that act as envelopes or addresses.
Source: FitzGerald and Dennis, 1999.
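The efficiency figure described above can be computed directly as the share of each transmission unit devoted to user data rather than header bytes. The sizes below are illustrative round numbers (the 48-byte payload and 5-byte header of an ATM cell, and a generic large frame), not figures taken from Table 4-27.

```python
# Protocol efficiency: the percent of each frame, packet, or cell
# carrying user information instead of header "envelope" bytes.

def efficiency(payload_bytes, header_bytes):
    """Percent of the transmission available for user data."""
    total = payload_bytes + header_bytes
    return 100 * payload_bytes / total

# An ATM cell carries 48 payload bytes behind a 5-byte header,
# while a large frame spreads its header cost over far more data.
print(round(efficiency(48, 5), 1))      # ATM cell: 90.6
print(round(efficiency(1500, 26), 1))   # large frame: 98.3
```

Small cells pay a higher fixed header cost per unit, which is one reason cell-switched services trade some efficiency for predictable switching delay.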
As the rest of 4.8 will show, as networks increase in complexity network layer protocols and services are needed to encapsulate frames into packets or cells. Packets and cells are routed to addresses that may be several layers removed from a simple LAN or a simple physical layer point-to-point connection. These differences are why private data networking and enhanced telecommunications have become two different markets.
4.8.2 Packet-Switched Services
See also Chapter 3.
Packet switching differs from circuit switching because, instead of using a switched dedicated open circuit from one point to another, communication is digitized and divided into discrete chunks called packets. With circuit switching, a particular data file, voice conversation, or other message travels as a unit over a precise route in a network. With packet switching, however, messages are divided into multiple packets. The entire network is less expensive to operate and has more route flexibility since individual packets need not travel the same exact route from sender to receiver, nor are packets necessarily received sequentially. Hence, packet switching requires that the message be reassembled in order at the receiving end before it reaches the destination DTE.
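The segmentation and reassembly just described can be sketched as follows. This is a minimal illustration, assuming a simple sequence-number scheme; the message and packet size are invented, and shuffling stands in for packets arriving out of order over different routes.

```python
import random

# Minimal sketch of packet switching: the message is divided into
# numbered packets that may arrive out of order, and the receiver
# reassembles them by sequence number before delivery to the DTE.

def packetize(message, size):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("futures quotes for orange juice", 8)
random.shuffle(packets)        # packets took different routes
print(reassemble(packets))     # original message restored in order
```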
There are enormous advantages in network efficiency for packet-switched networks compared to circuit-switched networks. All three core network engineering problems (combinatorial, probabilistic, and variational) can be more efficiently solved in packet-switched networks than in circuit-switched networks. Under circuit switching, once a circuit is in use, the circuit is reserved by the two computers or telephones involved. Under packet switching, a greater volume of traffic may be packed into shared virtual circuits.
In a congested network, some packets may arrive late or fail to arrive at all. Each packet switched networking technology handles late or missing packets differently. The size of a packet varies from a few bytes to over 1000. Each packet carries address information in its header, which acts like a postal envelope for carrying information. The proportion of each packet taken up by the header is called overhead.
The Internet is a packet-switched network, but details of the Internet are found in 4.9 because the subject now concentrates on the building blocks of packet-switched private data networks. Table 4-28 shows several examples of other packet-switched network services and technologies. Packet-switched networks require a carrier technology on the physical layer to carry packets since they are not physical connections per se. Hence, packet-switched network services and technologies sit on top of dedicated circuits such as T-1 carriers between the demarcation point at the agribusiness and the service provider's POP. After that point, packets are split up to travel over the frame relay cloud (or other packet network) until they emerge from the distant POP to travel over the remote local loop to reach the destination.
Packet-switched networks are based on a variety of service primitives (mentioned in 3.4.2). For example, packet-switched services may be based on connectionless datagram or connection-oriented virtual circuits. Connection-oriented service primitives require connection establishment delay to set up the virtual circuit, while there is no establishment delay for connectionless services. Each service is classified as reliable or unreliable based on whether the service itself performs error checking or allows a higher OSI layer to perform that function. Reliability in this sense is different from QOS reliability.
X.25 is a de jure CCITT protocol that lets computers on different networks communicate through intermediate DCE over the OSI network layer. X.25 was designed to operate above the OSI data link and physical layers and is especially useful over noisy analog copper lines. X.25 is an older, but still popular service, that uses a reliable service primitive since it provides error checking at every hop end-to-end [Tower, 1999]. X.25 is often criticized because it has high propagation delay, poor interactive capabilities, slow LAN file transfer, and difficult implementation [CyberGate, 1999].
Frame relay (called a fast packet technology since it offers substantially higher data rates than X.25) operates at the data link layer of the OSI reference model. Frame relay can be considered both a service and a technology. As a technology, frame relay originated from X.25 and ISDN standards. Frame relay is "at its core . . . a simple Layer 2 protocol" [Wavetek, Wandel, Goltermann, 1999]. However, as a more complicated service, frame relay can be a layer two point-to-point connection using a level two link layer protocol that is similar to the ISDN D channel's protocol. Like X.25, frame relay also operates with layer three protocols (such as TCP/IP) where network calls are packet-switched over many links of a network. Since frame relay relies on variable-length packets, it is more suited to data and image traffic.
Frame relay is a private networking technology since even though information is sent via a frame relay common carrier (ALEC, IXC, etc.), an agribusiness' frame relay circuits are themselves shared only among facilities belonging to the agribusiness. Frames leave the agribusiness premises and travel over the local copper loop from the edge device (such as a CSU/DSU) to the carrier's access device at the POP. From the FRAD at the POP, multiple paths may be taken over the carrier's transport network to the destination. Together, these multiple paths make up what is often called the frame relay cloud.
Two kinds of circuits are used in frame relay, the PVC (Permanent Virtual Circuit) and the SVC (Switched Virtual Circuit). A PVC is created by the hypercommunication carrier and has the appearance of an always-on dedicated connection. An SVC is a temporary connection set up by a user as necessary. PVCs are logical connections that can share a single physical connection; hence, an agribusiness can have many PVCs sharing one physical layer carrier. For example, six T-1 connections from HQ to six branch offices can be replaced by thirty-six interoffice PVCs (or more) using the same wires as the T-1s, but at a cost as much as 60% lower.
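The economics of this substitution can be sketched in a few lines. The prices and function names below are hypothetical assumptions for illustration only, not actual carrier tariffs; the point is simply that one access circuit plus per-PVC charges can undercut many dedicated lines.

```python
# Hypothetical comparison of dedicated T-1 point-to-point links versus
# frame relay PVCs sharing one physical access line.  All prices are
# illustrative assumptions, not real tariffs.

def monthly_cost_leased(n_links, t1_price):
    """Total monthly cost of n dedicated point-to-point T-1 circuits."""
    return n_links * t1_price

def monthly_cost_frame_relay(n_pvcs, access_price, pvc_price):
    """One physical access circuit plus a charge for each logical PVC."""
    return access_price + n_pvcs * pvc_price

leased = monthly_cost_leased(6, 1500)        # six dedicated T-1s at $1,500 each
fr = monthly_cost_frame_relay(6, 2500, 250)  # one $2,500 access line + six $250 PVCs
savings = 1 - fr / leased
print(leased, fr, round(savings, 2))         # 9000 4000 0.56
```

With these assumed prices, replacing six dedicated T-1s with one access line and six PVCs yields savings of about 56%, consistent with the 40% to 60% range often claimed for frame relay.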
Frame relay circuits are defined and priced in several ways depending on how service configuration parameters are defined and enforced. Five parameters may be used. The first of these is the CIR (Committed Information Rate). The CIR (expressed in bps) is the average throughput rate guaranteed by the carrier per PVC. Typically, CIR equals the bit rate (operational speed of the frame relay CPE edge device) multiplied by the average rate of usage in percent [ACC Corporation, 1998]. There is also a committed burst (Bc) and an excess burst (Be) for frame relay circuits. The Bc is the maximum number of bits (not a rate in bps) that a user can place on the circuit during a particular time period, Tc. The Be is the amount of data put on the network that will be transmitted only if capacity is available. Availability depends on how many other PVCs are transmitting and on the relationship between the CIR and the physical link speed or access rate.
Figure 4-53 shows how these measures are associated. Pricing depends on the ratio of the CIR to the access rate and on Bc, Be, and Tc. The diagonal line shows the rate at which bits enter (are transmitted into) the circuit. In the example, bits are entering the circuit at 40 kbps, Bc is 16 kbit, Be is 16 kbit, and Tc is 16 kbit/8 kbps, or 2 seconds. Once agribusiness CPE (such as a DSU/CSU) begins to send more than 16 kbps onto the line (even though the CIR is 8 kbps), there is a risk that data may be discarded before it reaches the FRAD (Frame Relay Access Device) at the carrier's POP. Once the outgoing rate exceeds 32 kbps, the carrier is likely to discard excess data.
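The policing logic behind the example can be sketched in a few lines of Python. The function name and structure are illustrative assumptions; the parameter relationships (Tc = Bc/CIR, with traffic beyond Bc + Be per interval discarded) follow the description above.

```python
# Illustrative sketch of frame relay traffic policing per interval Tc.
# Parameters from the example: CIR = 8 kbps, Bc = 16 kbit, Be = 16 kbit.

def police_interval(bits_offered, cir_bps, bc_bits, be_bits):
    """Classify the bits offered during one interval Tc = Bc / CIR.

    Returns (tc, committed, discard_eligible, discarded), with the
    last three values in bits.
    """
    tc = bc_bits / cir_bps                                  # measurement interval, seconds
    committed = min(bits_offered, bc_bits)                  # delivered under the CIR guarantee
    excess = min(max(bits_offered - bc_bits, 0), be_bits)   # marked DE; sent if capacity allows
    dropped = max(bits_offered - bc_bits - be_bits, 0)      # beyond Bc + Be: discarded
    return tc, committed, excess, dropped

# A sender transmitting at 40 kbps for one 2-second interval offers 80 kbit.
tc, ok, de, drop = police_interval(80_000, 8_000, 16_000, 16_000)
print(tc, ok, de, drop)  # 2.0 16000 16000 48000
```

For the figure's example, a sender offering 80 kbit in one 2-second interval gets 16 kbit delivered under the CIR guarantee, 16 kbit marked discard-eligible, and 48 kbit discarded.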
Carriers' discard policies vary. Some rigidly enforce DE (Discard Eligible) rules, but agribusinesses will often be able to operate at the full access rate regularly. If packets are discarded, frame relay itself does not notify the sender or receiver that this has happened. Instead, upper layer protocols such as TCP/IP retransmit the missing frames.
Some service providers may offer 0 CIR on a 1.544 Mbps (T-1 access rate) circuit. Unless the agribusiness understands the other configuration parameters, it might be tempted to think it is getting a dedicated frame relay T-1. A better description of frame relay might be "Bc bits within Tc seconds not exceeding a limit of Be bits of burst above Bc within Tc" [Wavetek, Wandel, Goltermann, 1999, p. 5]. While CIR and the access rate are used most often in discussing pricing, the carrier's discard policy and the other parameters may be more important to actual data rates.
Frame relay offers superior performance to X.25 with lower propagation delay, removal of node-to-node error correction, suitability for interactive use, superior LAN file transfer capabilities, and ease of implementation [CyberGate, 1999]. Frame relay is a popular replacement for point-to-point dedicated circuits because, instead of requiring separate physical connections among all WAN nodes, only a single physical connection to the frame relay cloud is needed. The result is dramatic savings, estimated at "40% to 60% over comparable leased line services . . . along with lower costs of equipment and local access" [Paradyne, 1999, p. 3].
Klessig and Tesick (1995) define SMDS (Switched Multi-megabit Data Service) by the terms in the acronym:
Switched: SMDS provides the capabilities for communications between (any) subscribers just like the telephone network. In fact, SMDS even uses telephone numbers to identify subscribers (or at least their data communications equipment).
Multi-megabit: SMDS is intended for the interconnection of LANs and therefore provides bandwidth similar to LANs. The multi-megabit nature of SMDS makes it the first broadband (greater than 2.048 Mbps) public carrier service to be deployed.
Data: SMDS is intended for carrying traffic found on today's LANs. This is generally called data but in fact includes other types of traffic, e.g. images. . . .
Service: When it comes to public carrier data services, there is much confusion between technology and service. SMDS is a service. It is not a technology or a protocol. It will follow the time-honored tradition of public carrier services; the features will stay constant while the technology used in the carrier network is repeatedly improved. For example, SMDS is the first switched service based on ATM technology. [Klessig and Tesick, 1995, p. 1]
SMDS is an unreliable datagram service since it does not check for errors at each node in the data link layer as X.25 does, instead relying on higher level protocols for end-to-end error checking [Tower, 1999]. SMDS is a flexible service since it is compatible with many networking technologies including Novell, Microsoft networking products, and TCP/IP. Furthermore, varieties of DCE are compatible with SMDS such as DSU/CSUs, routers, bridges, and gateways. Frame relay and ATM technologies can be used in conjunction with SMDS.
Since SMDS is deployed primarily by RBOCs, it is an expensive service that is not as popular with businesses as frame relay is or as ATM is expected to become. However, unlike frame relay, SMDS speeds are guaranteed at the full purchased rate, reliability is likely to be superior over well-engineered networks, and there are no discarded packets. On the other hand, the availability of SMDS is limited [Intermedia, 1999].
4.8.3 Cell-Switched Networks: ATM Technology
ATM stands for Asynchronous Transfer Mode, a fast packet cell-switching technology that can handle both voice and data. ATM is called asynchronous because "information streams can be sent independently without a common clock" [Saunders, 1996, p. 161]. ATM uses PVCs and has other characteristics of packet switching, but ATM's delay characteristics are more like circuit switching [Kumar, 1995]. For that reason, ATM is seen as the network technology that will enable hypercommunications convergence [Adams, 1997].
Cell switching is similar to packet switching in that packets are used, but ATM uses fixed-length packets (cells) of 53 bytes. ATM has no data link layer error checking, instead relying on TCP/IP or other higher layer protocols to detect and recover from transmission errors.
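The fixed cell size has a simple arithmetic consequence, sketched below (the 5-byte header / 48-byte payload split is the standard ATM cell layout; the function names are illustrative, and the sketch ignores the extra framing that ATM adaptation layers add). Each cell spends 5 of its 53 bytes on the header, a fixed overhead of roughly 9.4%, and any message must be padded out to a whole number of cells.

```python
# ATM uses fixed 53-byte cells: a 5-byte header plus 48 bytes of payload.
# This sketch computes how many cells a message needs and the "cell tax".
import math

CELL_SIZE = 53
HEADER = 5
PAYLOAD = CELL_SIZE - HEADER  # 48 bytes of user data per cell

def cells_needed(message_bytes):
    """Number of cells required to carry a message (last cell is padded)."""
    return math.ceil(message_bytes / PAYLOAD)

def overhead_fraction():
    """Fraction of each cell consumed by the header."""
    return HEADER / CELL_SIZE

print(cells_needed(1500))             # a typical LAN-frame-sized message -> 32 cells
print(round(overhead_fraction(), 3))  # 0.094, i.e. about 9.4% header overhead
```

The small, predictable cell is what lets ATM switches interleave voice and data with low, bounded delay, at the cost of this fixed overhead on every cell.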
ATM is used to support the transport level for many telephone and data carrier backbones. ATM is also used over dedicated T-1, T-3, and SONET private data networking circuits. While ATM can be used for LANs, its main use is for WANs and even larger backbone networks. Data rates range from 45-52 Mbps to 155 Mbps using SONET or T-3 carriers as physical layers [FitzGerald and Dennis, 1999]. The higher speeds are a result of the small packets (cells), absence of layer two error checking, and ATM's use of fiber optic cables for the physical layer.
Figure 4-54 [Florida Department of Management Services, 1995] shows a large-scale ATM network, the SUNCOM network operated by the State of Florida. The SUNCOM network is used to support a variety of voice and private data networking services. The evolution of SUNCOM is discussed by the State of Florida Department of Management Services in 1995:
The SUNCOM network has evolved over the years from a pool of intrastate WATS lines for voice communications into a truly integrated modern digital network capable of supporting: dedicated services such as agency specific networks; switched voice and data service for instate and nationwide communications; video teleconferencing services for hearings, meetings, and instructional purposes; dedicated radio control circuits for control of remote base stations and transmissions; and data communications enhancements such as SNA transport, protocol conversion, frame relay transport, and router transport services. [Florida Department of Management Services, 1995, p. 106]
The regional hubs shown in Figure 4-54 are often connected to a ring or other network design in the immediate areas of Pensacola, Panama City, Tallahassee, Jacksonville, Gainesville, Daytona Beach, Orlando, Tampa, Fort Myers, West Palm Beach, and Miami. Because this network was born in monopoly days, sprang up under heavy regulation, developed during pre-deregulation, and grew exponentially during deregulation, some hubs such as Panama City and Daytona Beach might be artifacts of regulation due to LATA boundaries.
Even by 1995, (quite early in the development of ATM as a private data network), SUNCOM was becoming a converged network. The Department of Management Services describes SUNCOM in 1995:
The State of Florida SUNCOM Network is primarily an intra-state network comprised of eleven state-of-the-art-digital-switches, interconnected by fiber optic cables providing digital transmission trunking at rates up to 45 Mbps. SUNCOM is the government information highway in Florida. The network provides switched services for long distance voice and data, intercity dedicated circuitry, digital cross-connect services for integrating communications services onto the digital backbone, and nationwide switched network services using interstate access from each of the eleven switches. [Florida Department of Management Services, 1995, p. 95]
SUNCOM also encompasses packet switching, video conferencing, and other hypercommunication services and technologies.
The ATM switch is the building block of an ATM network. All devices must connect to the ATM switch, but ATM switches can themselves be interconnected. Unlike packet switching, cell switching does not have to occur on packet boundaries so that ATM technology can easily handle a mix of real-time voice and video traffic along with less time sensitive data traffic.
ATM allows many hierarchies of service so that traffic can be segregated according to urgency and route, and can be priced accordingly automatically. Typically, packet-switched networks have been most useful for bursty traffic because by using statistical multiplexing on a packet-switched network the number of users served can be maximized for a fixed capacity. Circuit-switched networks have been prescribed as necessary for time sensitive traffic. ATM seamlessly handles both types of traffic making it one of the core technologies of convergence [Alley, Kim, and Atkinson, 1997].
ATM can be used to form a WAN backbone or it can be used for LAN connectivity as well. It is less cost effective as a LAN solution, because as Minoli mentions, "The economics of ATM have to be bifurcated at the LAN and WAN level, because they are different" [Minoli, 1997, p. 9].
ATM requires a variety of specialized equipment, depending on the implementation (WAN, LAN, or both), in addition to the monthly cost of service charged by the carrier. The required CPE includes NICs, router boards, and DSUs; together, these comprise ATM access equipment. ATM switch equipment such as private ATM switches, floor hubs (for ATM LANs), and carrier switching equipment must be deployed. ATM may also require specialized fiber optic transmission equipment, including premises wiring or fiber and local loop fiber such as SONET rings. Also required may be ATM internetworking equipment, video and multimedia equipment, and testing and network management systems and equipment [Minoli, 1997].
Two particular ATM cell formats are important. One is called UNI (User Network Interface). UNI carries information between the user and the ATM network. A second format is known as NNI (Network-Network Interface) which transports information among ATM switches. The UNI style connection could be used by an agribusiness on the access level, while the NNI format is used by carriers on the transport level of a network.
The main reasons for migrating to ATM for WANs include four needs: "more aggregate bandwidth, on-demand bandwidth, efficient use of (expensive) resources," and a "single network management system" [Kumar, 1995, p. 324]. In addition, ATM QOS is easily monitored and controlled. Hierarchical service levels can be deployed quite easily using ATM, with pricing able to change dynamically. ATM may become increasingly affordable for smaller firms and be able to carry the same mixture of voice and data traffic over the local loop that it does over the transport level for carriers.
4.8.4 SONET and Fiber Optic Networking Technology
SONET (pronounced sonnet) is hard to classify. Readers will notice that SONET has been mentioned throughout Chapter 4 beginning with Figure 4-22 where the high data rates it supports dwarfed those of all other transmission technologies, both wireline and wireless. Later, SONET was mentioned concerning fiber optic conduit in 4.3.1. It is important to note that SONET is not necessary to use fiber optic conduit at the local, access, or transport levels of the QOS reference model.
In 4.3.4 SONET was held out as an important example of a fiber optic backbone technology sold as a service to voice and data network providers. In section 4.7.3 (dedicated circuits), SONET's unique route diversity (due to the self-healing ring concept) was mentioned as the reason that communications would not suffer interruption if a fiber optic cable cut occurs. Now, SONET appears as a topic again. This time the discussion centers on SONET private data networking. Since SONET also was mentioned as a carrier technology for ATM in the last section, readers may be confused about the role SONET plays in agribusiness hypercommunications.
The answer is that SONET may play an increasingly important role in agribusiness hypercommunications. Indeed, it is already used by carriers for voice, data, and Internet transport. In the QOS reference model, SONET is both an access level and transport level technology. In the OSI model, SONET can operate from the physical layer up into the transport and session layers. These different functions (and the definition as a connection, technology, and service) are the source of the confusion. SONET may be sold as a glorified physical connection, in conjunction with ATM, as a WAN networking "value-added solution", or as an access and local level solution through which voice, data, and Internet convergence will be achieved.
Table 4-29 shows the SDH (Synchronous Digital Hierarchy) levels that are used to classify data rates of connections in much the way that T-carrier classifications are. Unlike T-carriers (which are defined differently in Europe and Japan than they are in the U.S.), SONET is an international system, making it easier for international traffic exchanges to be made. Indeed, Sheldon encourages readers to "think of SONET as a means to deploy a physical network for a global communications system in much the same way that Ethernet . . . (is) used to deploy a LAN" [Sheldon, 1999, p. 903].
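The hierarchy in Table 4-29 scales linearly from the OC-1 base rate of 51.84 Mbps, so the rates can be generated rather than memorized. A minimal sketch (the function name is illustrative):

```python
# SONET data rates scale linearly with the OC level: OC-n runs at
# n x 51.84 Mbps, where 51.84 Mbps is the OC-1 base rate.

OC1_MBPS = 51.84

def oc_rate_mbps(n):
    """Line rate in Mbps of an OC-n SONET circuit."""
    return n * OC1_MBPS

for level in (1, 3, 12, 48, 192):
    print(f"OC-{level}: {oc_rate_mbps(level):.2f} Mbps")
```

OC-3 works out to 155.52 Mbps and OC-48 to 2488.32 Mbps (2.48 Gbps), matching the backbone rates mentioned elsewhere in this chapter.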
Typically, traffic is mixed within a SONET network. While an OC-1 circuit has somewhat more capacity than 28 T-1s (a T-3, at 43-45 Mbps), much of the difference is consumed by overhead, so in practice an OC-1 carries about a T-3 worth of DS-1 (T-1) traffic alongside other traffic combinations. From the table, it is clear that only the largest international agribusinesses are likely to use SONET, whether for a local connection or for an international network.
Although the speeds listed in Table 4-29 are for large-scale businesses, SONET is important even to much smaller agribusinesses for several reasons. First, agribusinesses need to know if their hypercommunication carrier uses SONET and, if so, to what level. This is important mainly for QOS reasons because SONET provides redundancy if service is disrupted at the transport level. Second, knowledge of the backbone network can help an agribusiness understand why prices for what appears to be the same connection differ so greatly. AT&T's Florida backbone has enormous capacity when compared to the local ISP's T-1 line. That can be important if the agribusiness wants to reduce the chances of a service interruption or entrust its traffic to a carrier that can afford not to overbook circuits.
SONET is important to agribusinesses for a third and final reason: the future. As recently as 1991, the backbone capacity of the entire Internet was a T-3 and SONET was unheard of. Now, OC-192 connectivity is shared by Internet carriers between the U.S. east and west coasts, and AT&T expects to have a national converged backbone that large by year's end. Less than ten years ago, an office with thirty employees might have needed only ten analog telephone lines. Firms of that size would never have been able to afford the astronomical price of a T-1 or needed so much capacity. Now, with fax machines, enhanced telecommunications PBXs, computer networking, e-mail, office Internet use, and the need to host the firm's website, many firms that size would easily be large enough to require at least one T-1. Even if it increased its communications capacity to a size not imagined a decade ago, the same firm would probably spend a lower percentage of total expenses on communications.
Thus, as costs fall and communications needs rise, SONET may become a way to combine voice, data networking, Internet, and business web site traffic onto a single connection. With new services and a combined connection, the monthly bill for one vendor is smaller than the combined total from four separate bills in spite of more communications capacity and use. However, competing technologies such as DSL and fixed wireless may be able to handle the data rate of an OC-1 connection at less than one-fourth the price.
SONET is not the only all-fiber network that agribusinesses can use at the access level. All-fiber PON (Passive Optical Network) or POS (Packet Over SONET) technologies may offer wireline fiber connectivity without agribusinesses having to pay the prices carriers expect to charge for currently envisioned value-added SONET networking. POS systems can obtain 25% better efficiency and use IP rather than ATM or other networking protocols. Whether SONET is used as a connection only, used with ATM as a converged services transport carrier, or sold as a value-added local access and WAN solution depends on how PON/POS end up being marketed to businesses.
Two other factors will limit or encourage SONET use by agribusinesses: first, whether the fiber infrastructure is in place, especially in rural areas; second, whether competing technologies (such as wireless) will provide cheaper solutions.
4.8.5 Wireless WANs
The subject of private data networking is not complete without mentioning the category's fastest growing segment, fixed wireless networking. While fixed wireless technologies were discussed in 4.4.2, the topic of wireless WANs is revisited here to remind readers that private data networks can use airwaves instead of copper, coax, or fiber to access provider POPs. In 4.4, a full discussion was made of wireless transmission technologies, from an overview of electromagnetic spectra (4.4.1) to a discussion of the chief terrestrial (4.4.2) and satellite (4.4.3) wireless technologies.
The growth of wireless WANs comes from an explosion in digital microwave that occurred before 1983. Between 1975 and 1980, US terrestrial microwave-installed mileage tripled from 165,000 to 500,000 [Stone, 1997]. Large private companies established point-to-point microwave data links, while new long distance carriers such as MCI and Sprint bypassed AT&T wireline links, using microwave for transport level paths.
However, the importance of wireless WANs for agribusinesses lies chiefly in their ability to bypass the wireline local loop from the agribusiness premises to the carrier POP. Even for agribusinesses located in areas where high-speed wireline connections are readily obtainable, wireless WAN providers are able to offer competitive rates for point-to-point connections. Indeed, so far, urban Florida is more likely to see the most promising fixed wireless technologies, such as DEMS, LMDS, and WLL, as described in Table 4-11 in 4.4.2.
There are several other advantages of wireless WANs. Even fixed wireless WANs have a degree of mobility that wireline WANs do not because bandwidth upgrades or changes to the network do not require the lengthy installation and circuit engineering that wireline connections do. Typically, wireless WANs offer lower recurring charges than most wireline alternatives, though initial costs for specialized CPE may be higher. Wireless WANs tend to be more scalable and flexible when the agribusiness requires changes. Some organizations find that wireless WANs make sense as redundant solutions for existing wireline connections to be used during downtime or during periods that have unexpectedly high bandwidth requirements.
4.8.6 Support and Managed Services
Support and managed services for private data networking include everything from completely outsourcing the operation and maintenance of a firm's data network to one-time network design consulting. Perhaps the most important support service a carrier, software vendor, or hardware maker can provide is technical support. Technical support for a private data network may include the ability to call, e-mail, and chat with support personnel twenty-four hours per day, seven days a week. Support may also allow customers access to diagnostic tools so they can assess whether problems are on the carrier end or at the agribusiness' premises.
Service providers offer many so-called value-added services so that agribusinesses can concentrate on business rather than on data networking. Managed services include basic network monitoring, end-to-end network management, recovery, security, as well as the purchase and installation of CPE.
Basic network monitoring is a service designed for agribusinesses that are able to watch over their networks with their own IT staff but require prompt notification from the carrier of potential physical layer CPE problems. Higher levels of network monitoring are also available, though typically these simply make it easier for agribusiness personnel to respond to, diagnose, and repair network problems themselves. Typically, network monitoring is offered to businesses with CPE that is compatible with the monitoring methodology.
Network management, the next category of managed services, includes proactive end-to-end service provider management of all CPE and carrier network devices for the agribusiness' private data network. Full end-to-end network management solutions are designed for firms with little or no internal staff to manage WAN connectivity. Customers receive detailed monthly reports showing the success rate of outsourced network efforts and usage statistics. Also included may be free equipment repair and replacement, fault isolation, and traffic analysis.
The ultimate form of end-to-end network management includes the services of an application service provider. This service frees the agribusiness of configuring e-mail, database, Internet software, and other software on their own computers. Instead, application service providers host all applications and data at their locations and the agribusiness requires only thin clients as CPE DTE instead of a complicated local network.
Recovery services include automatic backup of network data so that if there is a system crash, data can be restored avoiding the loss of critical data. Security services use outside vendors to monitor the network security, construct firewalls, and provide security recommendations so that the privacy of a firm's data networking is not compromised. Often, redundant circuits are provided to avoid downtime from system crashes, cable cuts, and the like.
The final aspect of support and managed services concerns the purchase and installation of CPE and DTE. While it is certainly possible to save money by purchasing edge devices and other necessary equipment from vendors other than the carrier, there can be several advantages to buying DCE from the communication service provider. For one thing, purchasing CPE directly from the carrier can help avoid the inevitable finger pointing that can result if the data network performs poorly or not at all. Carriers may blame problems (rightly or wrongly) on equipment incompatibilities located on the agribusiness side of the demarcation point. In many cases, carriers may offer special pricing on recurring charges for customers who purchase equipment and installation services. Additionally, pricing for end-to-end network management of a network based on equipment bought from the carrier may be favorable. End-to-end management may not even be available in some cases unless DCE is bought or leased from the carrier.
4.9 Internet Service and Access Market
Of all hypercommunication sub-markets, the Internet may be the most important. One reason is Bill Gates' point that "Already the Internet's pricing model has changed the notion that communication has to be paid for by time and distance" [Gates, 1995, p. 97]. Another reason for the importance of the Internet is seen through David Crawford's realization that "Internet services are markets for two separate goods, bandwidth and information. The combination of the two is the market for communication" [Crawford, 1997, p. 379].
The Internet is a relatively recent phenomenon as a commercial communications medium. In 1993, Marc Andreessen and others at the University of Illinois developed the first graphical web browser, Mosaic, making use of the IETF HTTP protocol and the HTML markup language. By 1994, Andreessen had founded Netscape. In 1995, Microsoft began its substantial involvement with the Internet. Internet-related business revenues have grown dramatically, with US e-commerce totaling $127 billion in 1999, up from $51 billion in 1998. By the end of 2000, more than $284 billion in e-commerce transactions are expected to occur in the U.S. alone [Nua, 1999].
Figure 4-55 illustrates the exponential growth of Internet hosts, most of them business domains. According to the Internet Software Consortium, in January 2000, the world had 72.4 million hosts. Of those, 24.8 million were .com, 16.8 million .net, 6.1 million .edu, 1.8 million .mil, 1.0 million .org, and 0.8 million .gov [ISC, 2000]. The Internet has become a truly international medium, with almost every country in the world having registered domain names, and only a few sub-Saharan African nations and North Korea without any Internet service at all.
The total worldwide Internet audience is difficult to estimate. One 1999 estimate pegged the number of worldwide users at 200 million people, 80 million of whom were US users [Tower, 1999, Ch.2, Part 2, p. 7]. In June 2000, worldwide Internet users were estimated at 332 million, of whom 147 million were in the US and Canada and 92 million in Europe. There are 75 million Asia-Pacific users and 13 million users in Latin America [Nua, 2000].
The Internet is often called simply a "network of networks", but as was mentioned in 4.8 the definition requires more precision. More specifically, the Internet is a wide-area, packet-switched network with an open architecture that uses TCP/IP. In more detail, here is IBM's definition of the Internet in 1998:
The Internet, sometimes called simply 'the Net', is a worldwide system of computer networks - a network of networks in which users at any one computer can, if they have permission, get information from any other computer (and sometimes talk directly to users at other computers). It was conceived by the Advanced Research Projects Agency (ARPA) of the U.S. government in 1969 and was first known as the ARPAnet. . . . because messages could be routed or rerouted in more than one direction, the network could continue to function even if parts of it were destroyed in the event of a military attack or other disaster. [www.whatis.com/internet.htm, p.1, last updated 10/13/98]
It took ARPA four years (from 1965 to 1969) to launch the original nodes at UCLA, Stanford, UCSB, and the University of Utah [Zakon, 1997].
The next step in the Internet's evolution from a federal defense project (ARPANet) to a commercial network came in 1983, when MILNET was split off from ARPANet to become a US DOD-only intranet. Next, in 1986, NSFNET was created when the NSF (National Science Foundation) funded research in packet networks. Over time, NSF became the main agency responsible for the Internet. The NSF's role was to encourage the exchange of research and ideas through Internet use by universities, scientists, and other professionals who used computers.
However, the Internet backbone outgrew NSFNET, growing from 56 kbps in 1985 to T-1 (1.544 Mbps) in 1988, and then to T-3 (45 Mbps) in 1991. In 1993, NSF created InterNIC to provide domain registration and other services. By 1995, NSFNET became a research-only network and most US traffic was routed through network providers such as MCI and AT&T with backbone data rates of 155 Mbps. The Internet was beginning to become commercial.
Now, the Internet has a few dozen "Tier 1" carriers (also called NSPs, Network Service Providers), many with their own fiber optic backbones reaching up to OC-48 (2.48 Gbps) [Tower, 1999]. Tier 1 carriers have peering agreements with one another and with smaller ISPs to exchange traffic. Much of this exchange occurs at NAPs (Network Access Points) such as MAE-East in Maryland and MAE-West in California. A recent proposal by Governor Bush calls for the establishment of a NAP in Florida. Many NSPs and ISPs already have substantial access points in Tampa, Miami, Orlando, Jacksonville, and Fort Lauderdale to carry traffic to and from Florida and certain points in the Caribbean and Latin America.
Internet services and technologies involve the very heart of hypercommunications. No other hypercommunication markets price services in such lumpy and un-standardized ways or have such diverse, rapidly changing technologies. Before discussing this point further, an overview of the Internet services and technologies to be covered in this section is made in Table 4-30.
With the list of Internet services and technologies in Table 4-30 already somewhat familiar, it would be easy to assume that most businesses are already using these tools to innovate and compete. However, while over half of Florida's 475,000 small businesses use the Internet, most:
use it in a rudimentary way, because only about 10 percent use the Net for regular transactions with customers or suppliers, the applications that create efficiencies, cut costs, increase productivity, and improve service to customers. [Ackerman, 1999, p. 4]
Ackerman and others see the real potential of the Internet as a communications tool for small businesses (like many agribusinesses) with a significant potential for economic growth for businesses that take advantage.
4.9.1 Internet Access and Transport
The first important aspect of the Internet market is access. Internet access refers to the connection or loop an agribusiness firm uses to reach the POP of an Internet provider. As with other forms of hypercommunications, Internet access may be wireless or wireline. Since agribusinesses are assessed two charges for Internet service, it is common to consider Internet access from the point-of-view of both the backbone connection (ISP charge) and the access level connection (carrier charge).
Figure 4-56 depicts the typical situation. The access level connection is over the local loop from the agribusiness location to the ISPs POP. In the simplest case, access is over a POTS telephone line used by a modem for dial-up Internet access. The backbone connection refers to the connection from the ISP to the Internet. The ISP may purchase the backbone connection from an NSP, a larger ISP, or another carrier.
In the simplest case, a dial-up customer pays the telephone company a monthly fee for the telephone line along with any toll charges for the time the modem is connected to the ISP's modem bank. Dial-up customers of the ISP pay the ISP a charge for Internet access, typically a fixed monthly charge for unlimited use. Hence, there are two charges: one to access the ISP and one to access the Internet.
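The two-charge structure described above can be made concrete with a little arithmetic. The sketch below is purely illustrative; all dollar figures and the rate structure are hypothetical, not quotes from any actual carrier or ISP.

```python
def monthly_dialup_cost(line_charge, toll_minutes, toll_rate, isp_charge):
    """Total monthly cost of dial-up Internet access: the carrier charge
    (telephone line plus any toll minutes) plus the separate ISP charge."""
    carrier_charge = line_charge + toll_minutes * toll_rate
    return carrier_charge + isp_charge

# Hypothetical example: a $20 line, 100 toll minutes at $0.05/min to reach
# the ISP's modem bank, and a $19.95 unlimited-use ISP plan.
total = monthly_dialup_cost(20.00, 100, 0.05, 19.95)
```

Note that the toll component disappears only when the ISP's POP is a local call, which, as discussed later in this sub-section, is not always true in rural Florida.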
Agribusinesses may use a number of methods to access the Internet beyond a simple modem over an analog telephone line. Internet access methods are summarized in Table 4-31. The technical details of each kind of connection have been discussed at least once elsewhere as noted in the table. However, all Internet connections connect the agribusiness to the ISP POP. Typically, there is a fixed monthly Internet access charge which (while based on the capacity of the connection from the ISP POP to the agribusiness premises) is really a charge for the share of the ISP-to-Internet (NAP) backbone capacity that the agribusiness uses. Just as dial-up users pay for both a telephone line and an Internet connection, other wireline forms of Internet access may incur a second charge for the access connection, which is purchased from a telco or cableco. At this stage, the discussion does not include access and backbone connections for agribusinesses that host their website on their own premises; that point will be covered in 4.9.3.
Depending on conditions at the ISP, the access connection provider, or the Internet itself, the throughput rate from the agribusiness to the Internet may or may not be equal to the capacity of the access connection at a particular time. For example, if an agribusiness purchases a T-1 dedicated access connection, it might find that it fails to obtain that data rate between the ISP and the Internet NAP. This situation could occur for several reasons.
First, an ISP (like other communication providers) estimates demand for its own backbone connection to the Internet based on the probable customer traffic loads. Just as with line consolidation in the local loop (where perhaps eight to sixteen telco customers out of one hundred can use the telephone at once), consolidation ratios are employed by ISPs.
Table 4-31. Internet access methods

| Method | Sec. | Characteristics | Data rates (symmetric unless noted) |
| --- | --- | --- | --- |
| Dial-up | 4.2.2, 4.3.2 | Slow; connection establishment and failure delays | 33.4 kbps (up), 56 kbps (down) |
| Dedicated 56k | 4.3.2 | Not a modem; DDS | 56 kbps |
| Fractional T-1 (DS-0) | 4.3.2, 4.7.3 | Always-on | 64 kbps increments |
| T-1 (DS-1) | 4.3.2, 4.7.3 | May be available in dynamic, burstable form for Internet; Tier 2 ISPs may not be able to guarantee rate | 1.536 Mbps |
| T-3 (DS-3) | 4.3.4, 4.7.3 | Sold by Tier 1 ISPs only | 45 Mbps |
| Cable modem | 4.3.3, 4.7.3 | Possible overbooking of shared broadband circuit | 6 Mbps (down), 1 Mbps (up); varies widely |
| Frame Relay | 4.8.2 | Full data rate not dedicated | DS-0, Fractional T, DS-1 |
| ATM & SONET | 4.7.3, 4.8.3, 4.8.4 | Becoming available to individual businesses at slower rates; fiber optic cable and DCE needed | OC convention: OC-1 = 51.84 Mbps, OC-256 = 13.271 Gbps; see Table 4-29 |
| ISDN-BRI | 4.7.5 | Pricing can be based on time connected | 128 kbps |
| ISDN-PRI | 4.7.5 | So-called smart Ts that include PSTN, data, and Internet are ISDN-PRI | 1.472 Mbps |
| x-DSL | 4.7.3 | Availability and speed are highly sensitive to distance from CO; see Table 4-21 | ADSL: 640 kbps (up), to 8 Mbps (down); G.Lite: 64-512 kbps (up), 1.5 Mbps (down); HDSL: 1.5 Mbps; VDSL: 1-20 Mbps (up), to 51 Mbps (down) |
| Mobile wireless | 4.4.2, 4.7.5 | Subject to fading, interference, etc. | 2G: 9.6-64 kbps; 3G: 300 kbps-2 Mbps |
| Fixed terrestrial wireless | 4.4.2, 4.8.5 | Varies depending on frequency (see 4.4.2); no need to pay twice as with wireline | MMDS: to 10 Mbps; LMDS: 20-50 Mbps (down), 3-10 Mbps (up); DEMS: to 30 Mbps; WLL: 45-155 Mbps; 2.4 GHz: 156 kbps-11 Mbps |
| Fixed satellite wireless | 4.4.3 | High latency; no need to pay twice as with wireline | 2 Mbps (down), 33.4 kbps (up); higher in future |
In the modem case, the ISP may have from eight to forty customers per actual dial-up access line. Further consolidation occurs as the ISP plans the wholesale backbone capacity needed to reach the Internet from the ISP location. An ISP has to overbook both access traffic and backbone traffic, though the situation becomes complex when various combinations of circuit-switched, dedicated, and even packet-switched connections are considered. Generally, the ISP connection to the Internet (ISP backbone) capacity is most limited because the ISP's profits would evaporate if it assigned every customer their maximum data rate and purchased wholesale bandwidth accordingly.
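The economics of consolidation described above can be sketched numerically. The figures below (customer count, per-customer rate, and a 20:1 consolidation ratio) are hypothetical, chosen only to show why an ISP's wholesale backbone purchase is far smaller than the sum of its customers' maximum rates.

```python
def backbone_capacity_needed(customers, rate_per_customer_kbps, consolidation_ratio):
    """Wholesale backbone capacity (kbps) an ISP might purchase, assuming
    only 1 in `consolidation_ratio` customers is active at peak."""
    peak_active_customers = customers / consolidation_ratio
    return peak_active_customers * rate_per_customer_kbps

# 2,000 dial-up customers at 56 kbps with a 20:1 consolidation ratio
consolidated = backbone_capacity_needed(2000, 56, 20)   # 5,600 kbps
# versus buying every customer full capacity at once:
unconsolidated = 2000 * 56                              # 112,000 kbps
```

Under these assumed numbers the ISP buys one-twentieth of the nominal aggregate demand, which is exactly why profits would evaporate if it had to guarantee every customer the maximum rate simultaneously.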
However, once traffic reaches the Internet, there is no guarantee that the access rate will be matched by the data rate experienced over network segments across the Internet, which users notice as part of their throughput rate. Hence, it can be difficult or impossible for an agribusiness to tell if the ISP has overbooked its circuits or if slow download speeds are simply a result of Internet congestion. Internet congestion in turn can be systematic or due to congestion at the distant site. The traceroute and ping utilities (see 4.9.6) or proprietary software can identify the location of a problem, but results are dependent on the site at the other end.
Another reason that the data rate of a wireline access connection may not equal the rate of an Internet download is not even within the ISP's control, nor can it be blamed on the Internet. Instead, the cause can be in the carrier's access level network. Recall that broadband transmission technologies share bandwidth among users so that if many cable modem users are simultaneously online, congestion in the access loop can slow connections down even if the cableco has ample Internet backbone capacity. Similarly, while carrierband and baseband connections are not shared, they may share ports, switches, or other intermediate DCE that can create bottlenecks. Additionally, noisy line conditions and/or atmospheric interference (mainly in the wireless case) can prevent the access connection from attaining maximum speeds.
Most ISPs offer various levels of access or service plans, often related both to the speed of the connection circuit and to the amount of traffic expected from the ISP to the Internet. Table 4-32 shows several levels of Internet access.
Importantly, some IP & OS related features are not available to certain account levels, at certain ISPs, or may be subject to extra charges. See 4.9.2, 4.9.6, and 4.9.7 for specific examples. The text e-mail level refers to providers who offer dial-up connections or wireless hookups through which text e-mail may be downloaded. Customers do not have Internet access, only the ability to receive and send e-mails. E-mail is checked during transitory sessions where users are connected only as long as it takes to send and receive their e-mail. Users read and compose their e-mail while offline. Typically, such dial-up e-mail only services are free, with costs underwritten by advertisers whose banner messages are displayed while the e-mail software is being used.
The next connection level is called level 0 to denote that it is not actually an Internet connection at all, but a dial-up connection from the user to an OSP network and possibly through that network to the Internet. AOL and other online service providers have their own Intranets (reserved for their customers) from which they receive advertising revenues and control the content and design. Much of the customer's online time is spent on the OSP Intranet, though customers can browse the Internet as well. OSP web browsers are often unable to view material on many websites, though this issue is expected to be resolved by 2001 at AOL through its Netscape acquisition. Level 0 connections may not be able to send and receive certain kinds of e-mail and do not have access to OS & IP applications and services (discussed in 4.9.7).
Level one accounts are the most common kind of Internet access. They feature dial-up access with a dynamically assigned IP address. Depending on the ISP and access plan these may be unlimited access or limited access. Limited access accounts have the right to log on for a certain number of hours per day, week, or month for a minimum monthly charge. Unlimited access accounts do not have a maximum number of hours, but pay a higher monthly charge. However, unlimited access accounts can be "timed out" if the connection is left open for a certain time without any user action. Level one accounts typically have an e-mail account at the ISP's .net or .com domain but do not have the right to store e-mail files on the ISP server. Level one access may also include shell access, limited technical support, and the ability to use news and FTP.
Level two accounts are aimed towards the SOHO (Small Office Home Office) market. While they are not truly dedicated access in the sense of having a continuous connection or dedicated IP address, these dial-up accounts can be configured in several ways that help SOHO users. One feature they include is the ability to connect to the company Intranet, WAN, and e-mail server. Level two accounts may have better technical support, a personal web page, and the right to leave e-mail messages on the ISP server up to a certain number of megabytes.
Level three accounts offer a single dial-up computer the ability to have a dedicated IP address. Therefore, such accounts are able to join VPNs (Virtual Private Networks) so that cyber commuters or small offices can be part of the company network. Level three accounts allow the user to run certain software or utilities compatible with TCP/IP to chat, exchange files, e-mail, exchange video files, or possibly to use Internet telephony. Level three accounts are likely to be able to relay e-mail to the account from addresses on other domains, gain access to remote e-mail accounts, and use the full range of CGI, JAVA, and other scripts for particular applications.
Dial-up access is not necessarily a local call since it depends on how local calls are defined and how extended calling zones are constructed (see Figure 4-36 in 4.6.1). In some rural areas of Florida, dial-up access may be available only as a long-distance call. Even when available in rural areas, Internet service may not be available at 56 kbps speeds because of poor line conditions, large distances from subscribers to COs, and lack of digital lines for ISP to CO connections.
Level four accounts have a dedicated IP address just as level three accounts do, but they also have a dedicated (non dial-up) connection. With this type of account, an agribusiness is connected full-time to the Internet and could host its own website if it had sufficient capacity and the appropriate CPE. Recall that there can be a difference between the capacity of the access connection (from the agribusiness to the ISP's POP) and the backbone bandwidth (from the ISP to the Internet backbone). Most level four accounts entail a "best effort" commitment from the ISP to allow the accountholder a particular capacity from customer premises to the Internet itself.
Level four accounts require a dedicated host computer or computers so the agribusiness must maintain system software and security, upgrade applications, troubleshoot connections within the business, etc. For this reason, many smaller agribusinesses rely on a series of individual dial-up accounts rather than having their own host server. Level four accounts can often drop the monthly cost of Internet access (both ISP and access connection bills) for agribusinesses with many level zero through level three accounts, but there are higher setup costs. If the agribusiness has its own website, VPN, Intranet, or is required by suppliers or customers to have Extranets, a level four account can make a great deal of sense. More discussion about web site hosting (and level four access) is found in 4.9.3.
The final kind of account, level five, is not an ISP-style account at all. Here, the agribusiness acts as its own ISP with a direct connection to the Internet. Such a solution is practical only for larger agribusinesses and will always involve consideration of web hosting and other Internet services. Level five access could be obtained through a Tier 1 ISP via a direct link to facilities owned by that firm.
Internet access may include recurring monthly charges, usage based fees, as well as one-time installation and equipment costs. There will always be one charge for the access connection (often paid to an ILEC or ALEC) and another charge for the Internet access itself. Recurring charges are typically based on the capacity of the access connection (for example a set charge monthly for a T-1 connection). However, some access plans include charges for the amount of traffic transported as well. This becomes a particularly important issue if the agribusiness hosts its own web site. Internet connections require the appropriate CPE and edge devices needed for the access connection as well as routers and specialized servers and software that might be needed.
As was mentioned in 4.8.6, managed services such as managed router services and other features may be arranged for separately with carriers for Internet service just as with private data networking. Outside firms can be hired to manage routers, servers, and security. The need for security policies is returned to in 4.9.8.
Florida has had its share of innovative Internet companies. For example, Cybergate of Broward county (now E-spire) began Florida's first cyber store in 1992 with a Gopher site called CyberStore on the Shore [Resnick and Taylor, 1995, p. 384]. Since then, the ISP market has experienced dramatic growth along with major structural changes. While the market has seen numerous mergers and acquisitions in the last two years, there are still over a thousand ISPs that are active in parts of Florida. The Tier 1 ISPs (also called NSPs) that are active in Florida, along with costs and availability, are covered in Chapter 7.
It should be emphasized that the Internet is a communications medium, not simply an advertising medium for web pages. A single e-mail is far less expensive than a personal sales call, a snail mail business letter, or a local or long-distance telephone call. However, some businesses almost ignore e-mail when communicating with customers.
E-mail's potential importance has been widely discussed:
According to Book Marketing Update in Fairfield, Iowa, as much as 75% of business-to-business correspondence will take place by fax or e-mail by the year 2000. Probably half of all consumer-to-consumer and business-to-consumer correspondence will be through fax or e-mail, predominantly the latter. [Resnick and Taylor, 1995, p. 377]
While this prediction has not yet become true, an enormous amount of e-mail is sent. According to one source, the total volume of U.S. e-mail surpassed one trillion pieces in 1998 [Ackerman, 1999]. By mid-1999, the number of e-mails sent each day in the United States was over 9.4 billion. However, over seven billion of those messages represented spam or unsolicited junk e-mail [Internet Week, February 8, 1999, p. 17]. Other sources such as the firm eMarketer.com suggest that spam comprises fewer than ten percent of all e-mail messages.
E-mail has become more versatile if defined to include interactive messaging and chat:
For many Internet users, electronic mail (e-mail) has practically replaced the Postal Service for short written transactions. Electronic mail is the most widely used application on the 'net. You can also carry on live 'conversations' with other computer users, using IRC (Internet Relay Chat). More recently, Internet telephony hardware and software allows real-time voice conversations. [www.whatis.com/internet.htm, p.1, last updated 10/13/98]
When Queen Elizabeth sent her first e-mail in 1976, most e-mail services were within closed networks. Telenet, the first public packet data service, was opened by BBN in 1974 [Zakon, 1997, p. 3]. It would not be until the 1990s that business e-mail could travel among different packet data services, which by then included the OSPs AOL, CompuServe, and Prodigy. USENET newsgroups were begun in 1980 at Duke University and UNC.
E-mail services and technologies include such things as multiple addressing, auto-responders, LISTSERVs, web forms, and e-mail enhancements such as voice and video. E-mail can be delivered to wireless handheld devices, pagers, fax machines, and mobile telephones in addition to computers. Voice-to-text technologies exist so that e-mail is now accessible from any telephone in the form of a computer device speaking aloud text messages. Voice responses can even be converted into text e-mails via telephone or desktop and sent via the Internet to any e-mail address. Faxes can be converted into e-mail and e-mail converted into faxes.
E-mail reaches an agribusiness from its ISP, OSP, NSP, or e-mail service provider. Level zero through level three Internet access includes e-mail from the ISP, but agribusinesses with level four or five access may use a specialized e-mail service provider or have their own on-site e-mail server. Whichever is the case, it is important to configure the e-mail system carefully. Unless level three or higher Internet access has been purchased the agribusiness may only get a single e-mail address per Internet account. Some ISPs will allow more than that number, some will not.
This sub-section covers four major issues regarding the use of e-mail in agribusinesses. First, one important use of e-mail within a business is as a sales or communication tool. If the accounting costs of a message are considered, sending an e-mail is far less expensive than sending a fax, writing a business letter, mailing out sales literature, or making a long-distance call. However, there are social and economic implications to be kept in mind as well that currently limit e-mail's usefulness to agribusinesses.
E-mail has several characteristics that make it an excellent communications tool. One of these is its near-instantaneous delivery. This can be an excellent feature, as for example when communicating with overseas customers or suppliers at a tiny fraction of the cost of an overseas telephone call. However, because delivery is almost instantaneous, e-mail also creates the expectation that messages sent to an agribusiness will be answered quickly. It looks particularly unprofessional to computer-savvy customers if an agribusiness answers their e-mails so slowly that a snail mail (regular postal) letter would have obtained faster results. It is common for e-mails to companies to go completely unanswered. This is particularly unforgivable when no expense has been spared to connect the firm to the outside world via state-of-the-art networking technologies and e-mail inquiries arrive daily from an elaborately designed web site. The agribusiness must make sure to change job descriptions and responsibilities so that someone in the firm is responsible for answering e-mail.
A second issue about e-mail concerns some of the dangers (real or imagined) of e-mail use. E-mail has been criticized as to its usefulness in business because of characteristics such as being impersonal, unreliable, quasi-anonymous, and without legal standing. There is some truth to each accusation, but agribusinesses can adopt prudent strategies for dealing with each one as technology improves e-mail further. E-mail does not have to be impersonal if the agribusiness personalizes responses. The reliability of e-mail messages is not perfect, but by shopping around for a high-quality ISP or mail server, many problems can be prevented. For greater security, e-mail can be encrypted and return receipt features can give some idea of whether messages that are sent out are received.
While e-mail is a mission-critical application for many businesses today, it can still be unreliable. Of particular importance is the ability to retrieve e-mail if a mail server or access connection goes down. There may be no way (other than possibly through exhaustive research) to tell whether a particular ISP's e-mail handling is likely to fail. However, e-mail destined for the agribusiness should, as a matter of course, be removed from the e-mail server automatically and placed on a local machine or LAN server at the agribusiness location. If there is a connection problem, a redundant method of e-mail access (such as a dial-up line) should be ready and available so that personal and organizational e-mail can still be downloaded. The importance of e-mail suggests that each agribusiness automatically back up e-mail messages as well, so that if a machine crashes or an employee leaves there will be a record of correspondence left behind.
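The archive-locally practice recommended above can be sketched as follows. The function works against any POP3-style mailbox object (the standard Python library's poplib.POP3 provides exactly these stat/retr/dele/quit operations); the mail host named in the usage note is hypothetical.

```python
def archive_mailbox(mailbox, archive):
    """Copy every message off the mail server into `archive` (a local
    list or other store), delete each from the server only after the
    local copy is made, and return the number of messages archived."""
    count, _size = mailbox.stat()
    for i in range(1, count + 1):
        _resp, lines, _octets = mailbox.retr(i)
        archive.append(b"\r\n".join(lines))   # local copy first
        mailbox.dele(i)                        # then remove from server
    mailbox.quit()
    return count
```

In practice the mailbox might be created with `poplib.POP3("mail.example.com")` followed by `user()` and `pass_()` calls; the host name here is invented for illustration. Keeping the archive on a LAN server also gives the business the correspondence record discussed above.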
It is true that the identity of a person is not established absolutely by an e-mail address. Of the approximately 80 million Americans who have ever used e-mail, each has an average of almost three addresses. Free e-mail services such as Hotmail, Juno, and Yahoo! Mail are notorious for making it easy to camouflage customer identities. It is certainly possible that competitors can pose as customers in e-mails or other subterfuges may be used. That is why proper Internet use policies (to be discussed more in 4.9.7) are important. Employees should understand that even when they are sure of the identity of an e-mail correspondent unencrypted mail could still be intercepted. When unsure of an identity, discretion should be used.
The legal standing of e-mail comprises at least two issues. First, as the Clinton administration found, even deleted e-mails can return to haunt their authors. E-mails that detail illegal activities, sexual harassment, racial discrimination, or other practices can be used as evidence against the company [Overly, 1999]. A second issue concerns whether e-mail carries the legal weight that a letter does. New digital signature technologies can be used to give an e-mail signature the weight of a paper signature for many domestic business transactions. However, an attorney should be consulted for specific advice.
With these caveats in mind, e-mail does have many uses, especially if the audience is computer literate, responding from the Internet, and employees are groomed to use it regularly. According to empirical studies by Kraut and Attewell, "employees who used e-mail extensively, net of their communications over other media, were better informed about their company and more committed to its management's goals" [Kraut and Attewell, 1997, p. 323]. Many Florida agribusinesses report interstate and international inquiries and orders because of the low cost of e-mail communication and its synergy with their web sites.
A third issue concerns the uses of internal e-mail for the agribusiness. Employees can be kept advised of day-to-day happenings, new products, and other company news. A single e-mail can be automatically sent to every employee in the company with little effort. This can be a double-edged sword because if employees are bombarded with too many in-house messages of various degrees of importance to them, they may end up not reading any "internal spam" [Anderson, et al., 1997]. One firm found that as many as twenty separate employee e-mailings were indiscriminately sent out each day resulting in less employee knowledge rather than more since over thirty minutes of employee time was spent in wading through the messages. That practice was replaced by a daily e-mail list (with items targeted to specific groups) so that at most each employee got a single e-mail with items personalized to their job description. Not surprisingly, the new daily newsletter e-mail achieved better readership.
A fourth thing for agribusinesses to realize about e-mail is that e-mail includes many kinds of message services. If an agribusiness hosts its own e-mail domain (has level four or level five access), its e-mail options will depend on the software included with the e-mail server. If the agribusiness has a series of individual dial-up accounts, it may have to pay extra for certain e-mail services if they are available at all. E-mail addresses fall into two categories. The first type are user name addresses that correspond with an Internet (or private data network) user's name. The second type is alias addressing. Alias addressing allows e-mails addressed to virtual addresses such as "firstname.lastname@example.org" or "email@example.com" to be received by the appropriate person in the organization.
Other e-mail services include auto-responders, LISTSERVs or mailing lists, and even specialized programs that filter out spam or automatically route e-mails. Auto-responders are programs that automatically respond to any e-mail message sent to a particular address. The typical form of an auto-response is to acknowledge receipt of the original message, along with a promise to respond quickly or a list of contact e-mail addresses in the company. Unless accompanied with sophisticated technologies capable of analyzing the subject of a message, auto-responders are meant only to acknowledge the receipt of an e-mail and assure a correspondent that a personalized response is on the way. Some organizations use auto-responders to send form replies to all e-mails, forcing the correspondent to e-mail a response before reaching a human being. This can annoy customers.
Auto-responders are also used with webpages and HTML forms to send customers more information automatically. For example, if after having read a website description of products A, B, and C, a prospect fills out a form requesting detailed information on product A, an auto-responder can send a form letter out with the information needed. This action is most effective when a copy of the inquiry is auto-forwarded to the sales department so a salesperson can contact the prospect later to close the sale or make sure that the information was satisfactory. It is customary to find out from the prospect on the web site form what the desired form of response (telephone, e-mail, letter, personal visit) from the company (beyond sending the information) would be. That way, prospects are not hounded by unwanted telephone pitches or spam.
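The web-form flow just described (acknowledge the prospect, forward a copy to sales, and record the preferred contact method) can be sketched as a single function. All addresses, product names, and message wording below are hypothetical.

```python
def handle_inquiry(prospect_email, product, preferred_contact):
    """Return two (recipient, body) pairs for a web-form inquiry: an
    automatic reply to the prospect and a forwarded copy for sales."""
    auto_reply = (
        prospect_email,
        f"Thank you for requesting information on {product}. "
        f"The details are attached; per your request, we will follow up "
        f"only by {preferred_contact}.",
    )
    forwarded_copy = (
        "sales@example.com",   # hypothetical sales mailbox
        f"Web inquiry from {prospect_email} about {product}; "
        f"preferred contact method: {preferred_contact}.",
    )
    return auto_reply, forwarded_copy
```

Capturing the preferred contact method in both messages is what lets the salesperson close the loop without subjecting the prospect to unwanted telephone pitches or spam.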
Another kind of e-mail service is spam filtering and correspondence filtering software. Many e-mail packages offer limited spam filters, and special packages can be obtained that filter spam out as well. Unfortunately, these packages can filter out desired messages too. Correspondence filtering software takes messages that come into main e-mailboxes for a firm and automatically attempts to route them to the appropriate person in the organization. Such packages are expensive enough ($50 to $75 thousand) that they are priced beyond the budgets of most agribusinesses.
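At its simplest, correspondence routing of the kind those expensive packages perform is keyword matching against the message. The toy router below is only a sketch of the idea (commercial products are far more sophisticated); the keywords and department names are invented.

```python
# Hypothetical keyword-to-department routing table.
ROUTES = {
    "invoice": "accounting",
    "order": "sales",
    "price": "sales",
    "delivery": "shipping",
}

def route_message(subject, default="frontdesk"):
    """Route an incoming message to a department based on the first
    routing keyword found in its subject line."""
    lowered = subject.lower()
    for keyword, department in ROUTES.items():
        if keyword in lowered:
            return department
    return default

route_message("Order for 40 crates of celery")   # matches "order"
route_message("General question")                # falls through to default
```

The weakness the text notes about spam filters applies here too: a crude keyword match can misroute (or discard) legitimate correspondence, which is part of why the commercial versions cost so much.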
4.9.3 Domain Names, IP Addresses, and Web Site Hosting
While the Internet has many other communications uses for agribusiness, it is often erroneously treated as a synonym for the World Wide Web (WWW) and specifically for business web sites.
The most widely used part of the Internet is the World Wide Web (often abbreviated 'WWW' or called 'the Web').
Using the Web, you have access to millions of pages of information. Web 'surfing' is done with a Web browser, the most popular of which are Netscape Navigator and Microsoft Internet Explorer. The appearance of a particular Web site may vary slightly depending on the browser you use. Also, later versions of a particular browser are able to render more 'bells and whistles' such as animation, virtual reality, sound, and music files, than earlier versions. [www.whatis.com/internet.htm, p.1, last updated 10/13/98]
In March 1995, web traffic surpassed FTP (File Transfer Protocol) requests as the main source of Internet traffic. Now, about 75 percent of Internet traffic is WWW [Ebbers, 1999]. The importance of the web site to agribusiness communications, marketing, and advertising strategies lies in several related areas.
First, the agribusiness must choose domain names and make a determination as to where to host the web site(s) associated with each domain name. Second, based on the Internet access decision, e-mail choices, and web hosting decision, there will be one or more IP addresses assigned to the organization. These two topics are covered in this sub-section.
However, the discussion of the importance of the web site to an agribusiness does not end there. Web site design and site maintenance are topics in 4.9.4, while web site promotion and measurement are mentioned in 4.9.5. Of course, no single part of the Internet area is entirely divorced from the web site since Internet access (4.9.1) and e-mail (4.9.2) decisions are likely to involve the web site as well. E-commerce and customer service (4.9.8), security and privacy policies (4.9.8), and broadcast content production (4.9.10) also stem from the web strategy.
Given the importance of web sites to overall success on the Internet, it is surprising how little attention is paid to the cheapest element (some would say the most important) of a business Internet strategy, the choice of domain names. Every address on the WWW (it will be remembered from 4.5.2) is reached using HTTP (HyperText Transfer Protocol), while the domain name system translates each recognizable domain name into an IP address. Under Internet Protocol version 4, an IP address is "a set of four 'octets', or 8 bit numbers from 0 to 255, separated by periods, that define a unique host on the Internet" [Israel, 2000, p. 5]. Internet hosts include CPE edge devices, DTE, and a variety of routers and intermediate DCE at the local, access, and transport levels. Instead of having to type the quad notation http://188.8.131.52 to reach the A. Duda and Sons corporate website, all an Internet viewer needs to do is type www.duda.com in the location bar of a web browser.
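The quad notation quoted above is simply a readable spelling of one 32-bit number, four 8-bit octets at a time. The following sketch shows the conversion in both directions (the address used in the example is a private illustration value, not any host mentioned in the text).

```python
def quad_to_int(address):
    """Convert dotted-quad notation ('A.B.C.D') to the underlying
    32-bit integer, validating that each octet is in 0-255."""
    value = 0
    for octet in address.split("."):
        n = int(octet)
        if not 0 <= n <= 255:
            raise ValueError("octet out of range: %s" % octet)
        value = (value << 8) | n
    return value

def int_to_quad(value):
    """Convert a 32-bit integer back to dotted-quad notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

quad_to_int("192.168.1.1")      # one unique 32-bit host number
int_to_quad(quad_to_int("192.168.1.1"))   # round-trips to the same quad
```

The 8-bit-per-octet packing is also why there are 2^32 (about 4.3 billion) version 4 addresses in total, the limit behind the shortage discussed next.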
Before discussing domain name strategies, it is necessary to describe the uses of IP addresses in general and apply them to agribusiness needs. A shortage of addresses, along with other concerns, has led to the development of newer 128-bit IP addresses in IPng (IP next generation, or version 6). Because its addresses are 128 bits rather than 32, the new protocol has 2^96 (roughly 4 billion times 4 billion times 4 billion) times as many addresses as version 4, so it is far less likely that addresses will run short. IPng is expected to relieve the almost exhausted supply of certain kinds of IP addresses in the near future. For agribusinesses now, version 4 addressing is the rule.
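The address-space arithmetic behind that comparison is easy to verify directly from the bit widths:

```python
# Version 4 addresses are 32 bits (four 8-bit octets); IPng (version 6)
# addresses are 128 bits.
ipv4_space = 2 ** 32      # about 4.3 billion addresses
ipv6_space = 2 ** 128
ratio = ipv6_space // ipv4_space    # how many times larger IPng's space is
```

The ratio works out to 2^96, an almost incomprehensibly large multiple of the version 4 space.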
Agribusinesses may use IP addresses for websites and (if they have level four or level five access) for some or all CPE (both DTE such as computers and DCE such as routers or specialized servers) on their own premises. IP addresses are assigned by classes as shown in Table 4-33.
If the agribusiness has dial-up Internet access with IP addresses for computers that access the Internet assigned dynamically, the only IP address it owns is the web site address. Often, a smaller agribusiness will have but one or two dedicated IP addresses (rented from an ISP) with level three or higher Internet access. With level four or five access, the agribusiness may have one or more class C address blocks for routers, e-mail servers, gateways, and even for individual workstations in some cases. The agribusiness' addresses will be of the quad notation form C.network.network.local, where C indicates a class C prefix, network denotes network levels (such as NSP and ISP), and local indicates machines on its own network. Class B addresses are typically assigned to ISPs or large organizations such as state governments or universities; their quad notation takes the form B.network.local.local. Class A addresses are typically held by national governments, the military, or NSPs (Tier 1 ISPs). Class D (multicast) addresses are discussed in 4.9.10.
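Under classful version 4 addressing, the class of an address can be read directly from its first octet. The sketch below assumes the standard classful first-octet ranges (A below 128, B to 191, C to 223, D to 239, E above); the specific layout of Table 4-33 is not reproduced here.

```python
def address_class(address):
    """Return the classful (IPv4) address class, judged from the
    first octet of a dotted-quad address."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "A"
    if first_octet < 192:
        return "B"
    if first_octet < 224:
        return "C"
    if first_octet < 240:
        return "D"   # multicast (see 4.9.10)
    return "E"       # reserved/experimental

address_class("10.1.2.3")      # a class A block holder's address
address_class("192.168.1.1")   # falls in the class C range
```

This is why an agribusiness's class C addresses always begin with a first octet between 192 and 223, while its ISP's class B block begins with one between 128 and 191.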
To be connected to the Internet, a particular computer or device at an agribusiness is often behind a proxy server, DHCP server, or firewall, so that it uses the last entry in Table 4-33, an un-routed IP address. Such un-routed IP addresses obtain pseudo-anonymous Internet access in several ways. First, they may use DHCP (Dynamic Host Configuration Protocol), where workstations share a pool of locally managed IP addresses. When a DHCP machine communicates with the Internet, a DHCP server temporarily assigns it an available IP address from the set exclusively available to the firm. Idle machines do not take up IP addresses if DHCP is used, thereby conserving an organization's class C addresses. Proxy servers are another way for machines with un-routed IP addresses to communicate over the Internet. Proxy servers operate on a small block of IP addresses, and can act as firewalls, news, mail, and web servers while providing proxy connections for hundreds or thousands of internal users [Israel, 2000].
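The address-conserving behavior of DHCP described above can be sketched as a toy lease pool; the class, machine names, and addresses below are invented for illustration:

```python
class LeasePool:
    """Toy DHCP-style lease pool: a small set of routable addresses
    is shared among many workstations, and an address is held only
    while a machine is active, conserving the class C block."""

    def __init__(self, addresses):
        self.free = list(addresses)
        self.leases = {}                 # machine name -> address

    def request(self, machine):
        """Lease an address, or return None if the pool is empty."""
        if machine not in self.leases and self.free:
            self.leases[machine] = self.free.pop(0)
        return self.leases.get(machine)

    def release(self, machine):
        """Return an idle machine's address to the pool."""
        address = self.leases.pop(machine, None)
        if address:
            self.free.append(address)

pool = LeasePool(["192.0.2.10", "192.0.2.11"])
print(pool.request("workstation-1"))     # 192.0.2.10
pool.release("workstation-1")            # address goes back to the pool
```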
Returning to the subject of website addresses, agribusinesses can have multiple domain names pointing to the same web site. In this way, when web viewers enter variations of the domain name, common misspellings, or product names they automatically reach the intended site. Often, names are held in reserve so that the company will have its corporate name, trade names, product names, and the names of associated ventures protected. For example, Duda and Sons owns two other names, viera.com for the residential community it is developing (Viera, Florida) and redicel.com for Duda's Redicel brand of processed celery. Currently, Duda operates both the main Duda site and a separate site for Viera, but the redicel.com domain name is parked and registered.
Registering a web domain name makes sense for $30 to $50 per year (and as low as $200 for 10 years). This is a small price to pay to ensure that brand names for future web sites remain available for the agribusiness to use. Domain name parking is often sold as a monthly service by ISPs, who may charge up to $30 per month to guarantee that a particular reserved name will not be stolen if it is not used. Generally, the parking of a domain name with an ISP is not necessary to retain it.
Complicated legal questions are involved when individuals buy domain names in hopes of selling them to companies that have legitimate claim via trademarks or other legal rights to the name. This practice is known as cybersquatting, and legislation has recently been enacted to prevent the purchase of protected domain names solely for resale (often at extremely high prices) to the party with legal right to the name. Before the law, Harley-Davidson motorcycles purchased the harleydavidson.com domain from a Michigan dealer for an undisclosed sum. Until then, the main corporate website was found at harley-davidson.com. Domain brokers can do an excellent (and legal) business trading in non-trademarked domain names. Recently, egg.com was sold by a U.S. broker to Prudential for over $100,000. If an agribusiness finds that its legal operating name is being "held hostage" by an individual who registered the name years ago in hopes of selling it to the agribusiness, the agribusiness may be able to take legal action.
In other cases, another organization may have been making legitimate use of the name, and an agreement may be reached that is satisfactory to both parties. Archer Daniels Midland (ADM) uses admworld.com as its main corporate site but allows adm.com (which it also owns) to be used by the American Direct Mail Corporation. ADM (the agribusiness) also owns the following domain names: adm.com, ad-m.com, admhold.com, admweb.com, admgrain.com, novasoy.com, admhealth.com, admfood.com, admfeed.com, marthagoochpasta.com, d-alpha.com, nutrasoy.com, and many others. Multiple domain names are frequently held in reserve so that various divisions of the firm may eventually have their own web site or e-commerce presence. Small agribusinesses can easily afford the price of registering more than one domain name. It is far easier to protect a name by registering it first than to attempt to deal with a cybersquatter through the courts or lose a profitable non-protectable domain name idea to a competitor.
A popular tactic is to register domains based on non-trademarked keywords that consumers use themselves when searching the web. This approach can often help a web site become more easily found by viewers and more easily promoted to search engines as well. Some agribusiness examples include: grocer.com (a site selling links to online grocers), nursery.com (the website of Minnesota's Bailey nurseries), wool.com (Woolmark), lettuce.com (Nunes Company), beef.com (Big Sky Beef Co-operative of Montana), florida-oranges.com and floridaoranges.com (Bob Roth's New River Groves), florida-juice.com (Key West, Inc. of Fort Myers), and floridajuice.com (Department of Citrus).
From these examples, it can be seen that there are many possible combinations of domain names. Formerly, domain names were restricted to 26 alphanumeric characters or fewer, but recently domains of up to 63 characters have been registered. Domain names cannot consist of numbers alone, nor may they contain any non-alphanumeric character other than a dash. Long domain names may use multiple dashes to make them easier to enter, but it is generally better to keep the name short. In some cases, extremely long domain names cannot be reached from all computers on the Internet and may not be searchable by certain search engines.
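The naming rules just listed can be expressed as a simple check. This sketch covers only the rules stated above and ignores further real-world restrictions (such as a ban on leading or trailing dashes):

```python
import re

def valid_domain_label(name: str) -> bool:
    """Apply the rules stated in the text: 63 characters at most,
    only letters, digits, and dashes, and not numbers alone."""
    if not 1 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[A-Za-z0-9-]+", name):
        return False                 # e.g. underscores are rejected
    if name.isdigit():               # numbers alone are not allowed
        return False
    return True

print(valid_domain_label("florida-oranges"))   # True
print(valid_domain_label("12345"))             # False: digits only
print(valid_domain_label("a" * 64))            # False: too long
```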
Domain names are divided into a top-level domain (TLD) and one or more sub-levels called second- and third-level domains. For example, in the site address http://www.farmphoto.com, .com is the TLD, farmphoto is the second-level domain, and www is the third level of the domain name. The most frequently seen TLDs are shown in Table 4-34.
Source: ISC Domain Survey, January 2000, www.isc.org.
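Reading a hostname into its levels works right to left, as the farmphoto.com example shows; a small sketch:

```python
def domain_levels(hostname: str) -> dict:
    """Read a hostname's labels right to left: level 1 is the TLD,
    level 2 the second-level domain, and so on."""
    labels = hostname.lower().split(".")
    return {level: label
            for level, label in enumerate(reversed(labels), start=1)}

print(domain_levels("www.farmphoto.com"))
# {1: 'com', 2: 'farmphoto', 3: 'www'}
```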
Most agribusinesses try to choose simple names such as businessname.com or business-name.com, but since there are already over 24 million .com TLD addresses, it is often necessary to select a different TLD. Traditional restrictions on .net and .org registrations have been lifted, so U.S. agribusinesses may now choose businessname.net or businessname.org. However, many descriptive second-level domain names such as beef have already been taken. For example, beef.com is owned by the Big Sky co-operative as mentioned above, beef.org is the National Cattlemen's Association site, and beef.net is owned by Web Magic of Pasadena, California.
An agribusiness wishing to use beef as its second level domain would still be able to register as beef.cc, beef.to, or beef.tv or use other TLDs from countries that are willing to sell to foreign firms. A cottage industry has sprung up in several developing nations selling domain names, but do not expect that beef.jp (Japan) or beef.ca (Canada) would be so easily purchased since each country determines its own policy. The .us TLD could be used, but only if the agribusiness (whether located in Ona, Florida or not) registered as beef.ona.fl.us or used some other location.
Another option is to use the desired second-level domain name as a third-level domain name through arrangement with an ISP or web hosting company. To return to the beef example, the result would be beef.ispname.net or beef.hostco.com. Once the domain name(s) have been chosen, they must be registered with an approved domain registry such as www.networksolutions.com or through the ISP or web hosting company. With the appropriate domain name(s) reserved, the next decision is how to host the web site.
The decision of where to host an agribusiness' web site is a classic make (self-host) or buy (use a hosting company) decision with a third option, collocation. Figure 4-57 illustrates three main options. Dotted lines show connections over which viewer-website traffic flows while bold lines show connections that carry normal agribusiness Internet traffic.
In each case, to reach a website, the end-user (also called a viewer or visitor) types a URL (domain name) in their browser's location field. The end-user's ISP finds the IP address associated with the URL from a primary DNS (Domain Name Server) located on the Internet. Next, the IP number given by the DNS server is used to request the default HTML web page (typically called the index.html file) the IP address points to. That file is sent (in packets) back to the IP address associated with the end user.
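The lookup sequence just described can be modeled with a toy DNS table; the duda.com address repeats the example used earlier in this section, and the rest is illustrative:

```python
# A toy DNS table standing in for a primary DNS server on the Internet.
DNS_TABLE = {"www.duda.com": "188.8.131.52"}

def resolve(url: str) -> str:
    """Strip the scheme and path, then look the hostname up as the
    end-user's ISP would against a primary DNS server."""
    hostname = url.removeprefix("http://").split("/")[0]
    return DNS_TABLE[hostname]

def request_default_page(url: str) -> str:
    """The browser then asks the resolved address for the default page."""
    return f"GET /index.html from {resolve(url)}"

print(request_default_page("http://www.duda.com"))
```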
Dotted arrows show how web site traffic traverses the Internet under the three hosting options. Note that at least seven points (called hops) handle a WWW request for a web site hosted at the agribusiness. Each of these represents a possible congestion bottleneck that can slow the transfer down, and the agribusiness has control of at best two of them.
The first potential bottleneck is the viewer's own access-level connection to the end-user's ISP. If the access line is noisy (such as a rural POTS line) or overbooked (such as a broadband cable connection), the website may not load quickly. The next potential bottleneck is at the ISP's POP, where routers or other equipment can also slow the transfer. A third is the viewer's ISP backbone connection to the Internet. If this connection is congested, again the website contents may display slowly on the end user's machine. It is often the case that the ISP passes traffic through several hops before it reaches a NAP (Network Access Point) and actually enters the Internet, since many ISPs resell capacity of larger ISPs or NSPs (Tier 1 providers). These intermediate hops can further delay transmission.
Once the message reaches the Internet, it is transferred from one NSP to another through peering agreements. A number of intermediate stops again occur, with each hop representing another possible slowdown. Then, at a NAP serving the NSP which the agribusiness' ISP uses, the traffic leaves the Internet, possibly traveling through an additional number of potentially congested hops before reaching the agribusiness's ISP POP. Next, the web page request travels over the agribusiness's Internet access connection (or specialized web site access connection) to reach the web site if hosted at the agribusiness's location. It is up to the agribusiness to select an access connection for website traffic that is capable of handling traffic loads.
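The chain-of-hops argument can be made concrete with a toy delay model; the hop names follow the discussion above, while the millisecond figures are invented:

```python
# Toy one-way delay model for the multi-hop path described above.
# Hop names follow the text; the millisecond figures are invented.
hops_ms = {
    "viewer access line": 120,       # e.g. a noisy rural POTS line
    "viewer ISP POP": 5,
    "viewer ISP backbone": 15,
    "NAP / NSP peering": 10,
    "agribusiness ISP POP": 5,
    "agribusiness access line": 40,
    "web server": 8,
}
total_ms = sum(hops_ms.values())               # delays accumulate
bottleneck = max(hops_ms, key=hops_ms.get)     # the slowest link dominates
print(f"total one-way delay: {total_ms} ms; worst hop: {bottleneck}")
```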
It can be easy to over- or under-estimate traffic levels. One celebrated case occurred when the house.gov site issued the famous Starr report concerning President Clinton. The volume of requests (over 22 million users between 11 a.m. and 4 p.m. on September 11, 1998) caused msnbc.com, cnn.com, yahoo.com, and the house.gov web site all to crash. Typically, this volume of traffic will not be seen at agribusiness sites. However, too small a connection can easily be overwhelmed.
It is time to consider the three web hosting options in more detail. First, the firm can purchase a web server for use on its own premises, lease an Internet connection designed to handle the inbound traffic level to the site, and set up the appropriate security policies (upper left of Figure 4-57). Second, the firm can lease space on an Internet hosting company's server, relying on the hosting company's backbone capacity and expertise with websites (right middle of Figure 4-57). Third, the firm can purchase its own server and house it (collocate it) at an ISP's premises, weighing the ability to access the server for maintenance and the security of the collocation arrangement (bottom left of Figure 4-57). Each approach has advantages and disadvantages.
The do-it-yourself approach requires that the agribusiness closely monitor its own website at its own premises. Unlike space on a hosting company's server, the Internet connection purchased will not be shared with other hosting customers, although capacity depends on the access connection and on whether the ISP offers an SLA (Service Level Agreement, or guarantee of traffic capacity). The connection will, however, be shared with other customers using the ISP's backbone. One advantage (and additional responsibility) is that Internet traffic can be monitored for security and adequacy by agribusiness personnel without using middlemen.
However, the do-it-yourself approach also requires that the agribusiness be capable of calculating traffic adequately, monitoring security, hiring appropriate network staff, and providing fault tolerant equipment to prevent the site from going down during power outages and other events. Should the site become more popular than expected, the agribusiness may have to upgrade the web server or the Internet connection (or both) to handle the increased traffic. Should the agribusiness overestimate web traffic, it may be stuck with an enormous amount of unused capacity. Costs will include initial set-up and purchase of a web server, with the recurring cost of the Internet connection from the business to an ISP (or NSP) to reach the Internet. Unforeseen expenses include having to bring in outside help to correct software, hardware, and security problems. For agribusinesses in rural areas, there may not be sufficient infrastructure to adequately self-host.
For self-hosted businesses, there can be a variety of administrative details involved in making sure domain name(s) point to the correct IP address and are renewed in a timely manner. When the agribusiness hosts a site itself, no hosting ISP is providing security, troubleshooting, or environmental protection (such as lightning arrestors and backup generators). The agribusiness has to handle such details twenty-four hours a day, seven days a week. It must defend the site from hackers and reboot the server if it goes offline in bad weather or crashes due to software or hardware problems. Most hosting companies offer logs and statistics to hosting customers at no charge; when sites are self-hosted, statistics and log programs may have to be purchased.
If the second option, hosting with an ISP (or hosting company) is chosen, there are additional considerations. Hosting fees typically are based upon traffic to the website either in hits or in bits per month downloaded. Another consideration is the network load of the hosting company. In many cases, attractive rates for hosting camouflage oversold or mis-managed ISP backbone connections. Overbooking of the ISP's backbone can make the agribusiness' site unavailable or take an interminable length of time to load during heavy traffic periods regardless of the viewer's connection speed. Often, hosting companies purchase their bandwidth from an intermediate ISP (rather than an NSP), meaning that each hop to the Internet is a potential network bottleneck. However, directly hosting the site at a tier 1 carrier can be prohibitively expensive.
It is always good advice to visit the hosting company personally to see the server that will host the site and physically examine the surroundings. Another issue concerns the availability of technical support personnel during non-business hours. If the site goes down at 6:00 p.m. EST Friday, that could be a disaster for a firm counting on weekend West Coast or Asian viewers if no ISP help staff will be available until Monday morning.
Many hosting companies allow customers to use software packages and specialized services such as shopping carts, e-commerce programming, RealAudio or RealVideo servers, and other specialized equipment the firm would have to purchase itself if it self-hosted. Multimedia issues are discussed in more detail in 4.9.10, while 4.9.6 covers application and operating system issues. These services may not be available from all ISPs, or prices may differ dramatically among those that do offer them. By using a hosting company, rural agribusinesses can avoid slow or non-existent high-speed infrastructures and avoid having their website traffic compete with the firm's own Internet traffic.
A third option is collocation (shown on the right side of Figure 4-57). Collocation involves placing (collocating) a purchased or rented web server dedicated completely to the business at the ISP's premises. Collocated servers have the advantage that certain connection charges can be avoided, and they may offer greater speed when directly connected to the ISP's Internet pipe. Security can be improved when the server is kept in a locked cage at the ISP premises, but it is important that business personnel be able to access it on weekends or evenings in case of problems. Under many collocation schemes, the agribusiness, not the ISP, is responsible for making sure the server remains operational; the ISP provides only a cage, a connection, and electricity. Thus, it is also important to understand the emergency power supply and weather event policy of the ISP that will host the collocated server.
An agribusiness can choose Internet connections and e-mail service from an ISP not involved in hosting its website. However, there can be pricing advantages to using a single source. Policies regarding the number of e-mail accounts, the volume of e-mail, spamming, and the like vary by ISP. Some ISPs have been blacklisted for spamming, so that e-mails sent by the firm might not be accepted at many destinations.
Pricing of website hosting will vary by which of the three options is chosen and according to the hosting company or ISP. There may be fixed costs of installation and for web servers, recurring charges for disk space on an ISP or host server, and recurring fixed or traffic sensitive charges (or both) depending on the ISP, NSP, or hosting company. Table 4-35 gives a quick summary.
The first part of Table 4-35 (best read in connection with Figure 4-57) shows that the self-hosting agribusiness may face higher costs in hosting its own site. First, by taking on the responsibility of hosting, it incurs webserver costs, server installation, firewall costs, and programming expenses. Recurring programming services will be needed to maintain the operation of the site and to provide security.
Self-hosting requires a dedicated access connection (such as a T-1) to be obtained from a telco or other provider that is capable of carrying website traffic and Internet traffic from the agribusiness to the ISP POP. The agribusiness must pay an ISP to carry that traffic on to the Internet. If the agribusiness self-hosts, Internet access will include two kinds of traffic (website and agribusiness Internet) as it flows over the ISP backbone to the NAP to reach the Internet.
Some installation charges may apply and additional CPE may be required to support the dual traffic over the access connection that takes traffic from the agribusiness to the ISP's POP. The main advantage is greater control over the website and better security. There is a danger that web traffic could congest the access connection or the ISP's connection to the Internet so that both web traffic and ordinary agribusiness Internet traffic would be slowed. The ISP's flat-rate pricing for Internet access may be augmented by transfer charges for traffic above a certain number of megabytes per month when the agribusiness self-hosts.
If a hosting company hosts the website, fewer maintenance or IT personnel charges are incurred. With a hosted site, the agribusiness does not have to have an Internet access connection of any kind. The host's POP to Internet NAP connection is the main cost. The agribusiness is charged a recurring hosting fee in combination with a rate on the actual data transfers made using the capacity of the connection from the hosting company to the Internet. Installation charges would be the main up-front costs. If web traffic exceeds the expected level, a service upgrade might be necessary. Some hosting companies offer unlimited web hosting for a fixed monthly rate, but in order to offer low rates such hosts may overbook the circuit from host to Internet NAP. Other unlimited access plans may be overpriced relative to the traffic likely to be generated by a small website. One hosting plan does not fit all agribusinesses.
Finally, collocation strategies save the agribusiness the expense of increasing bandwidth between the ISP and the business location. The ISP and agribusiness share the expenses of setting up a dedicated server (owned by the agribusiness but located at the ISP). This allows the agribusiness to save the cost of back-hauling web traffic to its own location and possibly overwhelming the access connection used by office staff. The agribusiness pays the ISP for the ISP to NAP connection used for its ordinary Internet traffic together with traffic for the website itself. It would also pay a telco or wireless firm for a dedicated access connection from the agribusiness to the ISP's POP to serve office Internet demand.
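One way to compare the three options is to amortize up-front costs over a planning horizon and add recurring charges; the dollar figures below are hypothetical placeholders, not quotes from any provider:

```python
def monthly_cost(setup: float, recurring: float, months: int = 12) -> float:
    """Amortize one-time setup over a planning horizon and add the
    recurring monthly charge."""
    return setup / months + recurring

# Hypothetical placeholder figures for the three hosting options:
options = {
    "self-host": (6000, 900),    # server, firewall, T-1, staff time
    "hosted": (300, 150),        # set-up fee, hosting + transfer fees
    "collocated": (4000, 400),   # own server, cage and connection fee
}
for name, (setup, recurring) in options.items():
    print(f"{name:>10}: ${monthly_cost(setup, recurring):,.2f} per month")
```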
The actual cost of hosting must be balanced against the benefit of increased control and against access connection costs. An important advantage of collocation or of using a hosting company is that the agribusiness' own infrastructure (if poor) is not at issue. The choice of a hosting company and the reservation of IP addresses and domain names are only the first ingredients in an agribusiness' Internet strategy. Next, website design and maintenance are considered.
4.9.4 Website Design, Programming, and Maintenance
Website design, programming, and maintenance are covered together to underscore the philosophy that a website is not an end in itself but an ongoing process. Because an agribusiness' web site is built in the dynamic realm of virtual space, it can function as both a magazine advertisement and a new office building; however, unlike a static ad or a completed construction project, work on a website is never finished. This subsection discusses examples of what a website can do, the steps towards creating or updating one, and some pitfalls in design, along with guidelines for maintaining a website.
The WWW is a multimedia vehicle. Hence, websites can contain text, graphics, product photos, video, audio, interactive demonstrations and tutorials, frequently asked questions (FAQs), links to e-mail and telephone numbers, chat rooms, guest books, e-mail and fax auto-responders, and many other messages. The design of a website is best thought of as an ongoing process. Sites must be tailored and updated regularly to suit changing customer and agribusiness needs.
Readers are already familiar with many examples of the potential of websites for agribusinesses and non-agricultural businesses alike. Here are several examples of how websites and Internet use fit agribusiness needs. The B2B market is an especially hot area where farmers may purchase feed, seed, and equipment online. One example is the @gshop service from Progressive Farmer magazine's agriculture.com, through which farmers are able to purchase supplies in a bid-asked process that is free of middlemen. Another example is pricebots, price-shopping services that hunt through the Internet to find the lowest prices for inputs worldwide and allow buyer and seller to interact. Price alarm and warning services dispatch e-mails or even execute limit orders on forward and futures markets, giving farmers and marketing firms the ability to react automatically to price changes.
Weather and GIS (Geographic Information System) monitoring sites can alert farmers to upcoming severe weather or frosts, highlight areas where hail damage has occurred, provide scientific advice concerning when to irrigate, and summarize field scouting reports. E-commerce applications (touched on in more detail in 4.9.9) use web sites to fill and track consumer orders, schedule sales appointments, answer product questions, and provide interactive technical support. Clients can pay bills online and obtain account statements as well.
Other examples of how web sites can be useful include helping agribusinesses to reposition themselves, such as helping cattle ranches earn income from guest ranch and hunting services. Industry groups benefit from being able to offer PR information about agriculture and use web sites to gather marketing and legislative information so it can be instantly communicated to members. Employee recruitment and the spotlighting of business community involvement and charity projects are also done on web sites. Auctioning and bartering sites are becoming particularly important in high-end breeding operations and for marketing feeder cattle. Real-time video inspections of orders to be shipped, international market expansion, gathering of market information, better access to extension bulletins, and solicitation of investors round out the list.
In short, websites serve many needs in addition to direct sales, advertising, and communication. Because of the many possible uses, it is important for the agribusiness to develop a website sequentially. According to MindSpring, there are five steps to be followed in web site design and start-up [MindSpring, 1999]: picking a site strategy, designing the site, setting up the site, maintaining the site, and marketing the site. The focus of the next passages will be on the first four steps. Site marketing, which consists of promotion, measurement, and management, is covered next in 4.9.5.
The first step, choosing a site strategy, includes selecting both quantitative and qualitative goals for the web site. Qualitative goals include polishing the corporate or industry image, disseminating product information, boosting employee morale, and improving customer service for existing customers. Quantitative goals include winning new customers, obtaining new international or national sales prospects, creating a retail storefront or e-agribusiness, and even distributing demonstration software. The site can also be used to distribute sales catalogs, to survey customers, and even to facilitate anonymous complaints. It is crucial to prioritize objectives according to their importance and implement decisions quickly. By creating a digital information library about products, creating 24-hour virtual employees, gathering instant market research, and establishing a retail storefront, MindSpring argues that the web will "make life easier for you and your customers" [MindSpring, 1999, p. 1].
The second step, web site design, is another service that can be done in or out of house, depending on the needs of the agribusiness, staff abilities, and budget. One advantage of web sites is that they are relatively inexpensive forms of advertising and communication. Both functions are important because websites are a multi-faceted medium. First, the web is an advertising medium capable of sending customers and prospects advertising messages. Second, a web site functions as an interactive, customer-centered medium allowing customers, suppliers, and prospective employees to communicate cheaply and instantly with the agribusiness.
There are many web design pitfalls. Some have to do with the simple visual design of the page while others are related to more complicated programming issues for dynamic web sites. Before briefly discussing what to look for in a web designer and a web programmer, consider some common design and programming pitfalls. Then, it will be more apparent why the twin decisions of design and programming are often assigned to professionals outside the agribusiness.
Design also depends on the demographics of existing customers and new viewers. The diversity of viewers themselves and the diversity of their computers, video displays, browser versions, operating systems, and connections are extremely important. Some viewers may be dial-up customers with 28.8 modems running Windows 3.1, while others may be on a large corporate network using a Windows NT client and a T-3 connection. The audience may use non-IBM PC platforms such as Unix workstations or Apple Computers.
Graphics displays of viewer computers may range from 640x480 to 1600x1200 while color depths can range from 8 to 32 bits. Different web browsers from IE (Internet Explorer versions 3 to 5) to Netscape versions 2 to 4.7X (now also version 6) each perform somewhat differently. There can be dramatic differences in how well content displays if at all. Viewers from OSPs such as AOL or Prodigy, from wireless providers, or from WebTV frequently report they cannot see content or perform certain actions (such as launch forms or order). Foreign language viewers may have additional concerns, especially if their browsers are set to see content in a native alphabet that is significantly different from the Latin character set used by English.
While the idea of the Internet is to permit interconnection, it is important to understand that a firm's web site will "look different" on one computer compared to another depending on graphics capabilities, operating systems, and browser versions. Furthermore, web page content will load slowly (or not at all) depending on the viewer's Internet connection. Even if both the viewer and the agribusiness' web site have high-speed connections, Internet congestion and routing hops can slow traffic to a crawl.
There can be requirements on the viewer's computer as well. If the web site collects orders or form data from customers using middleware, the viewer must have a compatible browser. Web sites with visual or audio content may require browser plug-ins such as Shockwave, Adobe PDF, or the RealAudio player. Viewers who do not have these programs will have to download them. Even though most plug-ins are free in an accounting sense, the economic costs can be significant in the time it takes to download and install them, especially for the sizable number of dial-up viewers.
Guidelines to make sites interesting enough to attract return visitors include informative and interactive content on an easily navigated site. Since presentation of a particular product is of necessity short on the index page, viewers who want more detail should have the opportunity to link to explanatory text deeper in the site. Clean images (those at sixteen or thirty-two colors) take less bandwidth and make sites faster to load. Many web designers suggest that page requests (HTML files and images) be kept between 15 kB and 40 kB so that AOL dial-up visitors and others with slow connections are able to view content. For example, viewers may not wait around over a minute to download a 250 kB photo. Recall that a file's size in bytes is one-eighth the number of bits that must be transferred (250 kB x 8 = 2 Mb). Without taking into account error, overhead, congestion, and other factors, the fastest the download could occur over a 56 kbps modem is over thirty seconds. An increasing number of sites (such as weather.gov) offer viewers the choice of everything from simple text only to intricate multimedia content through selection links on the default page.
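The download arithmetic generalizes into a one-line rule of thumb:

```python
def download_seconds(file_kilobytes: float, line_kbps: float) -> float:
    """Best-case transfer time: kilobytes x 8 gives kilobits, divided
    by the line rate. Error, overhead, and congestion are ignored."""
    return file_kilobytes * 8 / line_kbps

# The 250 kB photo from the text over a 56 kbps dial-up modem:
print(round(download_seconds(250, 56), 1))   # about 35.7 seconds at best
```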
Setting up the site is the third of MindSpring's five steps. HTML design should be thoroughly tested with a variety of browser versions and operating systems, with AOL and WebTV, and on different graphics screens. Misspellings and non-working links should be corrected before the site goes live. Yet even with a perfect HTML design, there are many web site programming pitfalls to avoid. Many agribusiness websites are already a mix of design and programming, and as programming becomes increasingly important, some of its pitfalls deserve consideration.
Typically, programming problems surround the qualifications of the programmer, the use of web middleware, and the interplay with the characteristics of the visitor's machine. Middleware is defined as "a layer of software designed to sit between one system (usually a client) and another system (usually a server) and provides a way for those systems to exchange information or connect with one another even though they have different interfaces" [Sheldon, 1999, p. 622]. Originally, web sites were written in HTML only and were essentially static documents built solely from HTML design elements. As the Internet grew in importance, specialized programming was needed to allow web pages to come alive with attention-getting features such as real-time clocks, newswires, and stock tickers. As e-commerce skyrocketed, viewers needed to be able to order online and access dynamic databases containing sales information, availability, and other kinds of information.
For agribusinesses, middleware allows web users access to select information that is on mainframes and data servers in the private data network of the agribusiness. The web server runs a middleware package that can communicate with the agribusiness private computers and the viewer's browser to allow Internet visitors access to certain kinds of useful information without compromising security of the private data network. The necessity for middleware varies according to the type of business.
For example, a nursery can use middleware to showcase the plants it has available using plant pictures, prices, and other characteristics as fields in a database. Retail or wholesale customers may search by common name, scientific name, flower color, sun preference, or any other characteristic. As new plant varieties become available, instead of creating a new illustrated web page describing each plant, a picture and some characteristics of the plant are entered on a database form. Then, the plant is added to the database and the plant's image (along with accompanying information) is automatically seen by site visitors. As pricing, colors, or the stock available is changed on the nursery's private data network, it is automatically changed on the web site. If necessary, the database can display plant pricing and information one way for retail customers and another way for the wholesale trade.
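The nursery example can be sketched in miniature. All field names and prices below are hypothetical, and a real middleware package would query the nursery's private database over a secure connection rather than an in-memory list:

```python
# Hypothetical plant records as a middleware layer might retrieve them.
plants = [
    {"common": "Ixora", "flower_color": "red", "sun": "full",
     "retail": 6.95, "wholesale": 3.50},
    {"common": "Croton", "flower_color": "none", "sun": "partial",
     "retail": 5.95, "wholesale": 2.75},
]

def search(records, **criteria):
    """Return records matching every supplied characteristic."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

def display_price(record, trade=False):
    """Show wholesale pricing to the trade and retail pricing to everyone else."""
    return record["wholesale"] if trade else record["retail"]

matches = search(plants, sun="full")
print([m["common"] for m in matches])          # ['Ixora']
print(display_price(matches[0], trade=True))   # 3.5
```

Adding a new variety means appending one record; every search page then shows it automatically, which is the labor-saving point of the database approach.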
Programming such sites can quickly become complex and expensive. The HTML design is a simple boilerplate or template, so the content results from what has been entered into the database and how well it displays. Because sites using middleware are complicated, there are more chances for things to go wrong. In a worst-case scenario, middleware (if not fully tested) can prevent the viewer from seeing the information desired or cause the entire e-business operation to stop. Poorly written Java applets can cause a viewer's browser to crash. If the database is improperly constructed, the wrong price may be charged, or credit card authorization glitches may prevent any orders from being accepted online.
Pricing of web design and programming services varies considerably, and it is extremely difficult to quantify what a web site entails. According to netb2b.com, median prices from web design and programming firms for full site development vary as shown in Table 4-36. The prices quoted depend on a standardized definition of what constitutes a small, medium, or large site.
Sources: Carmichael (1999a, 1999b, 2000).
The large dispersion of prices suggests at least two conclusions. First, some businesses are paying up to several hundred times what others do for what may appear to be roughly equivalent results, so the buyer should be wary of possibly inflated pricing. At the same time, it can be difficult to judge whether results are equivalent because of the complexities involved in a working web site. The buyer should also be wary of low pricing, because it could mask a cookie-cutter design, an untested site, or many other unpleasant possibilities.
However, there is no question from these results that agribusinesses should investigate at least five bidders for their web site project. It is noteworthy that pricing at web design and programming firms located in college towns is comparatively lower than for other areas. However, research into population density and web site pricing has concluded that there is insufficient evidence to support the hypothesis that "prices are substantially higher in large metropolitan areas than they are in small towns or rural areas" [Brenner, 1999]. Some projects are contracted to firms with programmers in Asia in an effort to save money.
A new site may be priced as a project (using total prices as above) or on an hourly basis. Nationally, hourly fees range from $75 an hour or less for basic HTML to over $200 per hour for middleware programming (database and Java/Shockwave) [Carmichael and Morrison, 2000]. Design shops that charge by the hour may be reluctant to quote a total price to the agribusiness. Shops that do quote a price may have escape clauses in the contract permitting substantial cost overruns. It is important to get everything down clearly in writing before work is started. As the project continues, expect to pay more if ideas from either side add to the site.
Because of the complexity of web design and programming, most agribusinesses will use outside firms to create, program, and maintain websites. Even if personnel within the agribusiness are qualified, there is no guarantee that they will always remain on the payroll. However, it may make sense for agribusinesses that rely heavily on the Internet to keep as much of the programming function on the firm's payroll as possible due to security and copyright concerns. Web design and programming can be another thorny legal issue, because some programmers and designers insist that they (rather than the business that pays for the web site project) own the code and design.
Selection of the right web designer or programmer is not always simple, and several guidelines should be kept in mind. First, there is no need to limit selections to designers or programmers in the immediate area, since communications and work assignments may be handled over the Internet. Second, it may be better to use a shop with several designers rather than a freelancer, although a freelancer with a superior reputation may be able to charge less. Third, it bears repeating that estimates should be solicited from several designers or programmers; the range of prices shown in Table 4-36 is too great to let this advice go unrepeated. Developing an RFP (Request For Proposal) describing what is desired for designers or programmers is a useful process that helps the agribusiness focus on what it hopes to accomplish with a web site.
Fourth, it is important to watch expenses carefully and remain involved as the work progresses. Full payment should be made only upon satisfactory and timely completion of the project. Many firms have faced unexpectedly slow design progress or have paid money without ever getting anything in return. It is extremely important to check working models of the site (or program) as work progresses using different browsers, computers, and connection speeds to see how well the system performs in action. There are many cases when the designer or programmer has insufficiently tested the site and the results have been catastrophic.
While agribusiness managers do not need to be Internet experts to use the technology, it is important to remember that communication is needed for sales to occur. Therefore, there are four communication-related characteristics that the agribusiness should insist upon for its web site. First, the site should have e-mail links to the agribusiness, and those e-mails should be answered promptly. While e-mail addresses used on the site will be harvested for spam lists, wading through junk e-mail is a small price to pay compared to not getting feedback and inquiries from viewers.
The second communication characteristic is contact information, such as telephone number and postal address, on as many pages as possible. This can help assure customers that they are not dealing with a short-lived or backyard agribusiness, as well as providing them with other avenues of contact. Non-Internet contact information also tells customers where to call and order if they are uncomfortable ordering online or have questions about the product that the web site does not answer. Furthermore, a working telephone number and postal address allow viewers to report that the web site is not working or to inquire about an order they never received. Web pages should also have e-mail links (to the agribusiness, not the programmer or designer) that viewers may use to report problems with the web site.
A third characteristic all sites should have is working links. Links on the site (and all content pages) should be re-tested regularly. Web sites may seem deceptively simple. While the freedom of possible actions a viewer may take is part of the allure of the WWW, when all the combinations and permutations are considered, the number of possible problems is staggering. Rather than having to be told about services and products the way the agribusiness chooses, customers can choose the order and level of detail based upon what information they want to know. However, the interactive approach can be unpredictable enough that the unexpected is likely to happen. A single misspelling of a link to a product or a failure to anticipate viewer responses to a form or shopping cart program can cause the customer to lose faith in the agribusiness. This is especially true if the ordering or information-getting processes do not work, take too long, or cause the viewer's computer to crash.
The fourth characteristic agribusinesses should ensure is possession of a complete copy of the site and the right to use it even if the relationship with a designer, programmer, or ISP is severed. The copyright to the site, all hosting passwords, program code, and a full backup copy of the site (including HTML and graphics) should be obtained by the agribusiness. There have been cases where hosting servers have crashed without backups, completely erasing a site's contents forever. ISPs and hosting companies have also been known to erase the sites of businesses that are delinquent in their accounts. If the agribusiness changes designers or programmers, it should not have to pay a new person or firm to reinvent the wheel.
Finally, during site setup and the early phases, agribusinesses should visit their site regularly. One way to achieve this is to suggest that employees set it as the default page that their web browsers (at home and at work) automatically open at the start of an Internet session. By taking this step, the agribusiness is likely to notice if the site does not come up at all on a particular day or if something is wrong. Browser plug-ins, middleware applications, links, and programming features should be monitored and tested regularly so the agribusiness gets the site it paid for. Indeed, frequently visiting the site during setup sets the stage for the fourth step in the design process, maintenance.
Now that the first three steps of the web design process are outlined, it is time to introduce site maintenance. Maintenance is one of two steps that are never finished because even when a web site project is "completed", the agribusiness should not consider the site done. New content, additions, deletions, and regular maintenance are as important as initial design and testing because the web site is visible to viewers worldwide. Unless web sites are maintained regularly, content can become stale and outdated, creating a poor image of the agribusiness to viewers.
Often, the lack of maintenance does not result in anything more than embarrassment. However, not changing information can even be expensive, as one poinsettia grower found in 1999 when customers demanded the lower 1997 Christmas special prices that had been left on a web site completed two years earlier and never updated. This kind of situation (where an agribusiness does not change the site at all after creation) obviously discourages people from checking the web site to see what is new with the company, defeating the purpose of cyber communications.
Storing and managing the voluminous material that can end up on an agribusiness' website is the job of web maintenance (or web site management), another make-or-buy decision. Whether done exclusively in-house, exclusively by designers, or through a mix, web maintenance has both an art and a computer science perspective. The computer science perspective views a site not as final-form output but as revisable, dynamic source data that requires considerable skill and troubleshooting effort to display flawlessly. The art perspective views a site not as a completed work but as an ongoing creative assignment with shifting technological and ever-expanding business objectives. These perspectives tend to annoy agribusiness managers (and their accountants) because both the designer and the programmer may seem to operate without limits or any final deadline. Outside designers and programmers may do their best work if kept on a conditional retainer, where each month a set of tasks is to be accomplished for a particular sum before a particular date. That posture seems better than the open-ended retainer approach, where both parties get around to maintenance when they feel like it.
If maintenance is done inside the firm, the web site must be managed in such a way that individual creativity and departmental initiative do not give way to total chaos or redundancy. A common approach (and one that can be very fruitful) is to give various departments and individuals their own sections of the website while having them report to an in-house ombudsman. This tactic can get everyone involved in improving the site and making it useful to customers while keeping the look uniform and avoiding divisive themes. There may be no reason to hire an outsider simply to change prices, post monthly specials, or add e-commerce data to middleware.
However, according to Seybold, "treating web sites as persistent data stores introduces a new set of complexities in document management" [Seybold, 1996]. Some organizations allow many departments or individuals to add to the firm's web site. It is easy for each department to create and update its own web pages with differing goals, a variety of creative and computer science approaches, and different software packages. If too many different people or departments put their own mark on the website without a central policy and a single watchdog, the result can be redundant source files, incompatibilities, and other problems. It is hard to track file and graphics revisions made in at least two locations, by more than one person, with more than one objective. In-house maintenance is possible using the numerous software packages available to help non-designers and non-programmers keep content current. However, some of these (Microsoft FrontPage is an example) require special web hosting arrangements and may become unstable when used with other web editing software packages, even causing the site to become invisible.
One of the benefits of using the WWW is that it is possible to track and measure what works (and what does not) using a variety of measurement tools. This subject, along with the related topic of promoting the web site so that many visitors see it and return frequently, is covered next. Because visitors tend to favor well-designed, well-maintained sites, promotion efforts are most fruitful once design and maintenance are in order.
4.9.5 Website Promotion and Measurement
A well-designed website (even if hosted by a well-managed ISP with plenty of bandwidth) is of little use if no one visits it. Experts suggest that the site be targeted towards existing customers as well as new prospects for maximum success. Web measurement and web promotion are important tools that help agribusinesses tell if their efforts are successful or not.
Web measurement is a unique feature of the Internet. Typically, visits to a web site are automatically (and precisely) recorded in a log file. Statistical reports are available free from the hosting company (or for the price of an inexpensive software package) to answer who visited the site, when, which parts, from where, how long, and how often. The main units available include gross visits, unduplicated or net visits (uniques), page requests, hits, and bytes. These units can be cross-tabulated by time-of-day, date, month, and a host of categories that give characteristics of visitors and visits. It is also possible to tell the order in which a visitor goes through the site, how long is spent per page, and many other details of a visit.
A page request occurs whenever a particular HTML page is accessed, while a hit occurs whenever any kind of file is accessed. For example, if an HTML page has three images, a total of four hits is recorded every time that page is accessed. It is more difficult to count visitors, since the log can only identify visitors by IP address. For level four and higher Internet access, IP addresses are typically assigned to a particular machine. When a person sitting at that machine requests the same web page twice, it is counted as two gross visits but only one net visit. These kinds of connections are typically associated with office Internet viewing or high-speed residential access such as cable modem or DSL. However, the majority of visitors to a web site cannot be identified with a particular machine, since they are assigned dynamic IP addresses. Hence, data about visitors is categorical, but data about visits can be specific.
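The counting distinctions above can be illustrated with a toy log. The entries are simplified and hypothetical; a real server log follows a format such as the Common Log Format with many more fields per line:

```python
# Each entry: (IP address, requested file). An HTML page with three images
# generates four entries (four hits) but only one page request.
log = [
    ("10.0.0.5", "/index.html"),
    ("10.0.0.5", "/logo.gif"),
    ("10.0.0.5", "/photo1.jpg"),
    ("10.0.0.5", "/photo2.jpg"),
    ("10.0.0.9", "/index.html"),
    ("10.0.0.5", "/index.html"),   # same machine returns: a second gross visit
]

hits = len(log)                                        # every file access
page_requests = sum(1 for _, f in log if f.endswith(".html"))
net_visitors = len({ip for ip, _ in log})              # unique IP addresses

print(hits, page_requests, net_visitors)               # 6 3 2
```

Note that the net-visitor count is only trustworthy when each IP address maps to one machine; with dial-up visitors on dynamic addresses, the same person could appear as several "unique" visitors.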
Nonetheless, visitors can be classified according to their country, operating system, browser version, and graphics display capability. In this way, the agribusiness is able to gauge who is visiting the web site and how well their content is seen. It is possible to identify and classify viewers by TLD (.gov, .com, .edu, etc.) and by common domain names such as aol.com. Log data can even be used to tell whether competitors are visiting a site, as was the case during the 2000 GOP Presidential race. When visits to Sen. John McCain's campaign web site from rival George W. Bush's domain were among the top 10 traffic generators, the McCain camp knew its web site was being heavily analyzed by the opposition. Logs were able to tell what parts of the site were attracting the most attention so that competitive responses (and a press release) were ready. Log analysis is especially useful for security purposes because it can show whether attempts are being made to access directories or files that are meant for subscribers, employees, or are otherwise un-linked to the main site. Another reason log analysis is important comes from its ability to measure web promotion, the next topic.
Web promotion refers to the promotion of the site to existing customers, search engines, newsgroups, link exchanges, and the formation of associations with receptive organizations. Table 4-37 highlights fifteen steps that can be used to promote a web site. Promotion is another make or buy decision for the agribusiness. However, whether the firm self-promotes or hires a web site promoter, following these steps can pay off. Depending on the nature of an agribusiness, some of these may be more important than others.
The first step has already been mentioned in 4.9.3 when domain names were discussed, but it bears repeating. Agribusinesses should choose at least one domain name that contains a description of what the business does, sells, or services. Recall that more than one domain name can be programmed to point to a particular web site. Many agribusinesses have already taken advantage of descriptive domain names for two specific promotion reasons. First, it is easier for customers to remember nursery.com and to type it in their browser's location field than baileynurseriesofminnesota.com. Second, if one of the keywords to be associated with the business is also the domain name, search engines are likely to rank the site higher.
Keywords are words that are likely to be associated with the business when customers try to search for it. More will be said about keyword promotion in step four. However, keywords that could be used as domain names include categories such as those in classified directories, brand names the firm has the right to use, product benefits, geographic terms, or any characteristic in the public's mind. Even if the "good names" already seem to be taken, new kinds of TLDs (Top Level Domains) and geographic names are available for registration, as detailed in Table 4-34 in 4.9.3. It always makes sense to register the names the agribusiness does business under as domain names, even when keyword domains are used. That way, someone who knows the business name can find the website.
Adapted from Fairchild (1998).
The second step in promotion is accomplished if the first step has been. However, the lure of a "free" web site may cause small agribusinesses to ignore step two. Ignoring step two also prevents other steps (especially seven and nine) from being taken. If the agribusiness uses a website hosted by a free hosting site such as geocities.com, or hosts via a dial-up account under an ISP domain such as www.isp.net/~username, it is asking for trouble. Some search engines entirely miss such sites. Furthermore, the URLs are hard to remember. For example, sites on GeoCities may require up to twenty characters with several slashes in the right places. Free web hosting companies make money by selling advertising pop-up windows and banner ads that compete with the content of a hosted site.
The third step seems obvious: make the welcome (index.html) page exciting. However, the index page must be exciting both to human viewers and to search engine robots (or bots), spider programs that browse up to thousands of sites an hour. Many viewers enjoy eye-catching graphics, traveling tickers, contests, short animations (provided they can be downloaded over a dial-up connection), interesting backgrounds, and the like. However, adding too many bells and whistles may cause the substance to get lost in the style, increase download times, force viewers to download plug-ins they don't want, annoy them with time-consuming commercials, or even cause their browser to crash.
Meanwhile, there is a tradeoff between content that humans find exciting and content that leads to better search engine rankings. Visually exciting frames, involved Java applets, and automatic database windows may be attractive to humans, but they are virtually invisible to search engines. Search engines look for the same keywords in web pages that their customers do, so information-rich text is what excites them most. Promotion to search directories such as Yahoo! (where a human visits every site before it can be listed) can be difficult. However, these directories are among the most popular, and making the site more attractive to human editors can be an important source of traffic. To fulfill their requirements, it is necessary to have information-rich text using keywords in context, interesting links, and content that furthers knowledge in general. These tactics also encourage viewers to stay connected and bring them further into the site. A professional web promoter may be of real help in improving site rankings with the most popular engines (which can be responsible for over 90% of new visitors).
The fourth step is to have well thought-out titles, descriptions, and keywords (meta tags) on the index page and other pages within the site. Only the title is visible to viewers, and if they decide to bookmark an agribusiness' page (in Netscape) or choose it as a favorite (in Internet Explorer), the title is what they click on to return. Titles should be descriptive of page content with some keywords peppered in. However, titles with too many keywords or over a certain number of characters risk being seen as an attempt to spam search engines.
Search engine spamming (the deliberate attempt to trick viewers into visiting a site by listing unrelated keywords or too many of them) makes fruitful searches hard for search engine customers. Many engines are programmed to reject sites that spam and the names of the worst offenders may even be blacklisted among engines. Spamming is most likely to be attempted in the parts of an HTML document not seen by the viewer, the so-called meta-tags. Meta-tags typically include two fields on the web page, the description field and the keyword field. A few complete sentences of copy using the most important keywords (and describing what is on the site) are used in the description field.
The keyword field should contain at most fifty to one hundred keywords (single words or word combinations) that are most likely to be used in finding the site. The agribusiness or web promotion consultant should draw up a keyword list that contains specific descriptive terms along with common misspellings of keywords, geographic locations, brands, sizes, etc. The success of establishing keywords for an agribusiness site depends heavily on who else is trying to use the same keywords. Keywords such as "lychee" or "carambola" are likely to be far more effective than a keyword like "stud service" because there is far less competition (within agriculture and outside of it) for their use.
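A small sketch of the keyword-field advice follows. The sample keywords are hypothetical, and the fifty-to-one-hundred ceiling is the rule of thumb stated above, not a published search engine limit:

```python
# Hypothetical keyword list for a tropical fruit nursery, mixing specific
# terms, geography, and a deliberate common misspelling.
keywords = [
    "lychee", "lychee trees", "carambola", "starfruit",
    "tropical fruit nursery", "south florida fruit trees",
    "litchee",  # common misspelling, included on purpose
]

def keyword_field(words, limit=100):
    """Join keywords for a meta tag, rejecting lists long enough to look like spam."""
    if len(words) > limit:
        raise ValueError("too many keywords; engines may treat this as spam")
    return ", ".join(words)

print(keyword_field(keywords))
```

Keeping the list generation in one place makes it easy to reuse the same keywords consistently in titles, descriptions, and page copy.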
Often, it may appear that using widely used terms such as "orange juice" is hopeless. However, modifiers and alternative forms such as "wholesale Florida orange juice concentrate" or specific trade terms such as "reconstituted Florida orange juice, Brix-acid ratio 13:1" could work as keywords or in meta-tag descriptions for a juice processor. Such keywords might be closer to the terminology a professional buyer would use instead of typing in "orange juice" alone and getting tens of thousands of results. Remember that it is necessary to consider the target searcher-viewer. If a site hopes to attract local retail customers, the keywords used will differ from those that would attract technically oriented or wholesale buyers.
The next two steps require caution. Step five involves posting short, non-commercial-sounding responses to topical discussion threads on Usenet newsgroups and on AOL and other commercial discussion groups. This means posting a public e-mail reply to a discussion thread (the subject of an earlier posting) that will be seen by people interested enough in a particular subject to have subscribed to that newsgroup. Caution is required: a blatantly commercial message will be seen as newsgroup spamming and may be ignored or become the subject of public flames (attacks against the poster in the newsgroup) or private flames, where the poster's e-mail address is flooded.
The agribusiness should only use this method of promoting its website if it can give advice or counsel to people who have put out threads looking for information in an area topical to the business. For example, subscribers to the newsgroup alt.agriculture.beef might welcome responses to specific questions about cattle breeding from a qualified breeding operation, even if the posting included a link to the firm's web address, etc. However, members of the newsgroup soc.tahitian.dancing (if there were any) might resent an invitation to visit the breeding operation's website, especially if the invitation was also sent to 2,000 other newsgroups. Newsgroup postings can be an excellent way for an agribusiness to get its name on the cyber map, but someone in the firm must read the newsgroup, compose the messages, etc. Furthermore, any e-mail address used to post on a newsgroup can be captured by spam list harvesting programs and may end up getting spam e-mail.
Cyber press releases can be sent out to promote sites through special automated services. This promotion option may be useful for agribusinesses when they open a new site, go online with e-commerce, or have some other newsworthy item to report about their site. It is important not to overuse this option since the trade press can only use items that have some news value.
The seventh step is closely related to the first and the fourth: build the web site and top keywords into sub-URLs. ADM has some domain names that serve as examples: sites such as admfeed.com and admgrain.com could be separately promoted and routed to the main web site admworld.com. However, an agribusiness does not need ADM's budget to use this promotion policy. Domain names cost less than $50 per year and can often be pointed to the index page of a website on another already-hosted domain for no charge or a minimal set-up fee by the ISP. The traffic that results can be substantial. Another related tactic is to promote different pages under the same domain. For example, a cattle breeder could promote its index.html page, its angus.html page, and its hereford.html page separately to the same search engines. Traffic would enter the website at the promoted page, and if pages are designed appropriately, there is no reason the index.html page has to be the main entrance. The multiple-page approach does not work with all search engines, however.
Step eight, having an outside link page and cultivating link exchanges with trade groups, educational, government, and other commercial sites, can be extremely important to building traffic. An agribusiness in Lee County might be able to get listed on Lee County Chamber of Commerce or other association sites, on the sites of organizations of which it is a member, and even on a local government site. Government and educational institutions can sometimes link to commercial sites if the site has great informational and educational value or has its own link page that lists sites in a particular industry or science. Vendors, neighboring businesses, and sites owned by people with personal relationships to the agribusiness may link to the site, especially when a reciprocal link is provided. This option can be time consuming, however, and may require much e-mail to get link exchanges going.
Step nine concerns the logistics of registering with search engines. Generally, it makes sense to register (and re-register every few months) with as many search engines as possible. Foreign and specialty engines should be included where applicable. Re-registration can be important because search engine listings grow stale once competitors for particular keywords (in the industry or out) register keywords. Other sites may become ranked ahead of an agribusiness that registers first and fails to promote on an ongoing basis. The main goal of search engine registration is for the site to rank near the top when the keywords are used by searchers. Many customers will only visit the first few sites on a search list that can contain hundreds or thousands of names.
Search engine registration is not the only reason that step ten (avoid constantly changing page names) is included. If viewers have bookmarked a particular web page on a site, they may not spend time hunting for it if a "file not found" error greets them on their next visit because someone decided to change page names or directories. If changes in domain names or directories are necessary, the old URL should redirect viewers to the new one. Search engines have built-in lags, and even if the structure of a web site has been changed, the new structure may not show up for months (if at all).
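One way to honor old bookmarks is a simple redirect table on the server. This is a minimal sketch with hypothetical page names; a production web server would issue an HTTP 301 (moved permanently) response for each retired URL rather than use an in-memory dictionary:

```python
# Map retired page names to their replacements so a "file not found" error
# never greets a returning visitor who bookmarked an old URL.
redirects = {
    "/plants.html": "/catalog/index.html",
    "/xmas97.html": "/specials/index.html",
}

def resolve(path):
    """Return the current location for a requested path, following the table."""
    return redirects.get(path, path)

print(resolve("/plants.html"))   # /catalog/index.html
print(resolve("/about.html"))    # /about.html (unchanged)
```

The same table doubles as a checklist of renamed pages when re-registering the site with search engines.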
Step eleven might be the most obvious. One often-forgotten purpose of a web site is to serve existing customers. Hence, an excellent way to promote a site is to tie it in with existing marketing efforts. The site can be mentioned along with e-mail addresses on all company literature, advertisements, Yellow Page ads, business cards, sales material, delivery trucks, in-store signage, packaging, and other collateral material. For businesses with automatic telephone answering (during or after office hours), it is a good idea to mention the web address if the answers to many common questions (hours, prices, product availability, specials, etc.) are available online. This helps customers find information when the business is closed or telephones are busy.
The twelfth step is advice on content. The Internet is not a purely advertising medium; it is also a communications medium. Since viewers are looking for information, entertainment, education, and advice, they may shy away from sites that amount to pure sales pitches. Search engines know that viewers prefer sites with the keywords they searched for. Furthermore, commercial sites can inform about what a product does, how it is made, why it helps customers, how much it costs, what the terms of delivery and credit are, how it can be used, who else uses it, etc. An excellent feature of HTML is that content can be layered in successively greater levels of detail when and if an individual viewer seeks detail. For instance, detailed engineering blueprints or owner's manuals are usually several clicks inside a website, since such technical details download slowly and are not sought by everyone.
Pricing information on the site can be a tricky issue, especially for wholesalers or retailer-wholesalers. If the agribusiness is eager to promote low prices, special discounts, easy credit terms, or other features, it can do so within page titles and keywords. However, the format of a display or classified newspaper ad may not work on the web unless it is used as a link to web specials, etc. Wholesale firms may decide to layer their site so that price information is available only to subscribers with passwords. Whether pages are password protected or not, special codes can be used so that search engines do not index pages with proprietary information or trade pricing. However, if retail or wholesale customers are "price shoppers," the agribusiness may prefer to promote price-related content.
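The "special codes" mentioned above are the robots exclusion conventions. A hypothetical example (the directory name is illustrative) of keeping a trade-pricing area out of search engine indexes, either site-wide or page by page:

```text
# In robots.txt at the web site root, assuming a /wholesale/ pricing area:
User-agent: *
Disallow: /wholesale/

<!-- Or, inside an individual page's HTML head section: -->
<meta name="robots" content="noindex, nofollow">
```

Well-behaved search engine spiders honor these conventions, though they are advisory only and are not a substitute for password protection of truly proprietary pricing.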
The thirteenth step can be particularly useful for promoting the site to small-town community sites or to specialized online communities. Featuring employee photos, spotlighting news and accomplishments around the firm, and using the site to promote employees as well as the company is good for morale and good for business, because it converts employees into web promoters as well. Employees become more familiar with the web site when their pictures are displayed (individually or as a group). Furthermore, employees may promote the site by pointing it out to friends, neighbors, relatives, and other potential customers. Some firms even pay commissions to employees whose promotion efforts result in sales. Such programs require guidelines to avoid abuse and cross-purposes.
Next, if international viewers and customers are desired, the agribusiness should go out of its way to welcome and cultivate them. There are many ways of doing this, beginning with promotion to foreign search engines. Free web page translations can be obtained on the Internet, so single welcome pages in foreign languages (such as italiano.html, espanol.html, and francais.html) may be put on the site. The purpose of these welcome pages is only to greet visitors in their own language and to serve as the page promoted to search engines in that language, not to translate the entire site. From the foreign language welcome page, international visitors click to the pages they are interested in (written in English), which may be translated using special links. The agribusiness has to be careful, since automatic translation may offend rather than communicate. It is best to have someone who understands the language read the page and make sure the translation is accurate. Some agribusinesses with a shortage of multilingual employees have found that translating an entire site or parts of it into another language can pay for itself. While many customers may be able to order and carry out business in English, they may purchase more often and in greater amounts if web pages in their native tongue are available to help them get information about products and services.
Even without special web pages targeted to foreign prospects, many agribusinesses have become importers and exporters without having intended to. It is likely that most foreign orders will be for hard goods, non-perishable foods, or services rather than perishable products. Shipping charges and arrangements, duties, taxes, export regulations, and import laws in the country where the customer is located can vary dramatically. In addition to finding out about shipping charges, it is important to check with customs brokers and foreign consulates about rules specific to the destination country. More information is available from freight forwarders or from the federal government through the Department of Commerce's site, www.ita.doc.gov, or the Bureau of Export Administration's web site, www.bxa.doc.gov.
The last step shown in Table 4-37 encourages the use of measurement tools (mentioned earlier in this section) to see what works. If the tools are used, visits to a particular page or referrals from a link or search engine listing can be measured to observe which step or steps are bringing the most traffic. Promotion efforts can then be tailored accordingly.
The job of web promotion can become a complicated one, particularly if the agribusiness places great hope on using a web site to attract new business. Web promotion firms and freelancers are available for hourly retainers or for project-based fees to do some or all of the work. If an outside promoter is hired, work has to be coordinated with the individuals responsible for web design and programming because web promotion involves changing meta tags and titles in individual web pages.
4.9.6 OS & IP Applications and Services
Like other computers, web hosts and Internet servers located at agribusinesses have operating systems. Depending on the operating system and the web server software that operates on top of the OS, agribusinesses may be able to use specific applications or TCP/IP utilities as part of their Internet access, web hosting service, or both. Recall from Figure 4-57 (in 4.9.3) that the agribusiness may separate Internet access from web site hosting. Hence, OS and IP applications can apply to the web site equipment, to the firm's own Internet access equipment, or to both. Such equipment may be owned by the agribusiness and located at its own premises, owned by the agribusiness and co-located at an ISP or NSP premises, or owned by the ISP and shared with up to thousands of other users. Therefore, it is difficult to be specific, because much depends on the particular design of the network architecture. Beyond the access connection itself, there are three factors that affect what is available on an Internet server (whether a web server, an office host, or a combination of the two).
Server hardware is the first variable. Servers are network computers that provide services to users (the topic of this sub-section). The type of server is one important indicator of the services that can be provided. The web server can be a microcomputer such as a Pentium-powered IBM-compatible multiprocessor server, a mainframe or minicomputer server, or a superserver [Sheldon, 1999]. Pentium-based servers are the least expensive, but they can also handle fewer simultaneous requests from users.
Software, including the Operating System (OS) and the server software that runs on top of the OS, is the next variable. While all computers use TCP/IP to communicate over the Internet, an OS designed to work with specific hardware is also necessary. Common network OSs include Windows NT, Windows 2000, and UNIX-based OSs such as Linux, Digital UNIX, Solaris, and FreeBSD. Specialized server software sits on top of the OS as well. For example, Apache runs on top of UNIX, while Microsoft Internet Information Server (IIS) runs on NT or Windows 2000.
Ironically, the more expensive the server, the less expensive server software is likely to be. This is because PC server software tends to be proprietary and sold in separate modules, while mainframe software may be open-source and available for less than $100. The higher-level languages require specialized programming that can be precisely tailored to the agribusiness' needs. While out-of-the-box software needs far less programming, it can be more time-consuming to maintain and may fail to offer desired features. For example, inexpensive Pentium servers may use Windows NT as networking software, but require IIS for specialized Internet applications and the purchase of Windows Exchange Server for e-mail. More expensive mainframe computers may be able to use a single non-commercial higher-level language to support all TCP/IP applications and utilities without buying specialized modules.
The cost of operation can be a wash when the two are compared. Programmers who write high-level UNIX code tend to be well paid, while proprietary software tends to have reliability problems such as the BSOD (Blue Screen of Death) with which Microsoft Windows and NT users and network administrators are intimately familiar. Once the higher-level programming code has been written, UNIX-based systems tend to be reliable, so costs are mainly fixed, based on programmer time spent initializing and debugging the system. With many kinds of proprietary software, there is a high cost for the initial software purchase. This direct cost is often followed by even higher costs for network administrators and IT staff to fix mysterious glitches, add new software packages without affecting the network negatively, and respond to recurring crises [Kirch, 2000].
While all the server software and operating systems mentioned support some TCP/IP applications, there can be enormous variations in what is available to a particular agribusiness based on OS, server software, security concerns, and the policies of their ISP. Table 4-38 shows common and less common applications in the IP suite. The most commonly supported applications (by ISPs, OS, and server software) are at the top of the table, with less commonly supported ones below them.
The first three items are generally available to agribusinesses with Internet access. The WWW and DNS are closely related. If the primary DNS goes down, a secondary DNS is usually available. DNS problems can prevent all web sites, some web sites, or only the agribusiness' own site from coming up in a WWW session. FTP access to an agribusiness' site may incur additional charges or be impossible for web sites hosted by a hosting company.
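The symptoms of DNS trouble described above can be checked programmatically. Below is a short sketch in Python (a language chosen here purely for illustration) of testing whether a given hostname resolves:

```python
import socket

def dns_resolves(hostname):
    """Check whether a hostname can be resolved to an IP address.

    If every host fails to resolve, both the primary and secondary
    DNS are likely unreachable; if only one host fails, the problem
    is probably with that domain's own DNS entry.
    """
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False
```

A troubleshooting script might try a well-known host first and the firm's own domain second to tell the two failure modes apart.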
The next three entries are related to e-mail. E-mail options generally include SMTP for sending e-mail and POP3 for receiving it. Some ISPs and server software also support IMAP as an e-mail protocol. IMAP is ideal for users who move around from machine to machine because the mail is kept on a stationary server. Mail that is received may be kept on a POP server if the user chooses that as an option. However, many ISPs and corporate servers discourage that practice because users leave all their e-mail on the server, consuming disk space. With POP, a copy of sent mail stays only on the sending machine, so machines in the home office do not have archives of what was sent from a home or branch office machine. IMAP also allows users to share mailboxes, making collaborative e-mail possible.
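Whichever protocol delivers the mail, SMTP, POP3, and IMAP all move messages in the same header-plus-body text format. A brief Python sketch of parsing such a message (the addresses and content are fabricated):

```python
from email import message_from_string

# A fabricated message of the kind SMTP delivers and POP3 or IMAP
# retrieves; the addresses are illustrative only.
raw = (
    "From: sales@example.com\r\n"
    "To: buyer@example.net\r\n"
    "Subject: Delivery schedule\r\n"
    "\r\n"
    "Your order ships Tuesday.\r\n"
)

msg = message_from_string(raw)
sender = msg["From"]          # header lookup by name
subject = msg["Subject"]
body = msg.get_payload().strip()
```

The protocols differ mainly in where such messages live (on the server with IMAP, on the client with typical POP use), not in what the messages look like.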
NNTP is the protocol that supports news servers for USENET mail discussion groups (discussed in 4.9.5). Because USENET groups require transfer of an enormous amount of information, some ISPs and corporate networks do not offer news as a service at all, or it may come at a premium price.
Ping, traceroute, and whois are command-line utilities that help users see if a particular machine is reachable from another, troubleshoot connection problems, and find out information about a particular domain name's owner. These three are technically applications and not protocols, and they may be unavailable with certain OSs, difficult to run (as in Windows, where users must find a DOS prompt), or forbidden under certain security conditions. However, outgoing whois, ping, and traceroute utilities may be accessed via special web pages if they are unavailable locally.
SNMP collects information on all devices on a network for management purposes. Depending on the hosting and Internet access options used by the agribusiness, this valuable tool may be unavailable under ISP policy. IRC (Internet Relay Chat) is a service that lets users chat on special channels (topical areas similar to newsgroup subject lists). IRC may be unavailable through some ISPs or with some OSs due to security concerns. Typically, IRC is not a business application, and one-on-one messaging services such as AOL Instant Messenger are more powerful, more secure, and free.
SLIP and PPP are physical and data link layer protocols that allow machines to connect to a network as a node via modem, or to connect nodes or routers. Depending on equipment, ISP hosting software, and OS, these connection protocols may or may not be available. Since transporting IP packets is an OSI network layer activity, SLIP and PPP give remote nodes the underlying link over which that traffic can flow. PPP provides a more robust link than SLIP and also supports other protocols such as IPX (Internetwork Packet Exchange), a proprietary Novell product. Multilink PPP allows channel bonding, the ability to connect multiple links between systems on demand. Recall from the discussion in 4.9.3 that DHCP servers, proxy servers, and Internet access levels may be set up in a particular way by the ISP or server OS.
Continuing with the listings in Table 4-38, the access log is the next entry. The access log on a web server is saved in various ways (if at all) depending on the OS or web server software. If measurement and site statistics are important, it is important to find out beforehand how a hosting company, ISP, or web server manufacturer sets up the access log of viewers to the web site. Shell access refers to the ability of users to access the UNIX shell or other OS shell or operating system prompt. Users with shell access can run programs and scripts on web or network servers without using their local machine's processing capacity or software. Shell access is not allowed by many ISPs because it can represent a security threat, but if it is offered (possibly for a premium), the agribusiness would have the computing power and program library of a very powerful machine available. Finally, LDAP access refers to a specialized directory services specification of the IETF that allows an organization to have a "white-pages" style e-mail and telephone directory and provide roaming access to e-mail.
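Many web servers record visitors in the widely used Common Log Format; a hedged Python sketch (the log entry itself is fabricated) of extracting the requested page and status from one entry:

```python
import re

# Common Log Format: host ident authuser [date] "request" status bytes
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_entry(line):
    """Break one access log line into its named fields (or None)."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

entry = parse_entry(
    '10.0.0.5 - - [12/Mar/2000:14:02:07 -0500] '
    '"GET /products/catalog.html HTTP/1.0" 200 5120'
)
```

Counting such parsed entries by path or referrer is the raw material for the site statistics and measurement tools discussed earlier, which is why it matters how (and whether) the hosting arrangement saves the log.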
Table 4-38 is by no means an exhaustive listing of the Internet Protocol suite and related applications. Availability of each item can depend on how (and from whom) the firm obtains Internet access and web site hosting. If both functions are at the agribusiness premises (level four or higher Internet access and a self-hosted web site), the agribusiness will have responsibility for, and control over, some of the lesser-known parts of the IP protocol suite. However, the usefulness of IP to an organization does not end with OS applications and Internet utilities. It is now possible for an agribusiness to avoid the cost of expensive dedicated connections, complicated "value-added" data networking services, and expensive enhanced telecommunication services by using the Internet as the carrier for its own secure private data network.
4.9.7 VPNs and Convergent Applications
Virtual Private Networks (or VPNs) can be used by the agribusiness as a low cost and secure way to connect offices, home offices, traveling salespeople, and dealers or vendors in a unified voice and data network. Some VPNs are mainly private data networks that use the Internet as an inexpensive way to link computers together for fixed, nomadic, and mobile users. Other VPNs are true hypercommunication networks that offer full voice and data networking links among users of a firm's network, along with allowing communication with PSTN or Internet users off the network.
VPN is a confusing term. However, as Sheldon says, "the confusion has more to do with what to call VPNs as they evolve into new networking technologies" [Sheldon, 1999, p. 1051]. Dramatic decreases in costs provide the incentive for appropriately sized agribusinesses to learn more about VPNs. For example, Data Communications magazine found in 1997 that annual communications costs for a three-city network were $133 thousand with leased lines (dedicated circuits), $90 thousand with a frame relay VPN, and $38 thousand with an Internet VPN. First-year costs (including installation and specialized VPN encrypting devices) were $136 thousand, $111 thousand, and $54 thousand respectively [Cray, 1997, p. 49]. For much larger firms (over 4,000 remote users) with annual data network costs of $6.2 million, Ascend found that a data VPN could reduce expenditures to $3.4 million [Ascend, 1998]. However, savings associated with VPNs can be obliterated by the costs of deploying software to SOHO sites, configuring those sites, and other hidden up-front and operational costs [Salamone, 1999].
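The savings quoted from the Cray [1997] comparison can be restated as percentages; a small Python sketch using only the figures cited above:

```python
# Three-city network costs from the Data Communications comparison
# cited above (Cray, 1997), in thousands of dollars.
annual = {"leased lines": 133, "frame relay VPN": 90, "Internet VPN": 38}
first_year = {"leased lines": 136, "frame relay VPN": 111, "Internet VPN": 54}

def savings_vs_leased(costs):
    """Percentage saved against the leased-lines baseline, rounded."""
    base = costs["leased lines"]
    return {k: round(100 * (base - v) / base) for k, v in costs.items()}
```

On these figures an Internet VPN cuts annual circuit costs by roughly seventy percent relative to leased lines, which is why the hidden deployment and configuration costs noted by Salamone [1999] must be weighed before taking the headline savings at face value.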
Figures 4-58 through 4-60 show several stages of VPNs. Figure 4-58 shows the pre-VPN communications configuration for a Florida agribusiness with a headquarters location in Homestead, three in-state branch locations, and SOHO and traveling users. SOHO (Small Office Home Office) users may include executives, managers, and key IT personnel who work frequently (or mainly) from home, along with salespeople and other users who frequently travel nationally or internationally. These users currently use long-distance voice POTS lines to dial into the firm's private data network at HQ or a branch office. SOHO users may enter orders, check e-mail, use the intranet, or use database or other applications on the company network that have been judged too much of a security risk to run over Internet connections [Koehler et al., 1998].
Many agribusinesses may have dealers, wholesalers, or jobbers who sell their product and use dial-up access to exchange data with the agribusiness. In other cases, the agribusiness needs to use dial-up access to reach a vendor's secure private data network or to provide vendor access to the agribusiness' secure data network. One example is the Federal Express system, where clients get a computer (owned by FedEx) and use 800 or long-distance dial-up connections (often charged by the minute) to check the status of shipments, etc. Many large retailers (home stores, department stores, and grocery chains) have similar systems their agribusiness suppliers must use in order to be paid promptly or receive discounts.
A VPN such as the one shown in Figure 4-59 eliminates expensive dial-up long-distance connections between SOHO or remote users and the home office without sacrificing security. Note that such users have gained Internet service (used to support the data VPN) in Figure 4-59. Whether SOHO users get dial-up or dedicated Internet access (and the capacity of the connection) will depend on a cost-benefit analysis. The dealer-vendors have also gained Internet connections, so that the agribusiness' Extranet for its own customers and vendors can be properly linked, or so that customer or vendor Extranets may include the agribusiness.
Note that Figure 4-59 shows that (in addition to shedding the expensive long-distance dial-up connections), extremely expensive dedicated data circuits have also become unnecessary. In addition, the long-distance dedicated voice circuit (to the long-distance carrier's POP) has been replaced by IP telephony off the ISP POP. Making these changes can result in enormous savings. The dedicated data circuits among offices are particularly expensive for the agribusiness shown since they cross LATA lines (Figures 4-37 and 4-38).
A data VPN requires that the agribusiness use special edge devices called authentication servers. Authentication servers (together with other equipment) encrypt and packetize data and set up secure tunnels (virtual circuits). These virtual circuits replace expensive dedicated point-to-point or circuit-switched links (three in Figure 4-58) with private Internet connections. A greater capacity Internet connection will likely be needed between HQ and the ISP POP (shown as a bold arrow). Additionally, some of the branch offices (all shown with dedicated Internet connections) may need to establish greater capacity Internet access connections. Only a detailed analysis of bills and expected charges can answer whether changes should be made.
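The authentication side of what these edge devices do can be illustrated in greatly simplified form with a keyed hash. The sketch below uses a generic HMAC check, not any particular VPN product's algorithm, and the shared key is invented:

```python
import hmac
import hashlib

SHARED_KEY = b"pre-shared tunnel key"  # illustrative only

def seal(payload: bytes) -> bytes:
    """Append an HMAC tag so the far end can verify the packet
    was produced by a holder of the shared key and not altered."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(packet: bytes):
    """Return the payload if the tag checks out, else None."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

Real VPN tunnels combine such integrity checks with encryption of the payload itself; the point of the sketch is only that each end can reject traffic that did not come from a legitimate peer.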
Depending on the devices used, the migration from pre-VPN communications to a data VPN may not be noticed by users. Indeed, SOHO dial-up users will find it easier to use the new system (such as Figure 4-60) if they have always-on Internet access connections such as DSL. However, there can be important QOS issues, such as reliability, that put a damper on a complete migration from three separate connections (PSTN voice access, private data network access, and Internet access) into a single access connection. Before getting to the possible savings from migrating to voice over IP in addition to a data VPN (Internet, Intranet, and Extranet), it is important to touch on technical issues related to IP telephony.
The Internet can be used to transmit and receive packetized, digitized speech over its mixed public-private infrastructure. This is generally known as VOIP (Voice over IP). So far, the quality of sound and the latency and jitter inherent in using the Internet as a telephone access and transport network prevent its widespread use in business. However, as standards improve and prices fall, the Internet is becoming increasingly attractive at least for certain levels of voice communication. For example, dialpad.com (a free VOIP service) signed up over six million users for its free long-distance telephone service over the Internet during the first quarter of 2000. While the quality is not equal to the PSTN, it does approach the sound quality of some digital services. It is a free service since the firm gets revenue from banner ads that are displayed while users converse from computer to telephone.
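Jitter, the variation in packet arrival times, can be quantified simply. A Python sketch (with invented timestamps) of averaging how far voice packet interarrival gaps stray from their nominal spacing:

```python
def mean_jitter(arrival_times, interval):
    """Average deviation of packet interarrival gaps from the nominal
    interval (e.g. 20 ms voice frames), in the same time units.

    arrival_times: packet arrival timestamps in order of arrival.
    """
    deviations = [
        abs((b - a) - interval)
        for a, b in zip(arrival_times, arrival_times[1:])
    ]
    return sum(deviations) / len(deviations)
```

With evenly spaced arrivals the result is zero; packets bunched and stretched by Internet congestion push the figure up, and beyond a few tens of milliseconds the receiver's playout buffer can no longer hide the variation from the listener.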
Table 4-39 shows the main technologies used to provide telephone and enhanced telecommunications services via VPNs depending on the kind of Internet access connection and transport level virtual circuits used by the firm. Depending on application and CPE sophistication, the VPN can eliminate per call long-distance charges entirely, forward a salesperson's calls to their home computer and cell telephone automatically, and allow a home-based executive every communications service available at the home office.
VOFR requires a frame relay connection to access the voice network, typically over copper. ATM uses an ATM network to access the voice network or to transport telephone calls, usually over fiber optic cable. VOIP can use any type of Internet connection to carry calls. While all three technologies may be used in VPNs, it is important to note that frame relay and ATM may be used as private dedicated connections or for Internet access. The difference between a private VPN (which uses virtual circuits over a carrier network such as the frame relay cloud) and an IP VPN (which uses the Internet) is not covered in detail here. Voice and data traffic can travel over both types of VPN. However, QOS and reliability are more easily assured over carrier networks than through the Internet. Prices for carrier systems are higher as well.
AT&T lists five categories of VOIP, shown in Table 4-40 [Tower, 1999, Ch.9, p. 17]. The first is PC to PC VOIP. Specially equipped PCs using voice communications software can "call" other computers over the Internet and have near real-time or real-time conversations. Each PC must have a microphone and speaker (or better yet special earphone-mike headsets) as well as a sound card, compatible voice communication software, and Internet access. Users at each end must be online at the same time to make or receive calls, a requirement that makes it difficult for dial-up users to get calls unless the time is arranged in advance.
Next are PC to PSTN VOIP calls. These require compatible software and peripherals on the computer end as well as an Internet connection. On the other end, a gateway must connect the Internet to a CO on the PSTN. VOIP connections must be paid for on a wholesale basis and resold to the agribusiness, or the agribusiness may use a free calling service that underwrites Internet to PSTN connections through banner advertising.
One kind of single access connection (usually the least expensive) is a fully converged IP VPN as shown in Figure 4-60. The VPN shown uses PSTN telephone to PSTN telephone VOIP and a fully converged architecture. Here all local and long-distance calls travel as IP packets at both the access level and the transport level via Internet connections (or a carrier's private IP network). Such a service is not widely available currently, but some agribusinesses (those with SONET, T-3, or ATM service availability) are expected to be able to get it.
Figure 4-60 differs from Figure 4-59 in several ways. First, PSTN voice lines are replaced by Internet access since local and long-distance calling at all branches is done through the Internet connection. A single dedicated voice circuit for local PSTN calls is kept at the headquarters location. Under this configuration, a local call (no matter where a user is located) is a call that is defined as local in Homestead. For some providers, that would include the entire Southeast LATA so that calls from Homestead to Sebastian Inlet would be local. The Seminole County branch and South Highland branches would use long-distance circuits to reach what would normally be classified as local calls in their locations but use the Homestead PSTN link for calls in the Southeast LATA. Incoming calls throughout the entire company would go through LEC facilities in Homestead and travel through the Internet VPN to reach their destination. In theory, these changes would not be noticed by users, but in practice IP telephony is not yet a business class service capable of replacing LEC service from the point of view of most agribusiness managers. QOS problems are simply too large. However, technology is developing quickly in these areas.
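The routing rule described above can be sketched as a simple classification: a call is "local" only if its destination is local to Homestead, regardless of which branch places it. The function and the city list below are purely illustrative:

```python
# Places treated as local to Homestead in the example (for some
# providers, the entire Southeast LATA); illustrative subset only.
HOMESTEAD_LOCAL_AREA = {"Homestead", "Sebastian Inlet"}

def route_call(destination_city):
    """Classify an outbound call under the Figure 4-60 configuration.

    Every call exits the VPN through the Homestead PSTN link, so
    'local' means local to Homestead, not to the caller's branch.
    """
    if destination_city in HOMESTEAD_LOCAL_AREA:
        return "local via Homestead PSTN link"
    return "long-distance via PSTN LD provider"
```

Under this rule a Seminole County branch employee calling a number across the street pays long-distance rates, while a call to Sebastian Inlet from any branch is local, exactly the inversion of the usual geography that the paragraph above describes.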
Another way Figure 4-60 differs from 4-59 is that all PSTN calls for SOHO users are made over the Internet connection. Since voice traffic goes through the Internet connection, it is possible that a greater capacity Internet access connection will have to be purchased. While every branch office in Figure 4-60 has but one communications link to the outside world, the Internet connection, the agribusiness will write two checks in most cases. Most wireline Internet access requires a separate charge for the agribusiness to ISP POP and for the ISP to Internet NAP. One advantage of new fixed wireless providers is that only one charge is necessary.
Premises to premises VOIP does not replace local telephone service, but may be used in situations such as Figure 4-60, where all long-distance and interoffice calls travel over the VPN. Under this method, inter-office calls (within the agribusiness) do not incur toll charges or travel over the PSTN. Long-distance calls travel over the Internet to a long-distance provider (PSTN LD in Figure 4-60). From that provider they travel over the PSTN to reach a regular telephone. The definition of local calls may vary (as mentioned earlier) depending on the carrier.
The last entry in Table 4-40 is premises to network VOIP, which replaces access level connections (such as ISDN-PRI or dedicated T-1s) from the agribusiness to a LEC POP or an ILEC POP. Instead of being circuit-switched from the agribusiness through the local telephone company's CO facilities, telephone calls travel over the Internet access route. Premises to network services are likely to be divided between local telephone service and long-distance telephone service, traveling over the PSTN from the provider's POP.
The decision to migrate from three separate networks to a data VPN with voice on the side or an Internet-only VPN must be made based on costs, QOS considerations, available technology, and the type of firm. Moving from separate enhanced telecommunications (PSTN networks), private data networking, and Internet connections (as shown in Figure 4-58) to a VPN (as shown in Figures 4-59 and 4-60) requires that agribusinesses perform a cost-benefit analysis. When costs are restricted to the recurring costs of circuits alone, it is difficult not to adopt a VPN.
4.9.8 Security, Privacy, and Use Policies
Security was already discussed in the context of QOS and in 4.5.1 under protocols and standards. However, the Internet has its own set of security and privacy issues. This short discussion builds on the previous discussion of security as a dimension of QOS in Table 4-3 in 4.2.3.
Internet security is an ongoing concern that includes more than just security hardware or software. Current and future security hardware, software, consultants, programmers, and authorization scripts are additional elements that may be needed. A continued stream of human resources, new technology, and software upgrades are likely to be needed to keep the agribusiness securely connected to the Internet. Security precautions are something like insurance. For most agribusinesses, nothing serious will happen, especially if the Internet is not used to its full power.
The specifics of security are so complicated that all this discussion can do is identify the actors involved in six distinct areas of each agribusiness' Internet security drama. One of the worst mistakes that can be made is to concentrate only on publicized threats such as those posed by hackers and crackers while ignoring more likely threats posed by employees, customers, and competitors.
The motivations of some outside groups (hackers, crackers, samurai, and vandals) reveal the subtle variations in risk involved. Hackers argue that they are often incorrectly described as "malicious meddlers who try to discover sensitive information by poking around. Hence 'password hacker', 'network hacker'. The correct term for this sense is cracker" [The New Hacker's Dictionary, 2000, p. #hacker]. Hackers define themselves as people who:
enjoy exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary. 2. One who programs enthusiastically (even obsessively) or who enjoys programming rather than just theorizing about programming. [The New Hacker's Dictionary, 2000, #hacker]
However, such individuals would be harmless except that hackers subscribe to a hacker's ethic, which consists of:
1. The belief that information-sharing is a powerful positive good, and that it is an ethical duty of hackers to share their expertise by writing open-source and facilitating access to information and to computing resources wherever possible. 2. The belief that system-cracking for fun and exploration is ethically OK as long as the cracker commits no theft, vandalism, or breach of confidentiality. [The New Hacker's Dictionary, 2000, #hackerethic]
Hackers feel that exploring a system without permission is philosophically allowable, though they draw the line at outright criminal activity or evil intent. Crackers and vandals are the outside actors that can typically do the most damage to business. Both have malicious (even criminal) intent, and both will target a particular web site or business connected to the Internet for the sheer enjoyment of inflicting damage, making a profit, or causing embarrassment. Some crackers and vandals also attack based on political motives, such as to further radical environmental or animal rights causes. Samurai are hackers who hire out their services, for example to help businesses fortify their Internet security.
However, crackers and vandals may be encountered less frequently and do less damage than several groups of other actors such as customers, employees, vendors, and agents. Many insiders may have no ill intent whatsoever, but be capable of exposing the firm to more risk than an outsider. Especially dangerous can be ex-employees or agents (such as web designers, programmers, or other contract personnel) with a specific agenda.
From the point of view of agribusinesses, the distinction among these groups may seem unnecessary. However, understanding the possible threats helps the agribusiness understand the protective measures that are necessary. Some sites, such as those for larger firms or those that operate in politically sensitive environments are more at risk than the typical agribusiness. However, other individuals such as customers, vendors, shareholders, and employees are also potential victims of crackers.
Table 4-41 summarizes six security, privacy, and use areas that together provide a general overview of Internet security for agribusinesses. The possible dangers posed in each area (and the specifics of each) are too involved to list here. Furthermore, the DTE, DCE, OS, web software, and the staff combine to create a unique security posture for each agribusiness. Therefore, each of the six areas is a broad description of an issue that every agribusiness with an Internet presence must be prepared to deal with.
The first area of Internet security concerns the self-hosted or collocated website. Responsibility for this area is solely the agribusiness' unless it hires an outside security firm as a consultant or manager. A few threats that can occur to self-hosted or collocated web sites include denial of service attacks, site theft, hacking, cracking, and vandalism. The danger here is confined to material on the web site or rented server space itself.
Denial of service attacks are concerted efforts by an individual or group to "flood" the web site with traffic, making it unreachable to visitors. Site theft occurs when DNS entries are illicitly changed so that the agribusiness' domain name points to an unauthorized IP address, or when the contents of a web site disappear and are replaced by unauthorized content. Vandalism occurs when a web site is broken into and portions are erased or replaced.
If the agribusiness monitors its web site regularly, such problems can be noticed quickly. Site theft or vandalism are typically resolved quickly with the help of the ISP or NSP that hosts the DNS. However, site theft and vandalism, while small risks, are another reason a full backup of the site is essential. Denial of service attacks are rare except for larger sites, where federal and state authorities can typically step in and put an end to them.
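The regular monitoring recommended above can be partly automated. The sketch below is a minimal illustration, not a production monitoring tool: it compares the IP address a domain currently resolves to against a known-good value, flagging the kind of DNS change that signals site theft. The domain and addresses are hypothetical, and the resolver is passed in as a function so the check can also be run against a live lookup such as `socket.gethostbyname`.

```python
def check_site_theft(domain, expected_ip, resolve):
    """Return a warning string if `domain` no longer resolves to
    `expected_ip` (a possible sign of DNS-based site theft), else None.

    `resolve` is any callable mapping a hostname to an IP string,
    e.g. socket.gethostbyname for a live check.
    """
    try:
        observed = resolve(domain)
    except OSError:
        return f"ALERT: {domain} did not resolve at all"
    if observed != expected_ip:
        return (f"ALERT: {domain} resolves to {observed}, "
                f"expected {expected_ip} -- check DNS entries with the ISP")
    return None

# A fake resolver (a dict lookup) stands in for a live DNS query here.
fake_dns = {"citrus-example.com": "203.0.113.9"}
print(check_site_theft("citrus-example.com", "203.0.113.9", fake_dns.get) is None)  # prints True
```

Run on a schedule against the live DNS, a script like this lets the agribusiness notice a hijacked domain within hours rather than waiting for customer complaints.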
The second area applies when the website is hosted by a hosting company. The same security concerns just mentioned still apply, but the hosting firm bears more of the responsibility than the agribusiness. However, the agribusiness may have no control over, or knowledge of, the security policy of the ISP. When shopping for a hosting company, it is a good idea to inquire about security policies and programs if the agribusiness plans to include sensitive material on the site. If a disgruntled employee of the hosting company wants to get even, it may be at the expense of an innocent client's web site. Constant monitoring of the site can help nip problems in the bud.
The third area concerns the protection of sensitive content: preventing unauthorized access to subscriber material, prices, or wholesale-only parts of the site. Hackers may feel they have the right to browse any information sold to subscribers from the site and share it with others out of a misguided desire to make information on the Internet free. Crackers may offer to sell information to competitors or sell it themselves on the Internet, enter false orders, or otherwise wreak havoc. Customers themselves may download material and attempt to deal with other suppliers using the prices, or demand discounts if they find evidence that others are getting better deals. Employees may accidentally put material on the web that should not be there. Many of these risks apply only to agribusinesses with B2B e-commerce sites or subscriber material. Remedies include properly programming and testing the site beforehand and regularly reviewing password and activity logs. Again, the dangers in this area relate to material on the web site itself, but specifically material not intended to be widely seen.
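The review of password and activity logs mentioned above lends itself to simple scripting. The fragment below is an illustrative sketch only, not tied to any particular server's log format: it assumes a hypothetical one-line-per-event format of `timestamp user ip status` and flags source IP addresses with repeated login failures, a common sign of password guessing against subscriber areas.

```python
from collections import Counter

def flag_repeated_failures(log_lines, threshold=3):
    """Count FAIL entries per source IP in a simple hypothetical
    'timestamp user ip status' log format; return the IPs at or
    above `threshold`, sorted by failure count (highest first)."""
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 4 and parts[3] == "FAIL":
            failures[parts[2]] += 1
    return [ip for ip, n in failures.most_common() if n >= threshold]

log = [
    "2000-03-01T09:00 alice 192.0.2.10 OK",
    "2000-03-01T09:01 bob 198.51.100.5 FAIL",
    "2000-03-01T09:02 bob 198.51.100.5 FAIL",
    "2000-03-01T09:03 bob 198.51.100.5 FAIL",
]
print(flag_repeated_failures(log))  # ['198.51.100.5']
```

A real web or authentication server's log format will differ; only the parsing line would need to change, while the counting logic stays the same.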
When the web site is hosted elsewhere, the responsibility for protecting restricted parts of the site may be shared among the hosting firm, writers of programming code (such as authentication software), web designers, and the agribusiness. Responsibilities include protecting sensitive content from competitors or hackers and preventing unauthorized access to subscriber material. For some agribusinesses, open access to prices or wholesale-only parts of the site could harm relationships with existing customers or prevent new customers from buying. Passwords used by site designers, programmers, or former customers or employees should be revoked once they no longer have access rights. It may be difficult if not impossible to learn of some violations, so be careful about what is put on the company website.
Another security area is the security of self-hosted Internet domains and IPs. Agribusinesses with only a web presence and lower-level Internet access will not have to worry about this issue much, but those with computers at the business that can connect directly to the Internet do. At stake are the protection of all machines connected to the Internet, including e-mail hosts and databases, the operation of firewalls and proxy servers, regular updating of anti-virus software, and patches for e-mail and browser software.
The last security issue is security, privacy, and use policies governing company and employee information and communications. These include password policies, limitations on copying and transfer of information, e-mail security and use policies, web surfing restrictions, and other use policies. Varian comments that "The big problem with security is not the hardware or the software, but the user. The users just haven't developed the habits that are going to be appropriate for living in that electronic environment" [Varian, 1996, p. 45].
Two issues are at stake concerning employee security and use. First, the security of the company's system must be protected from accidental or intentional compromise. Passwords of dismissed employees should be immediately revoked. Since the vast majority of employee security problems are accidental, the network should be rigorously safeguarded from accidents. Second, the manner in which employees are expected to behave while online can affect communication with customers and resource allocation. The consequences of violating policies and the reasons for their importance should be made plain. Internal or external circulation of material that might be considered sexually explicit or racially discriminatory could draw the agribusiness into unwanted legal proceedings, since e-mail can be preserved indefinitely and produced as evidence in harassment claims. Employees should also be told what appropriate e-mail and Internet use is, and whether their use carries any expectation of privacy.
4.9.9 E-Agribusiness, E-Commerce, and Customer Service
One of the most active areas of the Internet and hypercommunications in general is e-commerce. Estimates of the dollar value of e-commerce differ dramatically. In March 2000, the U.S. Census Bureau reported that retail e-commerce sales in the fourth quarter of 1999 reached $5.3 billion [U.S. Census Bureau, Monthly Retail Trade Survey, March 2, 2000]. While the Census Bureau admitted that this amount represented a scant 0.64 percent of total retail sales for the period (estimated at $821 billion), by some estimates e-commerce revenues grew to four times what they were a year earlier [Nua Internet Survey, 110(1), January 24, 2000]. Forrester Research estimated that the value of U.S. business-to-business e-commerce alone stood at $109 billion in 1999, with the total expected to climb to $1.3 trillion for 2003 for business-to-business and $108 billion for retail [Nua, 2000].
Depending on the definition, e-commerce encompasses electronic trade of all kinds resulting from web sites, online auctions, Extranets, and other sources. The definition is becoming harder to pin down as firms merge web sites and IP CTI call center operations into simultaneous shop-by-web and shop-by-telephone services. Whatever the precise definition, the origins of e-commerce strategy arose from the e-business model first suggested by IBM in 1997. IBM's official definition of e-business is "The transformation of key business processes through the use of Internet technology" [IBM, 1999]. These business processes affect two flavors of e-commerce: business-to-consumer e-commerce and business-to-business (B2B) e-commerce.
E-commerce covers a great deal of territory. The economic issues that formed the source of the information economy were covered in Chapter 2, while some economic and technical aspects of hypercommunications were covered in Chapter 3. Elsewhere in Chapter 4, the discussion of what hypercommunications are brings in more of the e-commerce picture. While this short discussion could expand into a chapter of its own concerning the details of how specific hypercommunication services and technologies drive e-commerce, it makes more sense to cover the broad philosophy here. Many of the details have been covered by the chapters just mentioned.
According to IBM, the focus of e-business is "on business, not technology". However, new technologies probably drive e-business just as much as e-business drives technology. Indeed, since the ITU maintains that "today technology is in search of applications", innovative e-agribusinesses that discover e-commerce applications of new technologies could profit handsomely [ITU, 1995, p. 8]. Nonetheless, IBM takes the business viewpoint rather than the engineer's when it names the four stages of an e-business cycle.
The first stage of IBM's e-business cycle is "to transform core business processes" [IBM, 1999]. This means that an agribusiness (at the very least) must computerize and automate transactions with its largest buyers and its largest vendors. On the B2B side, core business tasks are included within two philosophies, EDI (Electronic Data Interchange) and SCM (Supply-Chain Management).
EDI encompasses data integration, application integration, and middleware integration. The data to be integrated include such things as purchase orders, requisitions, and invoices through online processes. Application software can be integrated so separate companies can work in concert through common sales tracking, inventory management, accounts receivable and payable, and general accounting. Middleware allows one business' software to communicate with another business'. EDI is most cheaply implemented through an Internet VPN, though performance issues can outweigh cost savings. Private data networks such as managed frame relay may be preferred. However, EDI offers limited information sharing and thus does not encourage e-business integration as well as SCM does [3com, 2000].
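The data-integration side of EDI can be made concrete with a small sketch. The code below builds a purchase order as a flat, delimited record loosely modeled on X12-style segments; the segment names, field layout, and sample trading partner are simplified illustrations, not a conformant X12 document.

```python
def build_purchase_order(po_number, buyer, lines):
    """Serialize a purchase order into simplified, X12-flavored
    segments: '*' separates fields, '~' terminates each segment.
    `lines` is a list of (item_code, quantity, unit_price) tuples."""
    segments = [f"BEG*{po_number}*{buyer}"]
    for item, qty, price in lines:
        segments.append(f"PO1*{item}*{qty}*{price:.2f}")
    segments.append(f"CTT*{len(lines)}")  # line-item count, used for checking
    return "~".join(segments) + "~"

doc = build_purchase_order("4001", "SUNSHINE-GROVES",
                           [("FERT-50LB", 20, 12.50), ("PVC-PIPE-10", 100, 3.25)])
print(doc)
# BEG*4001*SUNSHINE-GROVES~PO1*FERT-50LB*20*12.50~PO1*PVC-PIPE-10*100*3.25~CTT*2~
```

The point of the exercise is that once both trading partners agree on such a machine-readable layout, purchase orders, requisitions, and invoices can flow between their order-entry and accounting applications without rekeying, which is what EDI's data and application integration amount to in practice.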
SCM (Supply-Chain Management) is an especially important B2B philosophy for agribusiness. SCM approaches seek to get parties at all levels of the marketing chain (retailing, raw materials, manufacturing, processing, distributors, wholesalers, as well as transportation and shipping) involved through a common B2B network. SCM involves the "planning and control of the flow of goods and services, information, and money electronically back and forth through the supply chain" [3com, 2000, p. 15]. SCM systems may require elaborate security layers and may therefore rely less on VPN Extranets and more on private data networking technologies.
SCM can be especially important to B2B agribusiness applications for several reasons. First, the high perishability and short market windows of many agricultural products, such as produce, make rapid communication and coordination from farm to retail important. Second, the tendency for many marketing functions to lie between the farmer and the consumer means there can be more opportunity for miscommunication, error, and financial default; SCM can help prevent such problems or, when they do occur, help to resolve them. Third, the ability to incorporate SCM into agricultural marketing may improve efficiencies in the structure of the marketing chain, possibly cutting levels with better information and allowing consumers to pay less or get more.
A fourth reason SCM is important to agribusiness comes from the importance of farm to consumer information to accompany products through the distribution chain for food safety or other reasons. For example, consumers may require information about the source and history of organic produce. Batch information can help products to be pulled rapidly and investigations completed more quickly in food safety scares. Another example concerns the use of genetically engineered crops. Finally, SCM is often required for small organizations in order to do business with large buyers. For example, if a grower wants to sell nursery plants to Wal-Mart or other large chains directly, he may have to do business the buyer's way through an SCM.
On the retail e-commerce side, transforming core business processes requires that e-agribusinesses realize that, in addition to providing a good, they are providing a service, the e-commerce experience. Customer satisfaction is a function of the quality of the good or services the e-agribusiness sells, pricing, and the quality of the e-commerce application (service) that consummates the sale. One way of thinking about this is through the disconfirmation model where the customer's evaluation of service quality is a function of customer experience minus expectations [Iacobucci, Grayson, and Ostrom, 1994].
An e-business extension of disconfirmation is that customers will be satisfied, and likely to return, if their online purchase experience exceeds their expectations of buying online. Even if the good or service is of excellent quality, customers may still be more disappointed than satisfied. Disappointment can occur even with a quality product if orders arrive late, are never received, or are billed twice, if the web site crashes, or if consumers get the BSOD (blue screen of death) during the ordering process. The agribusiness may be powerless to affect consumer expectations, since consumers may expect the same kind of rapid shipping response or shopping cart program used by amazon.com or other firms they have online experience with. However, it is possible that the comparison might be favorable, given Andersen Consulting's finding that 25 percent of all attempts to purchase an item failed on the 100 top retail web sites during the Christmas 1999 season [Nua, 2000].
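The disconfirmation logic above reduces to a one-line calculation: perceived service quality is experience minus expectation, so a technically excellent product can still score negatively if the online ordering experience falls short. The rating scale and scores below are hypothetical, chosen only to illustrate the arithmetic.

```python
def disconfirmation(experience, expectation):
    """Perceived service quality = experience - expectation.
    Positive values mean satisfaction; negative, disappointment."""
    return experience - expectation

# Hypothetical 1-10 ratings: a fine product, but the site crashed mid-order.
product_quality = disconfirmation(experience=9, expectation=8)   # satisfied
ordering_process = disconfirmation(experience=4, expectation=8)  # disappointed
print(product_quality, ordering_process)  # 1 -4
```

The asymmetry is the managerial point: the e-agribusiness controls its own experience scores, but expectations are set externally by the best sites the customer has already used.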
The second stage in IBM's e-business cycle is "building flexible, expandable e-business applications" [IBM, 1999]. This was touched on in Section 4.9.4 (web design, programming, and maintenance). For most agribusinesses, at least until their e-agribusiness efforts succeed, the applications and programming used in e-commerce will be developed by outside programmers. As with hiring a web designer or selecting an ISP, agribusiness managers should ask around for references and always view as many working samples of an e-commerce programmer's sites as possible.
Flexibility and expandability of applications refer to more than programming. Agribusinesses have to make sure their web site design and e-commerce applications are flexible enough to handle their needs. A shopping cart function may have to be flexible enough to allow overseas orders, orders from one customer shipped to multiple addresses (such as gift fruit or flowers), and other expected (or unexpected) variations from the typical single order. One florist's shopping cart application required customers ordering multiple bouquets (as much as forty percent of its business) to enter their name, address, and credit card number repeatedly rather than once. Internet shoppers may have tighter time constraints than other customers, so they are likely to be less patient with inflexible ordering schemes. Expandability concerns whether the e-commerce system can expand to handle more individual items, a greater number of orders, retail and wholesale orders, password protection, etc.
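The florist example suggests what "flexible" means at the design level: customer details should be captured once, while each line item carries its own ship-to address. The sketch below is a minimal illustration of that design choice, not a complete shopping cart; the customer record and addresses are made up.

```python
class Cart:
    """Minimal cart: one customer record, per-item shipping addresses."""

    def __init__(self, customer):
        self.customer = customer  # entered once, reused for every item
        self.items = []           # list of (description, ship_to) pairs

    def add(self, description, ship_to=None):
        # Default to the customer's own address when no gift address is given.
        self.items.append((description, ship_to or self.customer["address"]))

cart = Cart({"name": "J. Grower", "address": "123 Grove Rd, Lakeland FL"})
cart.add("Rose bouquet", ship_to="45 Oak St, Tampa FL")   # gift order
cart.add("Rose bouquet", ship_to="9 Pine Ave, Ocala FL")  # second gift, no re-entry
cart.add("Citrus sampler")                                # ships to the customer
print(len(cart.items))  # 3
```

Under this design the florist's forty-percent case costs the customer one extra address per bouquet instead of a full re-entry of name, billing address, and card number.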
The third stage is "running a scalable, available, safe environment" [IBM, 1999]. This recalls the 25 percent failure rate found by Andersen Consulting, which suggested that "sites crashed, order forms corrupted, and goods never arrived" [Nua, 2000]. In addition to the obvious implications for the web site's Internet connection, this stage of the e-business cycle also concerns security, privacy, and how quickly the e-commerce site could expand should it prove more popular than expected.
The fourth stage is "leveraging knowledge and information" the business has "gained through e-business systems" [IBM, 1999]. There can be an enormous amount of information about what products sell better, to whom, as well as sources of B2B strategies and tactics. Transaction logs need to be examined along with web site management statistics with an eye towards using the information they contain to refine the e-commerce program.
4.9.10 New Media: Broadcast and Content Delivery
For the sake of consistency, new media, Internet broadcast and content delivery services and technologies need to be mentioned. However, there is little new material to cover. The main point of interest is the ability of agribusinesses to produce and unicast, broadcast, or multicast their own text, audio, and video content. There are also new opportunities to tie in with traditional media outlets (and their web presences) through traditional advertising, web site promotion, and highly targeted banner ads.
Until the Internet, broadcast, multicast, and other content delivery technologies meant traditional advertising-supported broadcasting and publications, or possibly unit-priced products such as pay-per-view TV programs. Because of the comparatively low cost of production and transmission, new media allow almost any person or business to broadcast or multicast delayed, delayed interactive, real-time, or real-time interactive content over the Internet. While radio and television broadcasters have a limited supply of spectral bandwidth, virtual broadcasters may buy as much as they need according to a hierarchical set of cost constraints that depend on operational scale. Miles [1998] provides a webcasting-specific definition of bandwidth as a:
measure of bits per second or transmission capacity of data sent over a particular wire, cable, satellite, fiber-optic, cable, interface, or bus. More bandwidth is needed to send faster and more complex data and assure accurate and real-time delivery. Thus, audio and video, datacasting, and webcasting require more bandwidth due to the complexity and swiftness of changes than does ordinary text or phone communications. So, the larger the bandwidth, the greater the quality and capacity of voice, video, or data. Bandwidth also refers to the range of frequencies that can be passed over a given channel. [Miles, 1998, p. 388]
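Miles's definition can be made concrete with a small calculation: on an ideal link, transfer time is simply size in bits divided by bandwidth in bits per second, which is why richer media demand more bandwidth for real-time delivery. The file size and connection speeds below are illustrative round numbers, not measurements.

```python
def transfer_seconds(size_bytes, bandwidth_bps):
    """Seconds to move `size_bytes` over a link of `bandwidth_bps`
    bits per second (ideal link: no protocol overhead, no congestion)."""
    return size_bytes * 8 / bandwidth_bps

one_minute_audio = 480_000  # roughly one minute at a 64 kbps encoding, in bytes
print(round(transfer_seconds(one_minute_audio, 56_000)))     # ~69 s over a 56k modem
print(round(transfer_seconds(one_minute_audio, 1_500_000)))  # ~3 s over a T1
```

The same minute of audio that a T1 delivers faster than real time takes longer than its own playing length over a modem, which is the practical meaning of "more bandwidth is needed to assure accurate and real-time delivery."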
Unlike the world of TV and radio broadcasting, the almost unregulated frontier of cyber broadcasting retains not even the vestige of notions such as "equal time" or "equal access". The agribusiness may use multimedia technologies to assume the roles of producer, advertiser, company trainer, and PR person.
The changes hypercommunication brings (through the Internet) to the traditional broadcast and content delivery market are summarized in Table 4-42.
Traditionally, the only way agribusinesses could get TV, radio, or newspaper attention was to buy an advertisement or try to place a news story. The interaction between an agribusiness and traditional media vehicles has been reshaped by endless multimedia possibilities. Agribusinesses can use web site promotion tools and new kinds of advertising strategies (such as banner ads) to put their messages before the Internet audiences of the traditional media, as shown in Table 4-42. However, the value of the Internet goes beyond simply extending traditional broadcast and content delivery services to the new media.
Agribusinesses can use multimedia broadcast and content delivery over the Internet to deliver content they produce themselves as Table 4-43 indicates.
Most of the list in Table 4-43 has been explained and discussed in Chapters 3 and 4. Agribusinesses can create and distribute their own multimedia content directly or through their web sites. With multicasting, instead of sending a separate copy of each message and packet to each receiver, one message goes out and is replicated by the network only where the paths to group members diverge. Hence, the agribusiness can send relatively large amounts of content over its own Internet connection only once, yet each member of the multicast group receives the entire contents. Multicasting requires a specialized infrastructure called the Mbone (Multicast backbone) that uses mrouters capable of transmitting multicast packets; specialized software and protocols may also be necessary. For these reasons, not all ISPs are capable of supporting multicast, so the agribusiness must find one that can. In addition, viewers' machines (or other terminals) must be equipped with software that can display the programming.
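One low-level detail behind the Mbone is worth making concrete: IP multicast uses a reserved block of group addresses (224.0.0.0 through 239.255.255.255, i.e., 224.0.0.0/4), and a sender simply transmits ordinary UDP datagrams to such a group address while mrouters handle the replication. The sketch below validates a group address; the socket calls shown in the comment are the standard sending pattern but are not exercised here, since they require a multicast-capable network. The group address and port are illustrative.

```python
import ipaddress

def is_multicast_group(addr):
    """True if `addr` falls in the IPv4 multicast range 224.0.0.0/4,
    the only addresses an mrouter will replicate to group members."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("224.0.0.0/4")

print(is_multicast_group("239.1.1.1"), is_multicast_group("192.0.2.1"))  # True False

# A sender would then transmit ordinary UDP datagrams to the group, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
#   sock.sendto(payload, ("239.1.1.1", 5000))
```

From the sender's side, then, multicast costs no more than a single unicast stream; the bandwidth savings come entirely from replication inside the network.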
As agribusinesses prepare new media content, they are creating intellectual property. Peters describes a "value chain" of "productive relationships between creators and users of intellectual property." The first element is the creators or authors of the new media content. They may be employed or hired by the second element, the sellers or publishers of such content, such as an agribusiness. Typically, as part of the creation process, the agribusiness will purchase rights to air, transmit, and copy the material through the third element, intermediaries. Intermediaries may be mere carriers, such as ISPs, or partners who help distribute the material for a portion of the proceeds. Finally, buyers (such as libraries, other firms, or individuals) and end users (readers) complete the value chain [Peters, 1996, p. 139]. Over the Internet, there is seldom a guarantee that end users or buyers will not copy and exchange the material themselves, putting a premium on recovering production costs with the first distribution.
This section has hardly touched on all the possibilities new media hold for agribusinesses. However, most of the advice given concerning other Internet services and technologies applies here. Specifically, content production is another make-or-buy decision that the agribusiness must evaluate after doing research about its own needs. An in-house production effort may require the purchase of expensive equipment and the hiring of experienced people. Hiring from the outside should be done with caution and with research into the past results the new media production firm has achieved.
Now that the chapter has explored what hypercommunications are in detail, it is time to offer an executive summary of what has been found that is of use to Florida agribusinesses. Other important material will be revisited in Chapter 6 when specific agribusiness needs and current market trends are covered.
The best way to explain what hypercommunication services and technologies are is by reviewing the chapter's findings, summarized in Figure 4-61 below. The seven layers of the OSI model are important regarding technical issues of private data networking and the Internet. However, for most agribusinesses now, the QOS levels (shown together with application level or value-added services in Figure 4-61) represent the best way to characterize the choices available to Florida agribusinesses.
The bottom of the figure represents the local, physical level that contains the agribusiness' own local communications network and equipment. Most agribusinesses have some computer equipment that may be used for private data networking or the Internet and telephone equipment used to connect with the PSTN. The larger an agribusiness is, the more dependent on technology it is, and the more it has to communicate over long distances, the sooner it is likely to try to converge voice, Internet, and data into a single hypercommunications network. On the agribusiness premises, convergence means that its currently separate voice, Internet, and data equipment and currently separate conduit will evolve into a unified whole.
However, even if technology allowed smaller agribusinesses to unify their networks and even if the CPE needed to do the job was available, a high-speed connection to access the advanced hypercommunication network of the future would be needed. In many places in Florida, this last mile connection is not yet able to handle convergence inexpensively if at all.
There are several reasons the promise of the future is hindered by the reality of the present. The infrastructure that connects a communications provider's POP to the agribusiness location varies from one location to another. Currently, wireline service may rely on the copper loop from the ILEC, a hybrid fiber-coax mix from the cableco, or a direct connection to a fiber optic network. Wireless access can be fixed, nomadic, or mobile, depending on the movement of the user. Of the three kinds of wireless access, only fixed terrestrial is likely to compete sufficiently with the wireline infrastructure. However, most providers (wireline or wireless) have not worked out all the kinks even in urban areas to provide a single link to businesses. In rural areas, the situation is even less developed.
In the future, a particular agribusiness may be able to obtain high-speed network access from competing providers and services for each of these four sources. At present, many agribusinesses can obtain high-speed dedicated digital or circuit-switched digital access over copper from a single provider, the ILEC. Larger agribusinesses with offices located in urbanized areas may have more providers (ALECs) willing to serve them by reselling the ILEC's copper loop or by using a fiber network that bypasses the local copper loop. Suburban locations may have cable or DSL access. However, most of these access level services are sold separately for Internet, data, and telephony.
Once agribusinesses can obtain a single high-speed access level connection to transport level services (PSTN, ATM, and packet-switched networks), they will benefit from converged hypercommunication networks. For now, transport level services are available and affordable only to large, strategically located agribusinesses. As communication needs change, technology improves, and costs fall, transport-level services such as ATM should become more available and more demanded by Florida agribusinesses.
Application-level services or value-added services are currently available, at least at low speeds, to most agribusinesses. New kinds of application level and value-added services will increase the benefits of high-speed hypercommunication networking. Even if high-speed access to intelligent networks were available to all agribusinesses today, there would still have to be demonstrable business reasons to adopt convergence technologies. It may be expected that the earliest agribusinesses to foresee innovative uses of hypercommunications will achieve supernormal profits from their use. This, in turn, will entice others to follow. Just as with other technological changes, those who are slow to act may be left behind.
Now that a picture of what hypercommunication services and technologies are has been painted, and some of the most promising ones for agribusinesses have been identified, the job is not over. Next, it is important to consider what role location will play, along with taxes and other government policies, in allowing hypercommunications to reach rural locations.