Chapter 2

Foundations of the Information Economy: Communication, Technology, and Information

IT (Information Technology) producing industries (i.e. producers of computer and communications hardware, software, and services) . . . play a strategic role in the growth process. Between 1995 and 1998, these IT-producers, while accounting for only about 8 percent of U.S. GDP, contributed on average 35 percent of the nation's real economic growth.

In 1996 and 1997 . . . falling prices in IT-producing industries brought down overall inflation by an average of 0.7 percentage points, contributing to the remarkable ability of the U.S. economy to control inflation and keep interest rates low in a period of historically low unemployment.

IT industries have achieved extraordinary productivity gains. During 1990 to 1997, IT-producing industries experienced robust 10.4 percent average annual growth in Gross Product Originating, or value added, per worker (GPO/W). In the goods-producing sub-group of the IT-producing sector, GPO/W grew at the extraordinary rate of 23.9 percent. As a result, GPO/W for the total private non-farm economy rose at a 1.4 percent rate, despite slow 0.5 percent growth in non-IT-producing industries.

By 2006, almost half of the U.S. workforce will be employed by industries that are either major producers or intensive users of information technology products and services. [United States Department of Commerce, "Executive Summary", The Emerging Digital Economy II, June 1999]

Chapters 2 and 3 together address why networked hypercommunications have emerged from the digital information economy just described. Chapter 2 explores three interactive foundations (communication, information, and technology) of the information economy. It ends with examples of a reality gap between technical hype and economic likelihood in the new economy, a gap that also depends on the characteristics of networks. However, an understanding of networks, the technical infrastructure and economic infostructure of the information economy, must wait until Chapter 3.

2.1 Introduction: the Continuing Relevance of Economics

Chapter 2 uses economics to develop three foundations of the information economy: communications, technology, and information. As a science, economics is often defined as the study of how society allocates scarce resources to meet humankind's unlimited wants. Economics involves core concepts such as constraints, relative prices, incidence of taxation, symmetry of regulation, and market power. While economics originated as "political economy" in industrial age Europe, the field's core is not out of date in a global information economy. However, the information and service orientation of the international marketplace, combined with business demand for practical inventions amid the hype of high-tech, has led to an increasing diversity of economic thought. It is argued that the times necessitate a re-examination of the artificial boundary between the order of conventional economics and the chaotic complexity of the marketplace [See Giarini and Stahel, 1989, p. 121]. At the same time, Liebowitz and Margolis warn that economists must be wary when:

So taken are we with these new technologies that we tend to treat these new inventions as sui generis, so different in essentials that we cannot even speak of them in the same terms we have used in the past. [Liebowitz and Margolis, 1995, p. 1]

To those who view economics as a static, mathematically verbose creature with rare applicability outside academic problems from the industrial age, the hypercommunications market is a trying testing ground for economic theory. It is said in the popular, financial, and trade media that converging communications technologies are so new, fast changing, and different from anything ever known before, that they defy standard economic analysis.

According to this view, using economics to study agribusiness hypercommunications is doubly irrelevant. First, some argue that conventional economics has lost its relevance to the information economy. Second, some argue that the information economy is not relevant to agriculture. For example, of the first argument Rawlins writes in 1992:

Classical economic theory is largely irrelevant to the early stages of a new information industry. Economics assumes that resources are finite and that there is enough time for markets to reach stability. Three things are wrong with this picture: information is not finite, there is no single stable point--there are many, and there is little time to reach stability before there is another major change. [Rawlins, 1992, node 29]

This argument appears to leave room for economics in later stages of the information industry, but an anti-economics vaccine prevents "later" from happening because exponential improvements in synergistic technologies never end; they lead only to new iterations of increasing growth. Perhaps (even in the new information economy) economics could at least serve in a supporting role via capital formation and financial market theories. However, again economics appears to be passé, because while "capital will remain important as a risk softener", knowledge "has become more important to continuous improvement," while financial markets "matter less and less to the economy" [Rawlins, 1992, node 29].

On the second irrelevancy, Rawlins, a computer scientist, expresses the popular view that standard economics (a synonym for classical in his writing) is relevant only to sectors such as agriculture that are separate from the information economy.

Standard economics applies to finite-resource markets like agriculture, mining, utilities, and bulk-goods. Such economics has little to say about information markets like communications, computers, pharmaceuticals, and bioengineering. These markets require a large initial investment for design and tooling, but enormous price reductions with increasing market growth. This growth is further compounded by positive feedback: with increasing market growth the production process gets more efficient, therefore returns increase. [Rawlins, 1992, node 29]

Apparently, conventional economics is useless in the brave new world of the information economy but still matters on the farm or in the food processing plant. Somewhat paradoxically, the information economy's power to alter economics does not extend to the agricultural sector. Presumably, this is because phenomena such as large initial investment in research, price decreases with increasing market growth, and positive feedback occur in Silicon Valley, but not in the San Joaquin Valley. New diagnostic and application techniques [Khanna, 1999] are but one suggestion of how high technology benefits production agriculture. Biotechnology, smart foods, and information about the safety, nutritional, and organic characteristics of food are examples of the importance of knowledge and information to agriculture as a whole. There are many more examples, such as the information and communication needs of horizontally and vertically integrated agribusiness firms.

The discussion of the relevance of standard or conventional economics has ushered in a vocabulary debate within economics itself about what to call the information economy and how to practice a more relevant economics. To some economists and many Wall Street analysts, a new, weightless economy where information goods and knowledge services are produced and traded via technology networks has already replaced the traditional economy. Thus, a "new" economics [Kelly, 1998] or a "weightless" economics [Kwah, 1996, 1997; Coyle, 1997; Cameron, 1998] naturally replaces conventional economics. However, other economists suggest that the information economy is a "network" economy that requires extension of traditional theory to new problems [Shapiro and Varian, 1998].

A body of economic thought is developing on new information age issues ranging from Internet economics to the economic role of technological change. Lamberton writes that economics will have to change.

The structural and behavioral changes conveyed by the term Information Age require the economist to leave the shelter of his Ouspenkian 'perpetual now'. The economics that survives will no doubt be less amenable to mathematical precision, and its policy counterpart will need to be more tolerant of the role of judgement. [Lamberton, 1996, p. xiv]

However, widespread use of IT and hypercommunications is helping to erase many of the reasons that the market does not correspond to economic theory. Perhaps (armed with better information) it is the economy that is catching up to economic theory, instead of economists (armed with new economics) who are catching up to the economy. In a communication-driven information economy, lags and adjustment periods are shorter and information and productivity are enhanced, so the market corresponds better to theory.

Few would dispute that the view of the typical firm in the information economy has changed from the industrial age economic model. Until recently, many economic problems were cast from the point of view of a factory manager who produced goods in a manufacturing plant. Now, similar and different economic problems are cast from the point of view of an entrepreneur who produces services in an information facility. While many agribusiness problems are much like those of the classical factory manager, agribusiness is increasingly confronted with the high-tech entrepreneur's problems as well.

The technician-entrepreneur at the helm of a high-tech firm needs to respond quickly to a constantly changing environment. As an individual, the entrepreneur's focus may be fixed on technology rather than directly on consumer needs. Every technical detail of an operation may be etched in such a CEO's strategic thought, to the exclusion of tactical ideas about finance, marketing, or hypercommunications. Such an entrepreneur may be striving for greater technical efficiency without adequately considering price, demand, or other economic variables. It is tempting for firms operating in the information economy (agribusinesses included) to imagine that technical expertise alone is most important.

However, firms in the information economy still meet competitors in markets as they did in the industrial age. Furthermore, economic theory typically tries to predict market rather than individual behavior. Interestingly, in markets for all kinds of goods and services worldwide, relative prices still seem to matter. An agribusiness manager may not understand the hypercommunications market, and a hypercommunications entrepreneur may have no clue about the citrus market, but both understand their bottom lines. For this reason, in spite of the contention that it cannot keep up with an economy that is increasingly based on communications, technology, and information, economics has hardly lost its relevance. Economics may be more relevant to businesses whose objectives have been forever altered by the new realities of the information economy. New, fast changing, and different as hypercommunications are, old fashioned economics is at work in the form of constraints, relative prices, regulation, profits, costs, and market power within the information economy.

Throughout the information economy (especially in hypercommunications), the views of engineers and computer scientists have resulted in new technologies, promising infant products, and exponentially growing high technology firms. This chapter will highlight technical aspects of production that are necessary for operation in the information economy. However, economic and marketing perspectives are also covered because the non-technical side is sometimes overlooked when the discussion centers on new hypercommunication technologies. The economic and marketing perspectives are essential to firms hoping to understand how hypercommunications fit in with existing business strategies. No single view is sufficient.

Chapter 2 opens with three fundamental concepts that provide the origins for the information economy and serve as the foundation of hypercommunication networks. The first three sections present these concepts: communication (2.2), technology (2.3), and information (2.4). Rather than use narrow definitions, broad conceptualizations identify the inherently economic context of the information economy and the dominant role of hypercommunications. Then, section 2.5 examines limitations of what has been popularly described as the unlimited cyber frontier. It will be seen that the information economy does not lack constraints. Instead, it has different limits than the traditional manufacturing economy. These economic, behavioral, financial, and technical constraints set the tenor of the information economy and rein in super-optimistic predictions about new hypercommunication services and technologies. Section 2.6 is a short summary. Chapter 3 will cover the unique economic and technical properties of networks and relate them back to the limitless cyber frontier notion. Taken together, Chapters 2 and 3 provide an answer to why hypercommunications are economically and technically important to agribusiness.

2.2 Communication, the First Foundation

In this section and the next two (2.3 and 2.4), the three foundations of the information economy, communication, technology, and information (defined broadly in Table 1-1 in Chapter 1), are conceptualized. Braithwaite (1955, p. 56) describes the process of defining something as the logical construction of the definiendum (the thing to be defined) in terms of the meaning of another expression (the definiens). In a highly technical field such as hypercommunications, there is an enormous burden to define terms. For each of the twelve essential terms from Table 1-1, the definiendum cannot be easily captured with a definiens of a few sentences. Terms such as communication, technology, and information require a longer definiens because they are conceptually deeper than technical jargon terms or acronyms such as those presented in the glossary.

As Cohen and Nagel pointed out, "the process of clarifying things really involves, or is a part of, the formation of hypotheses as to the nature of things" [Cohen and Nagel, 1934, p. 223]. Perspectives from economics, communications, and computer science are spliced with a taxonomic treatment of the hypercommunications jargon. Therefore, useful definitions are conceptualizations that recognize each perspective in the synergistic whole. However, such conceptualizations are longer than telegraph-style definitions.

This section argues that communication is a process that includes models that are hypotheses as to the nature of things. To this end, hypercommunication and communication are defined and conceptualized in five ways. First, a literal definition of hypercommunication is presented (2.2.1). Next, communication is conceptualized through two traditional communication models: the interpersonal model and the mass model (2.2.2). Third, a new hypercommunication model is compared with the use of the interpersonal and mass models in standard telecommunication (2.2.3). Fourth, a comparison of telecommunication and hypercommunication by elements is made (2.2.4). Fifth, and finally, hypercommunication can also be conceptualized through the taxonomy of hypercommunication services and technologies (2.2.5) that will be used in Chapter 4.

2.2.1 Literal Definition of Hypercommunication

What are hypercommunications, or what is hypercommunication? Writing in 1997, Alan Stone discusses one "true", or "pure ideal", sense of hypercommunications:

Virtually any person who considers the future agrees that the world is in the process of major social and economic changes and that telecommunications is a driving force of those changes. If that is the case, the study of telecommunications is not simply the examination of one more sector, like pulp and paper, clothing, or automobiles. Nor is public policy for telecommunications just one more branch of public policy studies, like civil rights, airlines, or education. If the experts' projection of the future of telecommunications is a correct one, the sector will be the leading one in shaping our social, economic, and political futures. No reasonable person would attempt to predict the future with precision, but we can certainly surmise certain probable trends--the nearly uniform considerations of the experts do portend a dominating future for communications--domination so extensive that we call the sector hypercommunications. [Stone, 1997, p. 1, italics his]

A breakdown of the term hypercommunication into prefix and root gives further clarification. The prefix hyper- is defined as meaning "over, above, more than normal, excessive" [Webster's New World Dictionary, college ed., 1960, p. 714]. The opposite of hyper- is hypo-, signifying "under, beneath, below, less than, subordinated to". Thus, the status quo of communications is hypocommunications, below or beneath the developing world of hypercommunications.

2.2.2 Two Traditional Communication Models: Interpersonal and Mass

Communication signifies the transmission of a message from sender to receiver through a medium, subject to noise. As Figure 2-1 shows, a simple spoken or written message is transmitted one-way through the medium of air to a receiver through noise, represented by a cloud with lightning. The basis for the interpersonal model is from the science of communication, a field that studies interpersonal communication by voice, sign language, writing, gestures, physiology, and body language.

Figure 2-1: Interpersonal communication model

Historically in interpersonal communication, a message was either spoken or written. It was then delivered in person to a single receiver or group of receivers by voice through the medium of air or by letter, subject to the noise of chattering, interruption, and lack of attention. Interpersonal messages can be one-way or two-way, typically depending on the custom and sociology called for by the setting. For example, oral argument before the Supreme Court of the United States is two-way if and only if it pleases the Court. A final property of interpersonal messages was that written messages could be preserved verbatim (unlike spoken messages).

As language developed from simple grunts and gestures into Chomsky's modern transformational grammar, messages became more complex. As society became more specialized, so did communication. It took on non-personal forms as well. Storytelling, speechmaking, song, dance, painting, sculpture, and other kinds of expression apparently preceded the written word. Patrons of the arts began to exchange goods in return for various messages of artistic expression or entertainment works. When written, messages became more complex, subject to interpretation, and were necessary for governments, religions, and town commerce. Written messages became valuable enough that markets developed for paid messengers, and later, postage. In 1776, Adam Smith wrote that the postal service was "perhaps the only mercantile project which has been successfully managed by, I believe, every sort of government" [Smith, 1937, p. 770].

The invention of the printing press allowed mass production of written communication so that bibles, other books, newspapers, and magazines became relatively inexpensively produced forms of mass communication. The printing press added a new level of complexity to communication. The interpersonal communication model of Figure 2-1 has become a mass communication model shown in Figure 2-2.

The elements of communication changed when mass written communication became technically feasible. The sender became a specialized communicator such as a publisher, writer, or correspondent who created prepared messages. The message could be duplicated and sent (through the medium of paper or newsprint) to a group of recipients rather than a single person or small group. The receiver in the interpersonal communication model became a group of readers or subscribers. Communication began to be purchased per copy or by subscription. Then, advertising (the paid transmission of a message from a sender to a target group) came to exist alongside editorial writing. Because of the high cost of entry into publishing, there were now inherently more receivers (readers) than there were senders (publishers). The roles of editor, correspondent, reporter, and scientific writer as gatekeepers were established at this time.

Figure 2-2: Mass communication model

2.2.3 The Hypercommunication Model Succeeds Telecommunication

As technology developed, telecommunication evolved. The prefix tele comes from the Greek where it meant "far off" or "distant". Telecommunication differs from both interpersonal communication and mass written communication by the medium and form of the message. However, telecommunication continued to follow either the mass communication model or the interpersonal model depending on the medium used.

The first form of telecommunication was the telegraph, followed by the telephone, radio, and finally, television. The telegraph, telephone, and two-way radio were each closely modeled after the interpersonal model. Television and broadcast radio followed the mass communication model insofar as they were unidirectional, mass-produced, and subject to gatekeepers. While telecommunications has come to have a wider definition than telegraph, telephone, and television, new technological realities have spawned hypercommunication. Unlike telecommunication, hypercommunication is a blend of both of the communication models shown in Figures 2-1 and 2-2.

Open interconnection and networking (which came originally from data communication) are the enabling technologies of hypercommunication. The new hypercommunication model shown in Figure 2-3 is a synergistic combination of Figures 2-1 and 2-2, resulting in an array of new communication elements. In Figure 2-3, the interpersonal model, the mass model, and telecommunication are combined with data communication.

Figure 2-3: The hypercommunication model is a synergistic mesh of networks combining new communication elements with the mass and interpersonal models

Additionally, positive network externalities are enhanced through open standards interconnection to yield a single mesh structure, a hypercommunication network. Telecommunications (already an ingredient that is itself the product of other, separate ingredients) is of course a major part of the hypercommunications pie. However, the finished product, hypercommunication, is unlike telecommunication because it is an open system featuring unique positive (and negative) synergies among the networked parts.

There are three important ways the generalized hypercommunications model of Figure 2-3 differs from its predecessor models. First, hypercommunication is based on a common, interconnected network that consists of the full set of old transmission networks along with some new high technology networks. Second, hypercommunication allows for both old and new kinds of messages to be sent and received over that common network. In general, the networks and message types mesh so that each message type can travel from sender to receiver through one or all of the networks. Third, hypercommunication is based on new technologies that have redefined senders, receivers, distance, and noise so that the social and economic relationships of communication are synergistically more powerful than before.

The first way hypercommunication differs from its predecessor models is that it is a unified mesh of networks rather than a set of separate unconnected networks. In Figure 2-3, the hypercommunication model translates the old concept of communications media into a new concept of an open, interconnected networked medium carrying many message types. Previously disparate networks are shown by the lines moving from the lower left to the upper left of the diagram. These include the PSTN (Public Switched Telephone Network), cable TV networks, the Internet and other data networks, together with a variety of wireless networks (broadcast TV and radio, cellular, etc.). Before the advent of the hypercommunication model, each network was a separate medium generally based on the interpersonal or mass model. Both technology and deregulation now allow these previously separate networks to interconnect so that interpersonal and mass communication blend with the Internet and other new technologies. The hypercommunication model enables single senders to transmit messages (of any type) to single or mass receivers through the integrated mesh structure. Similarly, hypercommunication allows mass senders to transmit content to an audience of mass receivers or to transmit customized interactive content to individual receivers.
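The value of joining previously separate networks into one mesh can be illustrated with a small sketch (the network and node names below are hypothetical, chosen only to echo the networks named above): counting how many sender-receiver pairs can reach one another before and after interconnection.

```python
# Hypothetical nodes grouped by formerly separate networks.
networks = {
    "PSTN": {"phone_a", "phone_b"},
    "cable_tv": {"tv_a", "tv_b"},
    "internet": {"pc_a", "pc_b"},
}

def reachable_pairs(components):
    """Count unordered node pairs that lie in the same connected component."""
    return sum(len(c) * (len(c) - 1) // 2 for c in components)

# Before interconnection, each network is an isolated island.
before = reachable_pairs(networks.values())

# After interconnection, every node joins a single mesh.
mesh = set().union(*networks.values())
after = reachable_pairs([mesh])

print(before, after)  # 3 reachable pairs before, 15 after
```

The count grows roughly with the square of the number of interconnected nodes, which is one simple way to see the positive network externalities the text describes.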

A second difference between the hypercommunication model and its predecessors is that it permits many message types to be created and carried through the mesh of networks. In Figure 2-3, message types are shown going from the upper left to lower right. Message types may be generally based on content (voice or data) or directionality (push, pull, or interactive). Voice messages include two-way telephone calls, conference calls, automatic calling, audio webcasting, voice mail, and voice e-mail. Data messages include numbers, text, binary computer code, video, and graphics. The voice-data distinction itself is a vestige of a fading distinction between analog and digital networks: voice communication traveled on analog networks, and data traveled on digital networks. In reality, almost all voice messages are now transmitted digitally as data.

Push messages include text and binary streams, graphics, and other content automatically sent to a single receiver or (more typically) to groups of receivers. Push messages may be voice, video, facsimile, text, graphics, or a mix. Push messages include subscribed webcasts, e-mail auto-responses, news crawls, and quotation services, but they can also be annoying automatic teledialing, junk faxes, and unwanted spam. Video streams include webcasting, TV and cable programming, live video auctions, video conferencing, and webcam transmissions. Pull messages are one-way and often invisible to communication users. Examples of pull messages include caller ID, call blocking, and Internet cookies. Interactive messages are two-way and "conversational" in nature. Examples include interactive websites, telephone conversations, and video conferencing.
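The two classification axes just described, content and directionality, can be sketched as a small taxonomy (the class and example names are illustrative only, not a standard scheme), tagging each message type from the text along both axes:

```python
from enum import Enum

class Content(Enum):
    VOICE = "voice"
    DATA = "data"  # numbers, text, binary code, video, graphics

class Direction(Enum):
    PUSH = "push"                # sent automatically to one or many receivers
    PULL = "pull"                # one-way, often invisible to the user
    INTERACTIVE = "interactive"  # two-way, "conversational"

# A few message types from the text, tagged along both axes.
examples = {
    "subscribed webcast": (Content.DATA, Direction.PUSH),
    "Internet cookie": (Content.DATA, Direction.PULL),
    "telephone conversation": (Content.VOICE, Direction.INTERACTIVE),
    "video conference": (Content.DATA, Direction.INTERACTIVE),
}

for name, (content, direction) in examples.items():
    print(f"{name}: {content.value}/{direction.value}")
```

Tagging along two independent axes, rather than one flat list, mirrors the text's point that any content type can travel in any direction over the mesh.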

A third way the hypercommunication model differs from its predecessors is that it involves new roles for both senders and receivers, along with a host of new technological and economic characteristics. Technological details are best left until Chapter 3, but consider a few key economic differences between hypercommunications markets and telecommunications markets. Begin by noting the corners of Figure 2-3 and the box at the bottom: mass senders, mass receivers, single senders, single receivers, and Intranets. Mass senders and mass receivers are terms used to symbolize the role of the mass communication model in hypercommunication. Single senders and receivers symbolize the role of the interpersonal communication model in hypercommunication. The concept of an Intranet symbolizes communication within an organization. Hypercommunications use the same network for mass and interpersonal communicators.

For single senders and receivers, the hypercommunication model of Figure 2-3 offers several differences from telecommunication. First, hypercommunication rests on an interconnected mesh network so that telephone customers, cable TV customers, and ISP customers have new communication choices that do not depend on the monopolistic structure of the telecommunications carrier market. Second, being "single" (synonymous with small) poses less of a barrier to access and entry under the hypercommunication model because of deregulation, interconnection, and the role of the Internet. The fixed and variable costs of interpersonal and mass communication have fallen dramatically in the integrated hypercommunications network. Distance and geographical boundaries are also less of a barrier. For the cost of a computer and peripherals, small firms and single individuals can create and transmit hypercommunications messages on a global scale.

For mass senders and mass receivers, the hypercommunication model differs from telecommunication in several ways. First, there are more types of messages. Second, there are more kinds of receivers (receiver as a device and as an audience member) than under traditional telecommunication because the network interconnects previously separate media. Third, distance is less of a barrier to communications owing to digitization. Fourth, the direction, timing, and latency of hypercommunications transmission are more variable than those of traditional telecommunications transmission. Fifth, because it is based on computer hardware, software, and the digitization of information, the hypercommunications model permits senders and receivers to store, copy, summarize, and re-use information in an unprecedented way.

2.2.4 Comparison of Telecommunication with Hypercommunication

A definition of communication by elements underscores both the immense potential of hypercommunication and the inherent differences between telecommunication and hypercommunication. Legally, there are over 100 definitions of communication in current use according to Ploman (1982), so while any definition will be imperfect, more detail is necessary. A reasonably detailed definition of modern communication covers the nine elements shown in Table 2-1:

Communication is an (A) information processing activity or exchange process involving: (B) signal transmission of (C) message types (text, voice, video, data, content), (D) from a sender or senders through (E) space and (F) time (real time, live streaming, and delayed streaming), using a (G) transmission network to (H) a receiver or receivers or an audience (one person, several people, millions of people), (I) subject to noise and incompatible standards.

The exchange of information among people, organizations, or devices is an information processing activity or exchange (A) requiring that information (to be defined in 2.4) be input, patterned, processed, and coded into message form. Technology (to be defined in 2.3) permits element (B) transmission (or delivery) of hypercommunication signals using the PSTN, the Internet and other computer networks, wireless networks, or broadcasting networks.

Table 2-1: Elements of hypercommunication compared to telecommunication
Element A: Information processing or exchange activity
    Telecommunication: Information may be filtered by a gatekeeper, processed mentally
    Hypercommunication: Gatekeepers less important, mental processing more important; computers and devices process and filter information

Element B: Signal transmission
    Telecommunication: Traditional analog content via digital or analog signal
    Hypercommunication: Digitized content transmitted via digital signal

Element C: Message types (text, voice, video, data)
    Telecommunication: Separate interpersonal and mass media; ability to record and re-send messages
    Hypercommunication: Choice of any message type; more latitude for recording and re-sending; less privacy and security

Element D: Sender or senders
    Telecommunication: Emphasis on a single sender
    Hypercommunication: Emphasis on group or network senders; technology can change sender's anonymity

Element E: Space
    Telecommunication: Distant communication possible, but cost depends on distance
    Hypercommunication: Distant communication technically simpler; cost less dependent on distance; infrastructure required

Element F: Time (real time, live, delayed streaming)
    Telecommunication: Little choice as to time or timing; some ability to record
    Hypercommunication: More choice as to time (of sending and receipt) and timing (latency, delay)

Element G: Transmission network
    Telecommunication: Separate networks for separate services
    Hypercommunication: Separate technologies used to create a mesh of networks with interconnected services

Element H: Audience (one person, several people, millions of people)
    Telecommunication: Typically either a mass audience or a single receiver
    Hypercommunication: Sender can choose audience size, from a single recipient to a highly targeted small audience

Element I: Noise and incompatible standards
    Telecommunication: Interference, static, cross talk, jamming
    Hypercommunication: Incompatible software and hardware standards and protocols; delay and congestion

The variety of message types (C) has already been introduced. With both telecommunication and hypercommunication, a given message may be stored, copied, and re-used, while in other cases copyright laws, technology itself, or other barriers prevent storage, copying, or re-use. Hypercommunication makes copying, changing, and retransmission of messages easier and cheaper than was possible with traditional telecommunication. Technical material in Chapters 3 and 4 will highlight the greater range of message types hypercommunication offers over telecommunication.

The sender (D) may be a person, business, organization, or government, if the sender's identity is known at all by the audience. Technology and deregulation enable a larger number of senders to inexpensively hypercommunicate than to telecommunicate. Open protocols and inter-networking allow a larger number of senders to join a larger common network than is possible with telecommunication.

Pricing of telecommunication traditionally depended on whether the mass model or interpersonal model was followed. Interpersonal telecommunication (such as telephone) was often priced based on distance through space (E). Mass telecommunication was supported by advertising and subscription. Improved technologies allow messages to travel farther through space over a digital networked infrastructure at a relatively lower cost.

Unlike telecommunication, hypercommunication offers choices concerning communications time and timing (F). Delivery can be instant (real-time), delayed, or archived. A message can be re-sent until the recipient is available or the sender can be notified automatically when the message is received.

The differences between telecommunication networks and the hypercommunication network (G) have been introduced already. Unlike the radio and television networks of telecommunications, no single organization could "own" the entire hypercommunication network, because its architecture is open and the emphasis is on interconnection. Chapters 3 and 4 will discuss many technical reasons that the hypercommunication network differs from telecommunication networks.

The audience (H) may consist of one person or many, with message delivery being push or pull, instant or delayed, simultaneous or non-simultaneous. The audience may pay for the message through direct subscription, indirectly through exposure to advertising, by carrier access costs, or not at all.

Noise (I) has always meant electrical interference, static, crosstalk, or other barriers to clearly hearing, seeing, or understanding a message. Now, it may also include a host of hardware and software factors such as network failures, congestion, and operator error at both ends that prevent delivery of messages on time, or at all. Finally, incompatibility of standards is the ultimate form of noise because communication is prevented from occurring at all, or is considerably delayed--so that the marginal cost outweighs the marginal benefit.

2.2.5 Specific Hypercommunication Services and Technologies

An operational definition of the hypercommunications sector is given in Chapter 4 where specific categories of services and technologies are presented. These (now partially separate) telecommunication and data communication sub-markets are converging into a hypercommunications sector. In the four hypercommunications sub-markets, services must be separated from the wireline and wireless technologies that provide them, sometimes a difficult task. Hypercommunication services are provided to customers by carriers or content providers using technologies that are often invisible to customers.

The first sub-market includes traditional telephony services, local and long-distance calling, and basic signaling (dial tone and ringing), coupled with traditional telecommunications technologies (switching, circuits, and local loops). In the past, these services had been closely regulated monopolies, but the 1996 TCA (Telecommunications Act) and other legislation have encouraged deregulation and competition. Real-time voice conversations occur between sender and receiver in traditional telephony or POTS (Plain Old Telephone Service), as traditional service is known in technical jargon. A technical overview of the POTS PSTN will be found in Chapter 3.

The second sub-market includes newer enhanced landline and wireless telecommunications services. Services in this category range from caller ID, call waiting, and PCS to elaborate CTI (Computer Telephony Integration) systems offering agribusinesses hundreds of options for connecting telephones and computers. These services are supported by enabling software and hardware transport technologies such as AIN, DS-100 switching, SS7 signaling, and electromagnetic carrier waves. PBXs (Private Branch Exchanges) and other computer and telephone hardware and software must be purchased by the agribusiness to take advantage of many enhanced services. Enhanced services were originally developed by local telephone monopolies or ILECs (Incumbent Local Exchange Carriers) but now are also available from competing ALECs (Alternative Local Exchange Carriers) authorized under deregulation. Enhanced services are based on existing traditional services and are interconnected with traditional service networks (especially the PSTN). Enhanced services have been less regulated than their traditional counterparts, but the TCA affects these services almost as profoundly. Enhanced services and technologies send voice, signaling data, limited text, and paging messages.

The third sub-market includes private data communication and networking services such as intranets, frame relay, ATM (Asynchronous Transfer Mode), and SMDS (Switched Multimegabit Data Service). Computer and networking technologies such as routers, cabling, and other CPE (Customer Premises Equipment) must be purchased by agribusinesses using private data services. Bandwidth and equipment are available from ILECs, but because this category is largely unregulated, services are also available from ALECs and ISPs. Furthermore, firms in the conduit and hardware businesses often strategically partner with hypercommunication carriers to enable one-stop shopping. This is the only hypercommunication sub-market almost entirely made up of business customers. Currently, most messages are digital data communications, including the exchange of text and binary files. However, technical convergence is so rapid that private networks are becoming flexible enough to include all message types. Chapter 3 contains technical and economic foundations of computer networking.

The fourth hypercommunication sub-market is the broad Internet sector. This area includes Internet access and bandwidth, e-commerce, and Internet QOS (Quality of Service). The Internet has increased awareness of transmission variables (such as speed, capacity, and delay) in every hypercommunications category. The Internet differs from private networking because of circuit ownership, protocols, and ubiquity. Internet access is a more open architecture than private networking services because a portion of the loop is through an ISP connection to a public backbone rather than exclusively dedicated for private use. Internet technologies substantially overlap other hypercommunications categories to include IP telephony, live two-way Internet video, and webcasting. Message types carried by the Internet include familiar forms such as e-mail, web page content, and file transfers. Internet messages also include less familiar forms like voicemail, Internet telephony, live video feeds, interactive chatting, and cyber shopping. Many connections to the Internet are still made through narrowband modems over the PSTN to ISPs. However, data transmission technologies used in private networking (frame relay, ATM, DSL, and broadband) provide greater speed and convenience for Internet access, increasing availability and lowering cost. The Internet has successfully resisted most regulation.

All four sub-markets rely on a variety of wireline and wireless transmission technologies to operate. Additionally, protocols and standards are needed to help hypercommunications firms and their customers avoid the deadweight loss of searching for technical specifications to enable widespread interconnection. Protocols and standards (such as TCP/IP and SS7 signaling) evolve from scientific agreement, governmental edict, or from industry bodies and competition. Protocols and standards allow the broadest possible market to be formed as well as allowing pricing and definition of market services.

Three points conclude this section on economic conceptualizations of hypercommunications. First, hypercommunications categories are asymmetrically regulated. There are various federal, state, and local government agencies (such as the FCC and FPSC, Florida Public Service Commission) with regulatory control. These regulators affect what services will be available, where, at what price, and with what kinds of taxation or subsidy. However, regulation does not apply uniformly across every hypercommunications category, thereby distorting individual categories and the entire hypercommunications sector. The telephone and cable TV markets (even with recent deregulation) are far more regulated than the Internet access market, for example. Government is also involved with anti-trust enforcement and legislation covering market structure, conduct, and performance in hypercommunications as in other industries. The recent joint action of the federal Department of Justice and numerous state Attorneys General against Microsoft is a current example, while the breakup of AT&T's Bell System by the federal courts is an historical example. Chapter 5 is dedicated to policy and regulation.

Second, hypercommunication can often be quantified for economic and technical analysis. Hypercommunication can be characterized by traffic shape, capacity, delay, speed, and other quantitative attributes. Chapter 3 covers the technical and economic foundations of networks, the source of these measures. Chapter 4 covers bandwidth, data rate, throughput, and a number of other quantitative measures known as QOS (Quality of Service) variables that are unique to hypercommunication.

The third environmental factor, the qualitative side of hypercommunication, is important economically and can be easily forgotten if hypercommunication is viewed from a pure engineering perspective. A particular message may be considered invasive or undesirable even when flawlessly sent and received in physical terms. A network engineer's technical objectives are likely to frame the concepts of a message, medium, noise, attention, and a receiver differently than a business, economics, or communications orientation would. Additionally, hypercommunication has important social implications with economic repercussions. Issues of security, fraud, privacy, and freedom of paid and non-paid speech will have to be addressed for markets to function.

Now that hypercommunication has been conceptualized as the first foundation of the information economy, it is time to conceptualize the other two: technology (2.3) and information (2.4).

2.3 Technology, the Second Foundation

Technology, the second of three foundations of the information economy, is an important catalyst for the entire information economy, not just for the hypercommunication sector. Using economic and technical literature, this section defines and examines technology generally to better understand the economic roles it plays in the information economy. Coverage of specific network technologies (and related economic precepts) will be found in Chapter 3. Specific hypercommunication technologies are discussed in Chapter 4.

Technology, (from the Greek, technologia), means: "1) the science or study of the practical or industrial arts, 2) the terms used in science, art, etc. 3) applied science" [Webster's New World Dictionary, college ed., 1960, p. 1496]. To a firm, technology "is a basic determinant of a company's competitive position" [Ashton and Klavans, 1997, p. 8]. Technology includes current product features and performance, "the capacity, yield, quality, and efficiency of production processes". Technology also "helps determine the unit costs of making and delivering products and the nature of capital investments", while serving as "the source of new products and processes for future growth". Finally, technology is "a valuable intelligence focus" that "can be a direct source of business revenue" [Ashton and Klavans, 1997, p. 8]. Used strategically by business, technology has "the potential to create or destroy entire markets or industries in a short time" [Ashton and Klavans, 1997, pp. 8-9]. More broadly, others argue that technology comprises all problem-solving activities including how people and organizations learn, and the stock and flow of knowledge [Cimoli and Dosi, 1994].

The organization of section 2.3 follows the roles played in the information economy by technology as identified in Table 2-2. First, it is important to understand how sources of technological change are best identified and modeled in economics. There are four chief schools of thought or research agendas: induced innovation, evolutionary theory, path dependence, and endogenous growth [Ruttan, 1996; Homer-Dixon, 1995]. Ruttan argues that while each "agenda has contributed substantial insight into the generation and choice of new technology", the lack of co-operation and fresh results has caused the foursome to reach a "dead-end" [Ruttan, 1996, p. 2].

2.3.1 Research Agendas in the Economics of Technological Change

Differing views about the sources and economic implications of technological change are responsible for considerable differences in economic thought among the four agendas. Differences range from minor adjustments in conventional microtheory in the induced innovation school to the "paradigm shift" of the evolutionary agenda, to the completely "new" economics demanded by path dependence theorists. The endogenous growth agenda has brought macroeconomic thought into the microeconomics of technology. Auerswald et al. argued in 1998, "macroeconomics is ahead of its microeconomic foundations" because of the endogenous growth agenda's macroeconomic models of technological change in production. The term mesoeconomics refers to the application of macroeconomic endogenous growth factors of production such as human capital, technical knowledge, and other non-conventional inputs and outputs to microeconomics.

The first agenda, the induced innovation literature, argues that technical change is a "process driven by change in the economic environment in which the firm finds itself" [Ruttan, 1996, p. 1]. According to Christian, "Models of induced innovation describe the relationship between" production (summarized by factor-market conditions and "the evolution of the production processes actually used") and "the demand for the finished product" [Christian, 1993, p. 1]. The relative scarcity of resources and changes in relative prices induce or guide technological change under this view.

Work in induced innovation has concentrated in several areas. One analytical strain consists of macroeconomically oriented growth theoretic models [Kennedy, 1964, 1966; Samuelson, 1965]. These models were developed to examine stable shares of aggregate factors, in spite of intensive substitution of capital for labor in the U.S. economy. The demand-pull strain holds that changes in market demand lead to increases in the supply of knowledge and technology. Micro and macro studies of demand-pull focused on how the location and timing of invention and innovation were stimulated by demand [Griliches, 1957, 1958; Schmookler 1962, 1966; Lucas, 1967]. A supply-push orientation holds that changes in the supply of knowledge lead to shifts in the demand for technology.

Rounding out the induced innovation work is a fourth strain, factor-induced technical change, based on Hicks' idea that "a change in the relative prices of factors of production is itself a spur to innovation and inventions" [Hicks, 1932, p. 124]. Along with Ahmad's [1966] paper, the Hicksian idea that relative prices matter launched iterations of microeconomic models. Economic historians (Habakkuk, 1962; Uselding, 1972; David, 1975; Olmstead, 1993) and agricultural economists (Hayami and Ruttan, 1970, 1985; Thirtle and Ruttan, 1987; Olmstead and Rhode, 1998) have worked in this fourth area. Microeconomic models were used to show empirically that exogenous changes in relative prices induced innovation, moving firms away from costly inputs toward relatively cheaper ones.
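Hicks' mechanism can be sketched with a textbook two-input cost-minimization condition (an illustration of the idea only, not a model drawn from the papers cited above):

```latex
% A firm minimizes the cost of producing output \bar{y} with
% inputs x_1, x_2 at factor prices w_1, w_2:
\min_{x_1, x_2} \; w_1 x_1 + w_2 x_2
\quad \text{subject to} \quad f(x_1, x_2) = \bar{y}

% The first-order condition equates the marginal rate of technical
% substitution to the factor price ratio:
\frac{f_1(x_1, x_2)}{f_2(x_1, x_2)} = \frac{w_1}{w_2}
```

A rise in w1/w2 moves the cost-minimizing mix toward x2 along the existing isoquant; Hicks' further claim is that it also biases invention toward new techniques that economize on the now dearer input x1.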

The second agenda, the evolutionary perspective, stems from the recognition by Alchian (1950) and other economists that the behavior and objectives of firms are more uncertain and complicated than received theory allows due to the inevitability of mistakes. The genesis of this approach is credited to Schumpeter's later (1943) recognition that synergistic feedback between R&D and innovation could allow certain firms to influence demand [Freeman, Clark, and Soete, 1982]. At the firm level, "the crucial element is full recognition of the trial-and-error character of the innovation process" [Nelson, Winter, Schuette, 1976, p. 91].

Evolutionary models use a "black box" behavioral theory of the firm and its larger operating environment. The "black box" represents the firm's decision objectives and rules in a search for technological modifications that begins in the neighborhood of existing technologies. Alterations in conventional microtheory (bounded rationality among heterogeneous agents), collective interaction, and continuously appearing novelty create an economic world of emergent, unstable dynamic phenomena [Dosi, 1997]. The firm itself relies on historical "routines" and "decision rules" rather than "orthodox" profit maximization or other global objective functions [Nelson and Winter, 1982, p. 14]. Cimoli and Della Giusta argue that "the standard statistical exercise of fitting some production function" could still be done under the evolutionary approach. However, "the exercise would obscure rather than illuminate the underlying links between technical change and output growth" [Cimoli and Della Giusta, 1998, p. 16].

The third agenda, the path dependence model, arises from the idea that technological change depends on network effects and sequential paths of development [David, 1975; Arthur et al., 1983]. Under this view, many candidate technologies interact with random events in the early history of a new technology. The result is a winning technology (not necessarily the optimal one) that locks in an economic path, down which future technological iterations march. Under conventional thought, well-behaved technologies have diminishing marginal returns in individual factors, constant returns to scale for all factors together, and stable equilibria.

Unlike conventional convex technologies, the networked products, organizations, and markets of path dependent theory can produce multiple equilibria and globally increasing returns to scale, and can disobey the "law" of decreasing marginal returns. Increasing returns "act to magnify chance events as adoptions take place, so that ex-ante knowledge of adopters' preferences and the technologies' possibilities may not suffice to predict the 'market outcome' " [Arthur, 1989, p. 116]. Path dependencies can "drive the adoption process", causing domination by "a technology that has inferior long-run potential" [Arthur, 1989, p. 117]. Contributions from the path dependent school to network economics are covered in Chapter 3.
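The contrast can be stated in standard notation (a generic illustration of returns to scale, not Arthur's adoption model itself):

```latex
% Conventional well-behaved technology f: diminishing marginal
% returns in each factor and constant returns to scale,
\frac{\partial^2 f}{\partial x_i^2} < 0, \qquad
f(t\mathbf{x}) = t\, f(\mathbf{x}) \quad \text{for } t > 1.

% Path dependent, networked technology g: globally increasing
% returns to scale,
g(t\mathbf{x}) > t\, g(\mathbf{x}) \quad \text{for } t > 1,
% so returns rise with scale of adoption and early leads compound.
```

Under the second condition, an early random advantage for one technology raises its payoff to the next adopter, which is how lock-in to a possibly inferior path can occur.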

The fourth agenda originates in the endogenous growth macroeconomics literature as advanced by Romer (1986, 1990). Ideas about the non-convexity of technology from endogenous growth have seeped into micro thought through ideas such as "knowledge is assumed to be an input in production that has increasing marginal productivity" [Romer, 1986, p. 1002]. Nobel Laureate Robert Solow demonstrated in the 1950s that knowledge was a motor for economic growth [Solow, 1956, 1957]. Based on Solow's results, a "narrow" version of endogenous growth holds that the main source of productivity increase (output per worker or dollar of capital) depends on the progress of science and private sector R&D expenditures. A "broad" version posits an "indirect relationship between technological improvement and economic activity" based on learning by doing or reorganization of the production process [DeLong, 1997, p. 12].
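Solow's result can be illustrated with the familiar aggregate production function (a textbook sketch of growth accounting, not the full Romer model):

```latex
% Solow: output Y from capital K and labor L, scaled by a
% technology level A that arrives exogenously,
Y = A\, K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1.

% Growth accounting attributes to technology the residual left
% after input growth is netted out:
\frac{\dot{A}}{A} = \frac{\dot{Y}}{Y}
  - \alpha \frac{\dot{K}}{K} - (1-\alpha) \frac{\dot{L}}{L}.
```

Romer's departure is to treat knowledge as a produced input with increasing marginal productivity, so that A responds endogenously to R&D effort rather than arriving from outside the model.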

Table 2-2 shows the roles that technology plays in the information economy. The coverage given each role differs, emphasizing those where hypercommunication and agribusiness are most influenced. Each of the four research agendas is used in the discussion. However, conventional economic models are used to establish each topic.

The discussion is separated into roles because technology is often used with only one of these meanings in mind. When economists, engineers, or agribusinesses discuss the role of technology in a firm or industry, they are often discussing different concepts. Robert Solow admitted the truth of this among economists in 1967, "the economic theory of production usually takes for granted the 'engineering' relationships between the inputs and outputs and goes from there" [Solow, 1967, p. 26]. However, "new" economists argue that analyses of technology require that technical and economic relationships in the industry and within the firm be considered in addition to interactions among them.

Table 2-2: Roles played by technology in the information economy
Sec. | Role | Topics
2.3.2 | Production: technical aspects | MPP and MRTS, technical factor interdependence, technical factor substitutability, returns to scale and scale neutrality, returns to scope
2.3.3 | Production: economic aspects | Returns to size vs. returns to scale and size neutrality, non-homothetic technologies and modifications in production economics, economic interdependence of factors, factor bias and augmentation
2.3.4 | Managerial | Internal, allocative, and dynamic efficiency; operational, tactical, and strategic flexibility; other dimensions (scope and system); measurement of technical change
2.3.5 | Supply | Decreasing cost industries, technology treadmill, technology as public good (technology spillovers)
2.3.6 | Demand | Demand shifts, diffusion of innovation; pro-competitive, anti-competitive, and neutral effects
2.3.7 | Technology-Information linkage | Composite functional relationship between technology and information

2.3.2 Technology and Production: Five Technical Aspects of Invention

The first major role played by technology in the information economy is the technical efficiency of a new production technology. In conventional economics, a firm's production function encloses what Varian calls the "production set", or "set of all possible combinations of inputs and outputs that are technologically feasible" [Varian, 1987, pp. 310-311]. Beattie and Taylor define a production function as:

a quantitative or mathematical description of various technical possibilities faced by a firm. The production function gives the maximum output(s) in physical terms for each level of the inputs in physical terms. [Beattie and Taylor, 1985, p. 3]

However, a discussion of the technical and engineering roles of technology in the information economy (even in the realm of agribusiness) requires a broader perspective than the production function of conventional economics for three reasons. First, new technologies (IT, networks, and biotechnologies) often do not behave along traditional economic lines. Second, the conventional view of the agribusiness firm differs from the emerging reality of the transgenic firm (Baarda, 1999) because implicit and explicit vertical and horizontal integration takes unconventional forms. Third, the economic and technical literature offers many helpful new approaches.

One of the benefits of conventional economics is that, when practiced well, it can simplify complex systems into elegant models. A variety of details (covered in this chapter and the next) such as time compression, information overload, communication, information, and network externalities have received specialized treatment in the economics literature. However, such important tangencies to mainline economic thought lack wide audiences. The results are hard for agribusinesses to use unless presented in a recognizable form such as the production function. Importantly, while some arguments concerning the usefulness or existence of the production function will be presented, the production function (when considered more broadly than it often has been) retains enormous value in analyzing technology's economic and engineering roles. Perhaps the value stems from the fact that the concept behind a production function is understood by economists, farmers, and engineers. Possibly, the production function's continuing utility comes from being a common baseline against which the roles of new technologies can be compared.

2.3.2.1 Technical and economic distinction

The distinction between technology's direct role in altering the technical side of production (covered in this section) and technology's indirect role in altering the economics of production (to be covered next in section 2.3.3) could not be more important. That distinction is often at the heart of the disparate philosophies of the engineer and the economist regarding the impact of technology. Understanding this economic-technical distinction makes economics more consistent with the information economy in three ways.

First, on a micro level, it shows that Schumpeter's early distinction between innovation and invention need not be a demarcation criterion for economics [Schumpeter, 1934, Vol. 1, p. 84]. The early Schumpeterian view was that invention (unless it directly produced innovation) was an "economically irrelevant" experimentation in technical feasibility. Innovation (which did not require invention) included economic feasibility and economic efficiency in addition to mere technical efficiency. Path dependent and evolutionary economists argue that such an arbitrary distinction has led conventional economics to faulty analyses of weightless technologies, production teams, and inter-firm alliances of the information economy.

Ironically, Schumpeter's early (1934) thought is sometimes used by conventional economists as a demarcation criterion, while his later (1943) chapter on how capitalism's "process of creative destruction" was "evolutionary" germinated the sprouts of the evolutionary school. It has been pointed out that the irony arises from the fact that Schumpeter himself edged away from Marshall's view of technical change as continuous, later believing that technological change often occurred in discrete, revolutionary bursts [Moss, 1982, p. 3].

The diversity of innovation (according to Schumpeter) heightens the irony that some economists would prevent the consideration of discrete or bursty "inventive" technological changes in favor of mathematically well-behaved, continuous "innovation". Rensman notes (1996) that Schumpeter enumerated five kinds of technological innovations:

1) a new good or new quality of good, 2) a new method of production, 3) opening of a new market, 4) discovery of new resources or intermediates, and 5) a new organizational form. [Rensman, 1996, p. 1]

Some of these can hardly be considered continuous phenomena.

To bring invention into the domain of economics along with innovation, "new" economists argue that there are six kinds of technological inventions: the five innovations mentioned above, plus basic or theoretical research. The first five are goal-oriented activities, with clear economic incentives. Both self-employed individual inventors (the rule in Marshall's day) and organizationally employed R&D teams (part of the modern R&D function) engage in such goal-oriented invention. However, the sixth, pure theoretical research is often undertaken with an epistemic value or evolutionary organizational value in mind rather than (or in addition to) conventional objectives. The tendency to exclude "inventive" production from economics ignores the employment of inputs in the production of research and invention and ignores how Schumpeter defined technological innovation.

On this basis, Perrin (1990) suggests that there is a difference between "pre-adoption" and "post-adoption" research into the economics of technology. Homer-Dixon adds that the economic-technical distinction includes ingenuity (the generation of practical ideas) and the dissemination of productive ideas [Homer-Dixon, 1995, p. 587]. Thus, part of the rationale behind an economic-technical distinction in production aligns with Schumpeter's early invention-innovation idea and part relies on the technical and inventive stages within an overall process from idea generation to dissemination.

A second way understanding the economic-technical distinction improves the consistency of economics with the information economy is by highlighting the apparent fragility of two notions: first, that economic efficiency requires mathematical determinism and integrability; second, that economic efficiency always guarantees technical efficiency. A deterministic worldview, if enforced through well-behaved primal or dual technologies, can rule out entire classes of economically relevant technologies. When economic efficiency is defined solely in neat, tautological fashion by the mathematics of static, structurally fixed markets (where identical firms maximize profits or minimize costs given well-behaved single commodity output production functions and perfect certainty), it implies technical efficiency. Indeed, the generality and elegance of the duality approach simplify empirical work while providing what Silberberg calls theoretical "soundness" [Silberberg, 1990, p. 285]. However, as Pope and others have recognized, "not all problems seem to be capable of being studied using duality" [Pope, 1982, p. 350].

Multi-product production, non-conventional inputs, network effects, and ever-shorter decision periods (all a result of new technologies) can make economics appear inconsistent with the times. The evolutionary and path dependent agendas question whether standard production and cost functions are useful in analyzing many kinds of technological change to begin with. They contend that economic concepts of the firm and production need to be broad enough mathematically and technically to acknowledge the changes that IT, hypercommunications, and biotechnology bring. This is underscored by Williamson: "The firm as production function needs to make way for the firm as governance structure if the ramifications of internal organization are to be accurately assessed" [Williamson, 1981, p. 1539; quoted by Cotterill, 1987, p. 107].

However, because of advances in computer technology, conventional mathematical constructs and their successors may become even more useful in understanding core technical-economic distinctions. Even if duality and conventional mathematical economics are passé, as some "new" economists argue [Kelly, 1994], a theoretical cocoon with equal or greater elegance would seem necessary to replace them. Until this occurs, modifications in the general idea of a production function (and optimization of profit, cost, or something else) may give economics a view of the economic-technical distinction that better reflects non-conventional technologies and network effects. Indeed, along with new views of the firms and supply chains that constitute markets, mathematical extensions of conventional economic models may be more important than ever. In 1963, Morgenstern saw the importance to economic problem solving that technologies of the "new" economy (computers and combinatorial economic software) would have. He noted that technology itself would "continuously generate new problems of a mathematical nature" to bring the unlimited penetration of mathematics into economics. This led him to note the impossibility of "any 'limits' to the use of mathematics" (in economics) [Morgenstern, 1963, p. 29].

A third way that understanding the economic-technical distinction makes economics more consistent with the information economy is by revealing a broader view of efficiency than that of the economist or engineer alone. According to Kevin Kelly, Peter Drucker has argued that the productivity problem for each worker in the industrial economy was how to do his job right (most efficiently). Kelly argues that in the new economy, "productivity is the wrong thing to care about" because the "task for each worker is not 'how to do his job right', but 'what is the right job to do?' " [Kelly, 1997, p. 14]. Evolutionary and path dependent theorists would argue that standard economic theory (from which production functions are taken) is so simplistic that "it does violence to reality" [Arthur, 1990, p. 92]. Under Kelly's twelfth rule for the new economy, the law of inefficiencies, he writes: "Wasting time and being inefficient are the way to discovery" [Kelly, 1997, p. 14].

Yet new ways of looking at technology within economics are (as Arthur notes when discussing increasing returns) "not intended to destroy the standard theory", but to "complement it" [Arthur, 1996, p. 3]. The decision rules and routines of the evolutionary theorists "are close conceptual relatives of production 'techniques'" [Nelson and Winter, 1982, p. 14]. In an e-business setting such as amazon.com, the routines that Nelson and Winter say "replace the production function" presumably include such technological efficiencies as high-tech commerce servers and software [Nelson and Winter, 1982, p. 14]. Amazon.com may be a highly efficient firm in a technical sense, but as of January 2000, it has yet to make a profit. The question of whether a firm or industry can be technically but not economically efficient requires a modified view of production that considers the technical-economic distinction with a broad view of efficiency.

2.3.2.2 Conventional production: simple technology, MPP, MRTS

The simplicity of the conventional production function enables understandable first-order comparisons of technical and economic efficiency to be made. In the conventional perspective, improved technology helps a producer to produce more output with the same amount of inputs to become more technically efficient. It is another matter to say whether firms that adopt progressive technologies are also economically efficient. Each kind of efficiency can be thought of as a matter of degree in applied work where "revealed efficiency" is used to compare "overall" efficiencies among firms [Paris, 1991, pp. 287-306].

The production function "identifies the maximum quantity of a commodity that can be produced per time period by each specific combination of inputs" [Browning and Browning, 1989, p. 168, italics mine]. The concept of decision period or length of run is central to the analysis. Using Hicks' two inputs, capital (K) and labor (L), Persky names four decision periods (VSR, SR, LR, VLR) that are important to engineers, economists, and agribusinesses alike:

In the very short run (or market period), the quantities of both inputs are fixed. In other words, K and L are both parameters.

In the short run, the quantity of one input is fixed, and the quantity of the other input can be varied. In other words, either K is a parameter or L is a parameter. We usually take K as the parameter.

In the long run, the quantities of both inputs can be varied.

In the very long run, the quantities of both inputs can be varied, and the production function can change. This case represents technological improvement. [Persky, 1983, p. 146, emphasis in original]

The evolutionary and path dependent schools would debate Persky's assessment that the VLR is the only place technological change can occur.

Additionally, in a weightless information and knowledge economy, no VSR exists in some cases because the production plan varies by the minute. Later in the chapter (2.5), the concept of time compression due to technological progress in hypercommunication and information processing will be discussed. As will be emphasized then, IT is responsible for greater flexibility in varying inputs as well as for increasingly shorter decision periods. Therefore, the VSR and VLR are less separated in time than they once were. Not incidentally, inexpensive high-speed hypercommunications allow the instantaneous transmission of large amounts of information to anywhere reached by a high-speed infrastructure, permitting (but not guaranteeing) faster planning and decision making.

It might appear that production agriculture would not benefit from technology-driven time compression the way other industries do. Crop seasons and animal cycles create relatively inflexible decision periods. However, intellectual property, communication, and information are weightless inputs that make up an increasing proportion of the total factor mix in agriculture. Indeed, biotechnologies and high-tech crop monitoring allow even production agriculture to achieve an unprecedented degree of control and flexibility over factors. Furthermore, technologies are appearing that can time compress agricultural production processes as well. One example is the case of biotech "super pigs" that grow forty percent faster to larger weights than before, using less feed and other inputs while decreasing piglet death rates [Hardin, 1999; Associated Press, 12/7/99].

Consider how the conventional production function could vary due to a new technology. Graphically, the technical effect on production of a technological change can easily be shown through the production function in the single output, single input case. Figure 2-4 shows six cases of how a new technology can alter the technical side of production. Depicted is a generalized monoperiodic production function. Assume that the firm has perfect certainty as it compares the old and new production processes before the next production period.

Figure 2-4: Six cases of technology's effect on production, one variable input example.

The baseline for each case is a thin line (representing an identical base production function) ending with an arrow. In each case, a new technology's influence on production is shown by a bold line. Without more information than the production function alone, the probable economic behavior of a firm cannot be established. Assume that the only change between cases is technology as embodied in the production function.

Case one shows a new production function that is a parallel shift of production under the old technology. The MPPs (Marginal Physical Products) are identical; the only difference is a constant additional quantity of output at all input levels resulting from the new technology. By converting to the new technology, more units of output will be produced for the same amount of input, regardless of whether a small or large amount of input is used. The fixed additional quantity of output produced by the new technology is scale neutral (it does not depend on how much input is used).

Case two depicts a new technology that produces less output for a given amount of input up until a point. After that point, more can be produced by the new technology than under the old one. Case two favors larger scales of operation. In case three the new technology is always and everywhere able to produce more output (given the same amount of input) than under the old technology. Case three is scale neutral, favoring adoption of the new technology at any scale of operation.

In case four the new technology always and everywhere produces less output given the same amount of input than under the old technology. Economically, it is hard to imagine a firm that would replace an old technology with a new technology that always produced less. The fifth case depicts a new technology that is better than the old technology only at a relatively large operational scale. Note that the new technology underperforms the old one until the MPP of the old is approximately zero, where the output of the new technology races above the old one. Case six shows that adoption of the new technology holds only for lower levels of inputs. Once the MPP of the old technology approaches zero, the two production functions are virtually identical. In this case, the new technology would tend to favor a smaller scale than the old one.
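Two of these cases can be sketched numerically with hypothetical functional forms (the forms are illustrative only, not taken from the text): a parallel shift as in case one, and an everywhere-superior technology, here modeled as a proportional improvement, as in case three:

```python
# Hypothetical single-input production functions illustrating two of the
# six cases in Figure 2-4 (functional forms are invented for illustration).

def f_old(x):
    """Base technology: a diminishing-MPP production function."""
    return 10 * x ** 0.5

def f_case1(x):
    """Case one: parallel shift -- a constant extra quantity of output
    at every input level, so MPPs are identical to the old technology."""
    return f_old(x) + 5

def f_case3(x):
    """Case three: everywhere-superior technology, modeled here as
    proportionally more output at every input level (scale neutral)."""
    return 1.2 * f_old(x)

# The parallel shift adds the same 5 units whether input is small or
# large, while case three's absolute advantage grows with the input level.
for x in (1, 4, 16):
    gain1 = f_case1(x) - f_old(x)   # constant gain
    gain3 = f_case3(x) - f_old(x)   # gain rises with x
    print(x, round(gain1, 2), round(gain3, 2))
```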

A slightly more general mathematical treatment brings in two inputs, so that production with the new technology may be better compared to the base-case production function. Now, four core concepts of technical production are considered: technical substitutability among factors (2.3.2.3), technical factor interdependence and separability (2.3.2.4), average and marginal returns to scale (2.3.2.5), and returns to scope (2.3.2.6).

2.3.2.3 Technical factor substitutability

The effect of a technical change on factor substitutability is an important technical aspect of production. However, it can be measured in many ways. Beattie and Taylor admit that factor substitutability can be a misnomer. "Economists' use of the term, factor substitutability, to refer to isoquant patterns is a bit unfortunate--it only means that isoquants are convex to the origin" [Beattie and Taylor, 1985, p. 29]. Convexity can result from chemical interaction or other synergies (such as network externalities) between factors so that it is not necessarily true that one factor actually "substitutes" for another.

Technical factor substitutability hinges on the relationship among inputs needed to produce a particular output level. Consider a two input production function y = f(x1, x2), where x1 and x2 are physical units of factors of production. The MPP of x1 is simply the first partial derivative, f1 = ∂y/∂x1; likewise, the MPP of x2 is f2 = ∂y/∂x2. The MRTS of x1 for x2 (MRTS12) is derived from the total differential of the production function, dy = f1dx1 + f2dx2. If dy = 0, as it must along an isoquant, then f1dx1 = -f2dx2. The slope of the isoquant is given by dx1/dx2 = -f2/f1. The MRTS12 is the absolute value of the ratio of MPPs, MRTS12 = f2/f1. The MRTS12 tells how many more units of x1 would be needed to hold output constant, while taking away one unit of x2. The MRTS and isoquant curvature are related to the technical substitutability of factors as Figure 2-5 illustrates.
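These relationships can be checked numerically. The sketch below assumes an illustrative Cobb-Douglas form (a hypothetical choice, not from the text), computes the MPPs by finite differences, and confirms that trading inputs at the MRTS holds output approximately constant:

```python
# Numeric check of the MRTS derivation, using an illustrative
# Cobb-Douglas technology y = x1**0.3 * x2**0.7 (hypothetical form).

def f(x1, x2):
    return x1 ** 0.3 * x2 ** 0.7

def mpp(g, x1, x2, h=1e-6):
    """First partial derivatives (MPPs) by central finite differences."""
    f1 = (g(x1 + h, x2) - g(x1 - h, x2)) / (2 * h)
    f2 = (g(x1, x2 + h) - g(x1, x2 - h)) / (2 * h)
    return f1, f2

x1, x2 = 4.0, 9.0
f1, f2 = mpp(f, x1, x2)
mrts12 = f2 / f1   # units of x1 needed to replace one unit of x2

# Moving along the isoquant: give up a small amount of x2 and add
# mrts12 times that amount of x1; output stays approximately constant.
dx2 = -0.001
dx1 = -mrts12 * dx2
print(round(mrts12, 4))
print(round(f(x1 + dx1, x2 + dx2) - f(x1, x2), 8))  # approximately zero
```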

Technological change can be brought into the production function more explicitly. In a conventional approach, a three input production function, y = f(x1, x2, T), could use T as time (a proxy for technological change). A positive rate of technical change, given by ∂y/∂T > 0, would assume that all changes in output over time that are not the result of varying x1 or x2 are due to progressive technical change. However, as Solow points out, ". . . there is no reason why T should not change by discrete jumps, or from place to place, or from entrepreneur to entrepreneur" [Solow, 1967, p. 28]. This specification implicitly treats technological change as an exogenous residual.

Figure 2-5: Isoquant curvature and technical factor substitutability.

Alternatively, a production function could include inputs of a particular technology as explicit endogenous factors rather than as an exogenous technological change residual. The path dependent, evolutionary, and endogenous growth schools use such approaches to model non-conventional technological inputs such as information, knowledge, or communication in a production process. The idea is that technology can change along with other inputs so that the VLR-LR distinction is different from the conventional case. The approach is best applied to multi-output cases.

T could be physical units (or a vector of parameters) representing a technology. Then, a particular technology's effect on other inputs could be evaluated individually and jointly. However, the generality of such an alternative view of the production function can violate conventional assumptions such as perfect certainty, continuous differentiability, monotonicity, and concavity. Relaxing these assumptions can lead to intractable empirical results.

No graph can show every way that technical factor substitutability can be altered by technological change. This is because, as Chambers notes, "there is no correct answer or measure of the degree of substitutability between any two inputs" [Chambers, 1984, Ch. 1, p. 18]. In an n-input case, the problem concerns the percentage change in input xi that would result from a one percent change in the amount of input xj.

Several unit-free measures of the degree of substitutability among inputs exist, including the direct elasticity of substitution, the Allen partial elasticity of substitution, the Samuelson total elasticity of substitution, and the Morishima elasticity of substitution. The direct elasticity is a short-run measure of substitutability between input i and input j, holding all other inputs fixed, while the Samuelson elasticity measures the substitutability of input i against all other inputs. The Allen elasticity of substitution is a measure of relative input change along an isoquant since it measures the elasticity of the ratio (x1/x2) with respect to the MRTS12 with output held constant. Diverse elasticities of substitution complicate comparisons of how a technological change affects input substitutability.
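For concreteness, an elasticity of this kind can be sketched numerically. The example below assumes an illustrative Cobb-Douglas technology, for which every elasticity-of-substitution measure is known to equal one, and measures the arc elasticity of the input ratio with respect to the MRTS between two points on the same isoquant:

```python
# Arc elasticity of substitution along an isoquant, sketched for an
# illustrative Cobb-Douglas technology y = x1**a * x2**b (for which
# the elasticity of substitution equals one).
import math

a, b, y0 = 0.3, 0.7, 10.0

def x2_on_isoquant(x1):
    """Solve y0 = x1**a * x2**b for x2, holding output fixed at y0."""
    return (y0 / x1 ** a) ** (1 / b)

def mrts12(x1):
    """Ratio of MPPs, f2/f1 = (b*x1)/(a*x2), along the isoquant."""
    return (b * x1) / (a * x2_on_isoquant(x1))

def ratio(x1):
    """Input ratio x1/x2 along the isoquant."""
    return x1 / x2_on_isoquant(x1)

# Elasticity of the input ratio with respect to the MRTS, measured in
# logs between two points on the same isoquant.
p, q = 2.0, 4.0
sigma = (math.log(ratio(q)) - math.log(ratio(p))) / \
        (math.log(mrts12(q)) - math.log(mrts12(p)))
print(round(sigma, 6))  # 1.0 for Cobb-Douglas
```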

2.3.2.4 Technical factor interdependence, separability

The next technical concept of production, technical factor interdependence, is another way technological change can influence production. Economists often do not differentiate between technical factor substitutability (where output is held constant) and technical factor interdependence (where output is not held constant). Beattie and Taylor state that "Two factors are technically independent if the MPP of one is not altered as the quantity of the other is changed" [Beattie and Taylor, 1985, pp. 32-33]. In the two input case there are three technical factor interdependencies, given the production function y = f(x1, x2). Each depends on the sign of the continuous second cross partial derivative, f12 = ∂²y/∂x1∂x2, as follows:

Factors are technically independent when f12 = 0, so that the MPP of one factor is not affected by changes in the physical amount of the other. The technically complementary case occurs if the cross partial f12 > 0; increasing x1 raises the MPP of x2. The case of technically competitive inputs occurs when f12 < 0; there, increasing x1 reduces the MPP of x2.
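The three interdependence cases can be classified mechanically from the sign of f12. The sketch below uses invented, hypothetical technologies, one for each case, and estimates the cross partial by finite differences:

```python
# Classifying technical factor interdependence by the sign of the second
# cross partial f12, using illustrative (hypothetical) technologies.

def f12(g, x1, x2, h=1e-4):
    """Second cross partial derivative by central finite differences."""
    return (g(x1 + h, x2 + h) - g(x1 + h, x2 - h)
            - g(x1 - h, x2 + h) + g(x1 - h, x2 - h)) / (4 * h * h)

def classify(g, x1, x2):
    c = f12(g, x1, x2)
    if abs(c) < 1e-6:
        return "technically independent"
    return "technically complementary" if c > 0 else "technically competitive"

additive     = lambda x1, x2: x1 ** 0.5 + x2 ** 0.5      # f12 = 0
cobb_douglas = lambda x1, x2: x1 ** 0.3 * x2 ** 0.7      # f12 > 0
competing    = lambda x1, x2: 10 * (x1 + x2) - x1 * x2   # f12 < 0

for name, g in [("additive", additive), ("Cobb-Douglas", cobb_douglas),
                ("competing", competing)]:
    print(name, classify(g, 4.0, 9.0))
```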

The technological change could be biased toward one factor or be Hicks neutral. In the (Hicksian) factor neutral case, a new technology leads to increased output without changing factor proportions. If the MRTS12 is independent of T, then technical change is factor-neutral. Isoquants retain the same curvature and position, but represent larger values over time because both factors in the same combination are able to produce more output. Hicksian factor-neutral technical change requires that, given a fixed factor proportion, ∂(MRTS12)/∂T = 0. In other words, T is separable from both x1 and x2 in production, so that y = f(x1, x2, T) can be written as y = g(T)h(x1, x2). With Hicks-biased technical change, a non-homothetic isoquant shift occurs so that technology will change the relative contribution of inputs to production. Isoquant curvature remains the same, but positions differ. This will be covered further during the discussion of the economics of production and factor neutrality (2.3.3.2).
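Factor neutrality is easy to verify for a separable technology. The sketch below assumes a hypothetical multiplicatively separable form g(T)h(x1, x2) and shows that the MRTS at a fixed input bundle does not move as T advances, even though output grows:

```python
# Hicks factor-neutral technical change: with a multiplicatively
# separable technology y = g(T)*h(x1, x2) (a hypothetical form), the
# MRTS at any fixed input bundle is unaffected by advances in T.

def output(x1, x2, T):
    tech = 1.05 ** T              # technology index g(T), grows with T
    core = x1 ** 0.3 * x2 ** 0.7  # illustrative Cobb-Douglas h(x1, x2)
    return tech * core

def mrts12(x1, x2, T, eps=1e-6):
    """MRTS12 = f2/f1, with MPPs by central finite differences."""
    f1 = (output(x1 + eps, x2, T) - output(x1 - eps, x2, T)) / (2 * eps)
    f2 = (output(x1, x2 + eps, T) - output(x1, x2 - eps, T)) / (2 * eps)
    return f2 / f1

x1, x2 = 4.0, 9.0
print(round(mrts12(x1, x2, T=0), 6))   # MRTS before the change
print(round(mrts12(x1, x2, T=10), 6))  # identical MRTS ten periods later
print(round(output(x1, x2, 10) / output(x1, x2, 0), 3))  # yet more output
```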

If the efficiency or effectiveness of a particular input alone increases with T, the technical change is said to be factor augmenting. Rather than considering whether technical change alters factor proportions, this approach looks at how technological changes alter the elasticity of substitution and differential rates of input quality. For example, the education of farmers may be land augmenting if land in the hands of an educated farmer is capable of wider technical possibilities. The technological change (due to education, but measured by the passage of time) becomes embodied in land but not in other inputs. In such a case, a measure of the rate of technical change can be decomposed into the individual rates of change per input (a scale expansion effect) and a pure shift effect [Chambers, 1984, Ch. 5, p. 7]. This is shown graphically in Figure 2-12 in (2.3.3.2).

Technical factor interdependence is important to the tenets of network economics because of the inherent complementarity of network hardware and software. The path dependent school is based on the technical independence of competing pathways of technological development. Chapter 3 will cover economic and technical foundations of communications networks in further detail.

2.3.2.5 Returns to scale and scale neutrality of technology

Returns to scale (isoquant spacing) is the third technical aspect of production. Returns to scale can be global or local, and constant, increasing, or decreasing. According to Silberberg, constant returns to scale "means that if each factor is increased by the same proportion, output will increase by a like proportion" [Silberberg, 1990, p. 93]. Mathematically, given the two input production function y = f(x1, x2) and a constant proportion t > 1, constant returns to scale occur when f(tx1, tx2) = tf(x1, x2). Increasing returns to scale occur when f(tx1, tx2) > tf(x1, x2), while decreasing returns to scale occur when f(tx1, tx2) < tf(x1, x2).
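This definition can be applied directly by scaling both inputs and comparing outputs. The sketch below uses hypothetical Cobb-Douglas technologies whose exponent sums place them in each of the three cases:

```python
# Checking returns to scale by comparing f(t*x1, t*x2) with t*f(x1, x2)
# for illustrative Cobb-Douglas technologies (hypothetical exponents).

def scale_check(f, x1=4.0, x2=9.0, t=2.0):
    scaled, proportional = f(t * x1, t * x2), t * f(x1, x2)
    if abs(scaled - proportional) < 1e-9:
        return "constant returns to scale"
    return ("increasing" if scaled > proportional else "decreasing") \
        + " returns to scale"

decreasing = lambda x1, x2: x1 ** 0.3 * x2 ** 0.5   # exponents sum to 0.8
constant   = lambda x1, x2: x1 ** 0.3 * x2 ** 0.7   # exponents sum to 1.0
increasing = lambda x1, x2: x1 ** 0.6 * x2 ** 0.7   # exponents sum to 1.3

for name, f in [("sum 0.8", decreasing), ("sum 1.0", constant),
                ("sum 1.3", increasing)]:
    print(name, scale_check(f))
```

For a Cobb-Douglas form the exponent sum is the degree of homogeneity r discussed below, so the three functions land in the three cases exactly as the definition predicts.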

Figure 2-6 compares increasing and decreasing returns to scale. Isoquants become more closely spaced as inputs are proportionally increased under decreasing returns to scale. Under increasing returns to scale, isoquants become spaced farther apart as inputs are proportionally increased.

Figure 2-6: Returns to scale: decreasing (right) and increasing (left).

Often, an important adjective (such as marginal or average) is left out of the description, leaving a certain "untidiness" in the use of the term "returns to scale" [Beattie and Taylor, 1985, p. 47]. Whether a distinction between marginal and average returns to scale exists depends on whether the function is homogenous or non-homogenous, or (stated another way) exhibits constant or variable proportional returns. Consider the production function y = f(x1, x2). Suppose again that each factor is increased by a like proportion, t. The function is homogenous of degree r if f(tx1, tx2) = t^r f(x1, x2), where r is a constant and t is any positive real number.

Homogenous functions (of which linear homogeneity, r = 1, is a special case) require no distinction between marginal and average returns to scale because "both average and marginal returns everywhere increase, everywhere decrease, or everywhere remain constant" [Beattie and Taylor, 1985, p. 47]. All such homogenous functions exhibit constant proportional returns, so the distinction between average and marginal returns to scale is unnecessary. The degree of homogeneity, r, shows the returns to scale that the function exhibits: if r > 1, increasing returns to scale exist; if r < 1, decreasing returns to scale exist; and if r = 1, constant returns to scale exist. Technologies are scale neutral under constant returns to scale, larger scales are favored under increasing returns to scale, and smaller scales are favored under decreasing returns to scale. Just as in the single input case discussed along with Figure 2-4, the economic implications of such scale economies depend on economic variables and on whether the function is homothetic.

Returns to scale and non-homotheticity are important to "new" economists, network economics, and the path dependent view. One way non-homotheticity can arise is through non-conventional, non-rival inputs such as technical designs. Romer provides a classic example:

if F(A,X) represents a production process that depends on [an exhaustive list of] rival inputs X and non-rival inputs A, then by a replication argument, it follows that F(A, λX) = λF(A,X). . . . If A is productive as well, it follows that F cannot be a concave function because F(λA, λX) > λF(A,X). [Romer, 1990, p. S76]

Romer's original focus was on the macroeconomy, so the argument "neglects integer problems that may be relevant for a firm that gets stuck between n and n+1 plants", but it is motivated by a simple example [Romer, 1990, p. S76]. A firm can invest 10,000 hours of engineering time to produce a design (a non-rival input) for a 20MB disk drive. If it builds a $10 million factory and hires 100 employees, it can produce 100,000 drives (two million MB of storage) per year. By replicating the rival inputs (factory and employees), output doubles to four million MB. Suppose instead the firm had used 20,000 hours of engineering time, which would have led to a design for a 30MB disk drive that could be made by the same factory and workers. With all inputs doubled, the 20,000-hour design, two factories, and 200 employees produce six million MB, three times the original amount [Romer, 1990, pp. 254-255].
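The replication arithmetic can be laid out directly (100,000 drives at 20MB each is 2 million MB per year): doubling only the rival inputs doubles output, while doubling the design effort as well triples it.

```python
# The arithmetic of Romer's disk-drive example, separating the non-rival
# design input from the rival inputs (factory and employees); each
# factory with 100 workers turns out 100,000 drives per year.

DRIVES_PER_FACTORY = 100_000

def annual_storage_mb(mb_per_drive, factories):
    """Total MB of storage produced per year: replicating rival inputs
    replicates drives, while a better design raises MB per drive."""
    return mb_per_drive * DRIVES_PER_FACTORY * factories

base    = annual_storage_mb(20, factories=1)  # 10,000-hour design
doubled = annual_storage_mb(20, factories=2)  # rival inputs replicated
better  = annual_storage_mb(30, factories=2)  # all inputs doubled

print(base, doubled, better)  # 2000000 4000000 6000000
print(better / base)          # 3.0: doubling all inputs tripled output
```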

Examples such as this are used (along with arguments about network effects) to suggest modifications to production economics that would make it more consistent with the technologies and organizations of the information economy. The picture is incomplete when erroneous conclusions are drawn about the technical or economic implications (alone or together) of technology. More discussion will be found in sub-section 2.3.3 (when returns to scale and returns to size are compared), as well as in Chapter 3 when network economics is covered.

2.3.2.6 Returns to scope

The theory of multiple product production offers another concept, returns to scope, that is also important to technology's technical role in production. Factors in multiple product production are of two types: allocable or non-allocable. In the allocable case, units used to produce one product are distinguishable from units of the same factor used to produce another product. In the non-allocable case, the amount of each factor used to produce different products cannot be separated. Multiple products may be produced jointly (perhaps as by-products such as beef and hides) or non-jointly. Beattie and Taylor [1985, pp. 179-184] and Henderson and Quandt [1980, pp. 92-103] present basic treatments, while Färe [1995] provides a detailed discussion.

A technology offers increasing returns to scope when it is technically more efficient to produce two or more products together than to produce each separately. In a multi-product agribusiness, it is not difficult to imagine either hypercommunications or information as examples of non-allocable (or indirect) factors for which increasing returns to scope would be theoretically likely. Hypercommunication technologies enable multiple products to use infrastructure that previously delivered a single product, so that overall production becomes technically and economically more efficient. The ability to place a fax machine on a telephone line shared with voice calls is an early example. A more recent example is ADSL (Asymmetric Digital Subscriber Line) service, in which the same copper telephone line is conditioned so that Internet browsing, telephoning, and faxing occur simultaneously through more efficient packing of each kind of signal in the wire circuit.
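The ADSL case can be sketched as a cost comparison in which the conditioned line is the shared, non-allocable input. All figures below are invented for illustration; the point is only the structure of the comparison, joint versus stand-alone provision:

```python
# A hypothetical cost sketch of scope economies from a non-allocable
# input: one conditioned line (fixed monthly cost) carries voice, fax,
# and Internet service simultaneously.  All figures are invented.

LINE_COST = 30.0                                  # per line per month
SERVICE_COST = {"voice": 5.0, "fax": 3.0, "internet": 8.0}

def cost(services, shared_line=True):
    """Cost of delivering the listed services: one shared line, or one
    dedicated line per service when provisioned separately."""
    lines = 1 if shared_line else len(services)
    return LINE_COST * lines + sum(SERVICE_COST[s] for s in services)

joint = cost(["voice", "fax", "internet"])
separate = sum(cost([s], shared_line=False) for s in SERVICE_COST)
print(joint, separate)  # joint production is cheaper: scope economies
```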

2.3.3 Technology and Production: Four Topics in the Economics of Production

Until now, emphasis has been on the first role played by technology in the information economy, the technical aspects of production, without considering the second role, the economics of production. Before considering four topics in the economics of production, it is important to understand how new technologies have created a new economics. Since new versions of theory are created at a slower rate than new versions of high technology, the received view in economics is itself under pressure to keep up. Such technological pressures mean that the lagging economic principles of production can be overshadowed by headlines of new technological possibilities. A technical, economics-free view of new technology has become a fashionable norm in popular discussions and shoestring analyses of the high-tech businesses and industries that make up the information economy.

In agribusiness as well, production's technical side may overshadow the economic side for several reasons. First, new technologies seem to be introduced almost daily, changing almost every aspect of distribution, sales, and processing-production. These include direct biotech inputs such as new genetic seeds and disease-resistant plant varieties as well as new product technologies that create new outputs such as smart foods (nutraceuticals). Indirect inputs such as IT and hypercommunications are another class of technologies, designed to help agribusinesses market, manage, or finance operations better. The rapidity of technological progress makes it hard for management to track new technologies that will best help a particular operation, necessitating improved communications and highly trained technical staff. Second, it is hard for managers to follow (much less to filter) the claims of technically trained purveyors of new technology. This can cause either the rejection of a worthy and profitable new technology or the acceptance of an unnecessary and costly one due to managerial discomfort with the technical dimensions. Either way, the decision may be based more on what competitors do than on economic analysis. Third, public perceptions of possible dangers of new agricultural technologies (especially biotechnologies) must also be understood.

Importantly, a new technology may be misconstrued by economists because of their own unfamiliarity with the scientific and technical details, or because conventional economic theory may not admit the existence of that technology. Unless a particular technology is "well-behaved" (consists of an axiomatically constructed production possibility set and separable input requirement set), conventional economic theory cannot do its best job of analysis. According to the evolutionary and path dependent schools, conventional economics is ignoring the technologies of the information economy.

This tendency of economists to restrict technologies within well-behaved boundaries may be especially consequential for many technologies involved in the information agribusiness economy. IT, biotechnology, and hypercommunication technologies are often considered poorly behaved. In 1985, Business Week called economists an "endangered species" because their abstract views of technology omit scientific and business reality. Giarini and Stahel support this, saying: "This is highlighted in e.g. the definition of economists as people who look at something in practice and ask if it will work in theory" [Giarini and Stahel, 1989, p. 121].

However, as the four topics in this sub-section will show, economics is reasonably robust to rule breaking technologies. This observation assumes economists will avoid the mistake of trusting in a "current economic model" that "refers to scientific principles which science has long ago abandoned" [Giarini and Stahel, 1992, p. 5]. The emergence of a new definition of returns to scale from network economics along with work on non-conventional factors and nascent production technologies in the evolutionary school [Auerswald et al., 1998] form the basis for the modified approach.

2.3.3.1 Returns to size, size neutrality of technological change

The difference between returns to scale and returns to size is at the core of the technical-economic distinction within production. The difference (when applicable) will establish returns to size as an economic outcome of technological change in production. It is also an important reason the four research agendas used by economists to study technological change disagree. Beattie and Taylor explain the difference:

Returns to size, unlike returns to scale, is not exclusively a technical issue: return to scale has to do with expansion of output in response to an expansion of all factors in fixed proportion--a movement along a factor beam. Returns to size has to do with a proportional change in output as factors are expanded in least-cost or expansion path proportions . . . only for the special case of constant proportional returns will the expansion path be a factor beam. Since combinations of factors in least-cost proportion are assumed in deriving the cost functions of the firm, it is actually this concept rather than that of returns to scale that is most important in the theory of firm growth or optimum firm size. Return to size has to do with the economic notion of what is happening (decreasing, constant, or increasing) to costs (average variable cost and marginal cost) as output is expanded . . . the terminology is sometimes inappropriately used interchangeably. [Beattie and Taylor, 1985, pp. 52-53]

The expansion path shows "the long-run increases in . . . operations made possible by increases in the budget" under cost minimization [Mahanty, 1980, p. 181]. The output expansion path comes from solving the first order conditions of cost minimization simultaneously and then solving the resulting relation for the cost minimizing level of x2. Figure 2-7 illustrates how returns to scale and size can differ by showing two kinds of output expansion paths.
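The expansion path can be traced numerically. The sketch below assumes a hypothetical homothetic Cobb-Douglas technology, for which the first order conditions reduce to a ray from the origin, so the least-cost factor proportion is the same at every output level (the left-panel case of Figure 2-7):

```python
# Tracing the output expansion path under cost minimization for an
# illustrative Cobb-Douglas technology y = x1**a * x2**b (hypothetical
# parameters and prices).  The first order conditions reduce to the ray
# x2 = (b*w1)/(a*w2) * x1, so the cost-minimizing factor proportion does
# not vary with output.

a, b = 0.3, 0.7       # technology parameters (assumed)
w1, w2 = 4.0, 2.0     # factor prices (assumed)

k = (b * w1) / (a * w2)   # slope of the expansion path: x2 = k * x1

def least_cost_bundle(y):
    """Solve y = x1**a * (k*x1)**b for x1; x2 follows from the ray."""
    x1 = (y / k ** b) ** (1 / (a + b))
    return x1, k * x1

for y in (5.0, 10.0, 20.0):
    x1, x2 = least_cost_bundle(y)
    # x2/x1 stays fixed at k while output and the budget expand
    print(round(x1, 3), round(x2, 3), round(x2 / x1, 3))
```

A non-homothetic technology would instead yield a bundle whose x2/x1 ratio varies with y, which is exactly the right-panel case of Figure 2-7.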

Figure 2-7: Returns to scale and returns to size differ when (as shown in the right panel) a technology does not exhibit a linear output expansion path

The left panel of Figure 2-7 depicts a homogenous production function with constant returns to scale, where a ray from the origin is both output expansion path and factor beam. Along the ray, the proportion contributed to total product by each factor (factor share) is constant. Simultaneously, as output rises from y0 to y1 to y2, this constant proportion also represents the cost minimizing input bundle. Though the non-homothetic production function in the right panel of Figure 2-7 also exhibits constant returns to scale, the cost minimizing input bundle differs from a constant proportion ray (factor beam) as output is increased. The proportion a cost minimizing firm will allocate to factor two is greater at output y0 than at y2. Stated differently, the MRTS in the neighborhood of the optimum varies as output varies. A non-homothetic technology can have increasing, decreasing, or constant returns to scale (globally or locally) and have either a linear or non-linear output expansion path.

With a homothetic technology, a particular return to scale (shown by isoquant spacing) automatically implies a particular return to size (shown by average unit cost as output rises). This mapping from technical to economic is shown in Figures 2-8 through 2-10. Figure 2-8 shows decreasing returns to scale on the left. Since a homothetic technology produced it, the economic implication is a diseconomy of size (decreasing returns to size) as shown in the right panel. Note that the LRAC (Long Run Average Cost) increases as output increases, meaning that the technology favors a smaller size of operation.

Figure 2-8: For homothetic technologies, decreasing returns to scale implies diseconomies of size

The SRAC (Short Run Average Cost) reaches a minimum at approximately two units, suggesting an optimal SR operational size for a cost minimizer. If a new homothetic technology with decreasing returns to scale moved isoquants inward, the economic implications would be seen through a downward movement of LRAC and a downward (rightward) movement of SRAC.

In Figure 2-9, constant returns to scale implies a constant LRAC (constant returns to size), so that the technology is neutral as to size. However, SRAC is minimized at approximately five units, suggesting an optimal least cost size of operation in the SR. In Figure 2-10, increasing returns to scale implies economies of scale and, therefore, increasing returns to size. Both SRAC and LRAC continue to fall as output increases.

Figure 2-9: For homothetic technologies, constant returns to scale implies constant returns to size.
Figure 2-10: For homothetic technologies, increasing returns to scale implies economies of size.

A before and after comparison of a homothetic technological change for either technology from Figure 2-9 or 2-10 could be made (using either economic cost or technical production data). The result could suggest whether technological change was biased as to size (or equivalently, scale). The beauty of the economic dual approach is that by using only the cost data, a before and after technology comparison could be made.

However, returns to scale can be increasing, decreasing, or constant for non-homothetic production functions as well. While non-homothetic technologies may exhibit isoquant spacing similar to the left panels of Figures 2-8 through 2-10, they do not imply the same mapping to economic returns to size. The term increasing returns to scale is sometimes used in the network economics literature as a synonym for increasing returns to size, even though homotheticity cannot be assumed automatically for biotechnologies, information technologies, and service producing technologies. This creates a problem if economists hope to re-create the technology using the cost function data or if production engineers wish to understand the economic implications of a non-homothetic technology. The implications of non-homothetic technologies for returns to size will be revisited in 2.3.3.4 with a graphical example.

2.3.3.2 Factor neutrality, factor augmentation, and technical change

There are two other economic effects of technology in production. The first, factor neutrality, assesses whether a technological change is biased in favor of a particular factor or factors or is instead Hicks neutral. Here, the question centers on whether technical change alters relative factor shares through the MRTS [Nadiri, 1970]. Then there is the question of whether the new technology is also (or instead) factor augmenting, where the efficiency or effectiveness of a particular input alone increases due to technological change. Rather than considering if the MRTS is independent of technical change, this approach looks at how technological changes affect the elasticity of substitution and differential rates of embodiment (changes in input quality). These ideas were originally brought up as technical effects of production in 2.3.2.4 during the discussion of technical factor interdependence and separability.

Changes in factor shares due to changes in a homothetic technology can be shown by graphically comparing output expansion paths. Figure 2-11 shows how a technological change alters isoquant position and the optimal amount of each factor hired given a constant budget. In each case in Figure 2-11, a new technology has led to a new output Q2 that exceeds the output Q1 under the old technology. The new, higher Q2 isoquant is shown superimposed on the old, lower Q1 isoquant. Each panel differs by how technological change affects the producer's optimal input ratio, which is constant for all output levels.

Figure 2-11: Output expansion paths compared among labor saving, capital saving, and neutral technologies

The left-hand panel of Figure 2-11 shows a capital using (labor saving) innovation increasing the optimal input ratio (K/L) so that an increase in output (Q2 > Q1) can occur with the same budget. More capital is used than before the new technology, but less labor. The right-hand panel shows that a labor using (capital saving) innovation decreases (K/L); less capital is hired and more labor used under the new technology than the old one, but output is again higher for the same budget. In the center panel, output rises, but the new technology has the same optimal ratio. In this case, the new technology is factor neutral.
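The three panels can be mimicked with an assumed Cobb-Douglas technology; the functional form and numbers below are illustrative, not taken from the figure. With a fixed budget B and input prices r and w, a firm producing y = A * K**a * L**(1 - a) optimally spends the budget share a on capital, so a technical change that raises a is capital using (labor saving), one that lowers a is labor using (capital saving), and one that only raises A is factor neutral.

```python
# Illustrative sketch of Figure 2-11 (assumed Cobb-Douglas form and numbers).
# With budget B and prices r, w, the firm producing y = A * K**a * L**(1 - a)
# optimally spends share a on capital: K = a * B / r, L = (1 - a) * B / w.
def optimum(A, a, B=100.0, r=1.0, w=1.0):
    K, L = a * B / r, (1 - a) * B / w
    return K, L, A * K ** a * L ** (1 - a)

K0, L0, y0 = optimum(1.0, 0.5)            # old technology: K/L = 1, y = 50
for name, A, a in (("capital using (labor saving)", 1.1, 0.6),
                   ("factor neutral", 1.1, 0.5),
                   ("labor using (capital saving)", 1.1, 0.4)):
    K, L, y = optimum(A, a)
    print(f"{name}: K/L = {K / L:.2f} (was {K0 / L0:.2f}), y = {y:.1f} (was {y0:.1f})")
```

In every case output rises on the same budget, but the optimal input ratio moves up, stays put, or moves down with the direction of the factor bias.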

Normally, changes in equilibria can be demonstrated through comparative statics. However, the efficacy of comparative statics depends on model specifications, and whether technological changes occur gradually and continuously or suddenly and discretely. As an example, technological change can be decomposed graphically (in a simple case) as shown in Figure 2-12.

Figure 2-12: Graphical decomposition of the effects of technological change.

The before technological change panel, on the left, shows three isoquants (y0 < y1 < y2) and two isocost lines B1 < B2 and a solid linear output expansion path. In the left panel, the firm can afford B2 and wishes to minimize the cost of production, so that it ends up producing y1 units of output at point a. The right panel depicts a factor augmenting technological change. It also has isocosts B1 and B2 (since prices are assumed fixed), but a new output expansion path and a new set of isoquants (the new y1 equals old y1 in output units). The old output expansion path and isoquants are shown as dotted lines, establishing the location of point a. Note that unlike Figure 2-11, both isoquant curvature and position have been altered by the technical change in Figure 2-12.

An important question is precisely how the firm's behavior might change due to the technological change. The problem could be defined as how the firm moves from point a to point z, the new cost minimizing bundle. It is helpful to understand what has (and has not) changed in Figure 2-12 due to the technical change. The first change is that the input combinations that create a given level of output, depicted by the isoquant positions, have changed though the output quantities have not. The second change is in the curvature of the isoquants. The third change is that the MRTS12 has risen, meaning that it takes more units of x1 to hold output constant (while taking away one unit of x2) than it did before the technical change. While prices and isocost curves are identical, it is as if there has been a fourth change (from y0 to y1) because the firm could maintain the original output level at y1 and pocket the savings [Nadiri, 1970].

The order in which these changes might occur is a separate question. Movement from a to point z represents the change in output level and isoquant position, the change in isoquant curvature, and the change in MRTS12 (constant isocost). Movement from a to point g shows the changes in isocost, isoquant curvature, and MRTS12 (constant output). Movement from g to z shows the change in output and isocost, with constant MRTS and isoquant curvature. The change in MRTS is called a bias or shift effect, the change in output (from one isoquant to another of the same curvature) is called an output effect, and the movement from one isoquant curvature to another at the same output level is called a scale expansion effect.

2.3.3.3 Economic factor interdependence

A fifth topic in the economic aspects of technology in production is the economic interdependence of factors. Beattie and Taylor (1985) distinguish the economic interdependence of factors from the technical interdependence of factors. Recall that in the discussion of technical factor interdependence, output was not held constant. Now, under economic factor interdependence, the concern is what happens to the amount of a factor hired as the prices of other factors change, also when output is not held constant. "A factor of production x1, is economically complementary to, independent of, or competitive with another factor x2, if an increase in r2 decreases, does not affect, or increases the use of x1" [Beattie and Taylor, 1985, p. 123].

Consider a single output case with two factors of production. Suppose a factor demand for the input x1 is calculated from simultaneous solution of the first-order conditions for profit maximization. The factor demand equation which satisfies the first-order and second-order conditions for profit maximization might be x1* = x1*(r1, r2, p), where x1* is the profit maximizing amount of factor 1 hired, r1 is the input price of x1, r2 is the input price of x2, and p is the output price.

That is, if ∂x1*/∂r2 < 0, then x1 and x2 are economically complementary. If instead ∂x1*/∂r2 = 0, then x1 and x2 are economically independent, while if ∂x1*/∂r2 > 0, then x1 and x2 are economically competitive. Demand elasticities or the derivatives themselves could be compared before and after the introduction of a new technology. The comparison could reveal essential economic effects of the new technology without the precise form of the production function.
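The sign test can be sketched numerically, assuming an illustrative Cobb-Douglas technology y = x1**a * x2**b with a + b < 1 (decreasing returns, so a profit maximum exists). Solving the two first-order conditions in closed form and differencing on r2 classifies the inputs; for this assumed form the cross-price derivative is negative, so the two inputs are economically complementary in Beattie and Taylor's sense.

```python
# Factor demand x1*(r1, r2, p) for the assumed technology y = x1**a * x2**b,
# a + b < 1, obtained by solving the first-order conditions
#   a * p * y / x1 = r1   and   b * p * y / x2 = r2   in closed form.
def x1_star(r1, r2, p, a=0.3, b=0.3):
    d = 1 - a - b
    return (a * p / r1) ** ((1 - b) / d) * (b * p / r2) ** (b / d)

# The sign of the finite-difference cross-price derivative dx1*/dr2
# classifies the factors: complementary (< 0), independent (= 0),
# or competitive (> 0).
h = 1e-6
deriv = (x1_star(1.0, 1.0 + h, 2.0) - x1_star(1.0, 1.0, 2.0)) / h
print(f"dx1*/dr2 = {deriv:.4f}")
```

Recomputing such a derivative under old and new parameter values is the kind of before-and-after comparison described in the text, and it requires only the factor demand, not the full production function.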

The examples from Figures 2-11 and 2-12 could be altered to show the economic effect of a new technology in three ways. The effect of a new technology could be direct, indirect, or both from the firm's viewpoint. Technology could directly change the production surface or technology could indirectly change the relative prices of the inputs (and hence the amounts of factors hired), or both.

In the first case, a factor biased, factor augmenting, or joint effect could directly change the shape of the firm's production function. In the second case, the least cost input combination (but not proportion) needed to produce a fixed output level could change indirectly due to a technological change. This could arise from changes in the relative prices of the inputs because of their use with the new technology in other industries and firms.

Assuming the economics adequately capture the technology, the economic approach provides better information than a purely technical approach does about the role of technology in information economy production. However, the economic implications of new technologies could be missed if economists misdiagnose a new technology because the technical aspects are either not understood or inadmissible. A review of the literature suggests this problem is being aggressively pursued through modifications in the economics of production (covered next) and through broader managerial economics approaches covered in 2.3.4.

2.3.3.4 Modifications to production economics

One view of conventional theory is that the economics of production drives the technology of production. By obtaining costs as a function of output level and input prices, it is often assumed that the indirect production function may be recovered. This is mathematically true of homothetic technologies and useful in empirical analyses, but alternative schools argue that it has become a direction of causality instead of an approach to measurement.

Alternative research agendas seek to whittle modifications into the economics of production using technologies of the information economy, rather than jam the square pegs of non-conventional inputs, outputs, and firms into round holes left for them by industrial age economics. Under alternative approaches, many problems in production economics depend on the technology of production more than technology depends on economics.

Three examples of how new technologies are driving modifications in the economics of production are discussed. First, six general modifications to the economics of production proposed by alternative schools are examined. Second, a modified definition of increasing returns to scale from the network economics field is introduced and analyzed. Third, these two areas are followed by a graphical example of a non-homothetic technology.

First, new technology is driving modifications in the economics of production so it fits the information age rather than the industrial age. Critics of standard economics argue that homotheticity (and other assumptions regarding production) are unnecessarily dogmatic. In support of this contention, they point out that the suffix -thetic means something "set forth dogmatically" [Webster's New World Dictionary, college ed., 1960, p. 1962]. Both evolutionary and path dependent theorists have used managerial economics, network economics, learning theory, and other disciplines as source material for a new, modified production economics with broader implications than homotheticity alone.

The new approach modifies existing theory in six ways. First, there are multiple goals of the modifications. These include providing "a microeconomic basis for explaining technological evolution", [Auerswald et al., 1998, p. 1] generating behavioral rules for the firm based on "bounded rationality", [Ballot and Taymaz, 1999] and predicting a process innovation path. Such modifications to the production function would allow the firm to invent new products and new inputs, innovate new production processes, mutate existing ones, and cross over from one production plan to another during ever shorter decision periods.

Second, the assumption of perfect certainty is relaxed in favor of a learning by doing [Arrow, 1962; Young, 1993] and a production recipes approach [Nelson and Phelps, 1966; Nelson and Winter, 1982]. Thus, memory, interaction, trial-and-error, and engineering experience are part of the process. The modified production function represents an evolutionary and dynamic process rather than a pre-determined, static outcome.

A third modification is that inputs can consist of non-conventional factors of production such as bio-genetic material, human capital and technological knowledge [Caballe and Santos, 1993] or even nascent (undiscovered) technologies [Auerswald et al., 1998]. Technical and economic behavior of such non-conventional factors is modeled through mesoeconomic application of macro models to the micro firm and the use of engineering experience [Chenery, 1949; Smith 1961] or learning curves to model nascent technologies.

A fourth way that modifications to the economics of production differ from convention lies in the nature of the firm. The firm changes from a sole decision-maker producing a single undifferentiated commodity or manufactured good into an innovating governance structure that invents and produces multiple physical and "weightless" products and services. Giarini and Stahel link this concept to agribusiness in 1989:

But today, hardware tools and agricultural produce already account for a minor part (even if still a relatively large one) of the work actually done in producing wealth and welfare. Within the most traditional industries themselves, as within agriculture, service type functions predominate. . . . there is evidence of increasing returns of technology precisely in those activity areas which are typically post-industrial or are designated as service functions.

The foregoing is demonstrated in a dramatic way: agricultural overproduction in the U.S. and Europe contrasts with famine in Africa and Asia, because the major economic cost is not in producing these goods, but in storing, transporting, and delivering them. These services account for 80% of the final cost, hence agriculture has become primarily a service activity. [Giarini and Stahel, 1989, p. 1]

Firms use technologies, goods, services, and communication to produce composite services and goods, as well as to produce information and knowledge with which to become more efficient and innovative in the near and distant future. The form of the modified firm ranges from sole proprietorship, to corporation, to what Baarda (1999) calls a "transgenic firm": a loose amalgam of cooperating, semi-independent firms in a supply chain connected through a network of contracts. Hence, the objectives of the modified firm are welded to contractual obligations such as delivery date, quality, and the preservation of downward or upward property rights in the supply chain. These objectives may be both internal and external to the firm. The degree of market power can vary by output, from perfect competition to monopolistic competition.

Fifth, whether the firm minimizes costs, maximizes profits or optimizes something else, observed market prices may not be the relative prices used to determine efficiency. This duality in pricing arises because "the firm is a set of different subcoalitions each with their own objectives" in the evolutionary school [De Vries, 1999, p. ii]. For example, inter-firm or intra-firm externalities (based on risk taking, asymmetric information, or other factors) can allow a market-priced output expansion path to coexist with an output expansion path based on internalized costs and benefits. Under this view, Bill Gates knew that the source code for DOS was worth more than the few thousand dollars he paid for it as an input to produce MS-DOS. Hence, the firm can face several sets of input prices. First, it faces market prices that are assumed in theory to be fixed as production continues during the production period. Second, it faces shadow prices due both to externalities and internal externalities (or "intranalities") [Auerswald et al., 1998, p. 9] that may vary as the firm seeks the optimal production technology within a production period.

Sixth, and finally, along with traditional commodities and manufactured goods, outputs in the information economy can include services, information processing, corporate intelligence, bio-tech breakthroughs, or the transfer of data through networks. Many outputs (such as research and development or corporate intelligence) are not sold in the marketplace. Such outputs are hard to measure, but are important as both a stock and a flow. The firm uses the flow of such non-traditional outputs through the organization as catalysts for innovation during the decision period, and may see the stock of these outputs embodied in its market capitalization on each trading day.

In addition to the six modifications in production economics, new technologies modify the economics of production in a second way, seen through new definitions of "increasing returns to scale" from network economics. Economides defines increasing returns to scale as:

Increasing returns to scale exist when the cost per unit decreases as more units of the good are produced. Recently, the term 'increasing returns to scale' has been used to describe more generally a situation where the net value of the last produced unit [= (dollar value consumers are willing to pay for the last unit) - (average per unit cost of production)] increases with the number of units produced. [Economides, 1998, p. 1]

At first glance, this definition corresponds to the economic concept of returns to size rather than the technically based returns to scale.
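Economides' generalized definition can be made concrete with a toy calculation; the demand and cost numbers below are pure assumptions. Willingness to pay for the last unit rises with the installed base (a network effect) while average cost falls with cumulative output, so their difference, the net value of the last unit, increases with the number of units produced even though nothing has been said about the spacing of isoquants.

```python
# Toy version of the generalized definition: net value of the n-th unit
# = willingness to pay (rising with the installed base) minus average
# cost (falling with cumulative output). All numbers are illustrative.
def net_value(n, wtp0=5.0, g=0.01, c0=20.0):
    wtp = wtp0 + g * n            # network effect raises willingness to pay
    avg_cost = c0 / n ** 0.5      # scale economies lower average cost
    return wtp - avg_cost

print([round(net_value(n), 2) for n in (10, 100, 1000)])
```

Under these assumptions the net value of the last unit rises with output, an "increasing return" defined entirely in terms of value and cost, which is the economic (returns to size) reading rather than the technical one.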

However, new definitions of returns to scale are based on an alternative view of production and technology where, unlike the conventional definition, market power, penetration rate, and demand responses depend on the technology. For example, Arthur (1989) named several properties of increasing returns to scale technologies. Two of these are non-flexibility (short-run policy levers can create long-run lock-in due to switching costs and customer groove-in) and non-predictability ("small", random events are not averaged out or "forgotten" by the dynamics and could decide the outcome). A third property, ergodicity (the opposite of path dependency), occurs when different sequences of events lead to the same market outcome with probability one. Due to these properties, increasing returns to scale technologies are not necessarily path efficient, because network externalities accrue rapidly and at an increasing rate to the technology with the largest installed base and not necessarily to the most efficient technology. That is, they may or may not achieve private or social optima.
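Arthur's non-predictability and lock-in properties can be reproduced with a small simulation; the payoff numbers, arrival probabilities, and network-benefit coefficient below are illustrative assumptions, not Arthur's original parameters. Two agent types arrive in random order and each adopts whichever of two technologies offers the higher private payoff, where a technology's payoff grows with its installed base; once one technology's lead exceeds the other type's intrinsic preference, every subsequent arrival adopts the leader.

```python
import random

def arthur_run(seed, steps=5000, r=0.2):
    """One realization of an Arthur-style adoption model (illustrative
    payoffs): technologies A and B, two agent types arriving at random,
    and a network benefit of r per previous adopter."""
    rng = random.Random(seed)
    n_a = n_b = 0
    for _ in range(steps):
        if rng.random() < 0.5:
            base_a, base_b = 10, 4     # type that natively prefers A
        else:
            base_a, base_b = 4, 10     # type that natively prefers B
        if base_a + r * n_a >= base_b + r * n_b:
            n_a += 1
        else:
            n_b += 1
    return n_a, n_b

# Non-predictability: identical parameters, different random histories,
# different long-run winners.
shares = [arthur_run(seed)[0] / 5000 for seed in range(10)]
print([round(s, 2) for s in shares])
```

In typical runs the final share of technology A ends up far from one half, with different seeds locking in to different winners, so the outcome is decided by the early random sequence of arrivals rather than by any efficiency difference between the technologies.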

Ironically, by focusing on decreasing costs and increasing value, this generalized definition of increasing returns to scale (from one part of network economics) obscures the technological cause. The focus on costs conceals the technological causes identified by others, who argue that "more and more, biological metaphors are useful economic metaphors" since "change in technical systems is becoming more biological" [Kelly, 1998, p. 114]. To these scholars, increasing returns to scale stems from the biology of human systems, to the dismay of Malthus. Much of the "new" economics comes from the chaos-complexity theory of the Santa Fe Institute, where fractals, complexity, and emergent computation are metaphors for how simple systems generate complex order spontaneously [Norton, 1999, p. 15]. At Santa Fe, increasing returns to scale could be rooted in anything from nonlinear nuclear physics to molecular biology and enzyme reactions.

Still others argue that the information age and "rising network society" are grounded in "network enterprises" and "the space of flows" [Castells, 1996]. This argument recognizes the detailed technical literature resulting from operations research and communications network engineering (see Chapter 3) as a technological source of increasing returns to scale. Instead of the deterministic models of economics, network engineers venture into probabilistic, variational, and combinatorial models of production where returns to scale, scope, and system occur. With such technical diversity, it is ironic that a widely used definition of increasing returns to scale appears to be based on value and cost rather than on revolutionary technical change.

The third example of how new technology is driving modifications in the economics of production can be shown graphically. Figure 2-13 displays a non-homothetic technology with decreasing returns to scale, a pseudo-isocost line, and a curved pseudo-expansion path. The term pseudo is applied to emphasize that the traditional cost minimization model (using observed market prices and single output production) is not exactly what is being used in the case at hand.

Figure 2-13: A non-homothetic technology in least-cost proportions

Suppose that in Figure 2-13, x1 represents information processing inputs and x2 represents communication inputs, with the single depicted output being the quantity of corporate intelligence (stock or flow). For non-homothetic technologies, technical and economic returns diverge, though not over all ranges of output or all input price ratios. The associated economic cost relation (a function of multiple output levels and a mix of market and shadow input prices) could show increasing returns to size (unit costs fall as output rises). The figure appears to show a traditional economic cost function as an explicit relationship between one output level and two input prices. However, it comes from a system of implicit functions of many outputs and a sizable vector of input prices, some of which cannot be held constant because of jointness in production and factor non-allocability.

Just as in the conventional case, Figure 2-13 is a simplification of reality. In the figure, the firm is attempting to hire the least cost input combination of information and communication. The units and pricing of these inputs are hard to define, but suppose they have been. According to results from both network economics and the evolutionary school, the isoquant pattern shown could result from biological or network hysteresis or a performance plateau effect for a particular set of interactive input technologies. Biological, genetic, and behavioral relationships between x1 and x2 could create memory or learning effects, or originate decay, temperature, chemical interactions, or other technological properties. Constraints include such limitations as attention and other implications from the economics of information (see section 2.4) that are not explicitly priced in the market.

However, such a technology (whether hypothetical or real) contains too many mysteries for both the economist and the logician to list. First, note the vertical dotted line running through point A, the cost minimizing bundle given the production budget and price ratios. If more units of x2 are hired, output falls instead of rising, implying that more is not better. However, in the neighborhood of A, if fewer units of x2 are hired, output also falls, implying that less is also not any better.

Second, there is the question of replication. For this type of technology to be logically possible, a firm could not simply hire 40 more units each of x1 and x2 to produce A in another location, yielding a total of 2A from 80 units of each input. Some inputs would have to be non-rival. Non-homothetic functional forms have many other empirical implications that worry economists. However, it is important to understand that the problem is framed differently than in the conventional case. Inputs are non-conventional and non-allocable, there are multiple outputs, and the behavioral objectives of the firm (or sub-firm decision making unit) are more complicated in pseudo cost minimization than they are under profit maximization or cost minimization.

The cost of information processing is highly dependent on information technologies, while the cost of communication depends less on the market price of bandwidth than it does on the interaction between the processing technology and human and organizational variables. The internal costs of communication to the firm depend on human variables such as attention and on organizational ones such as specialization, span of control and organizational depth. Costs and benefits can vary through the coordinated interaction of information processing and communication, but are based on an efficient network size, where returns to specialization outweigh all costs of communication.

The evolutionary school would emphasize that the production environment is an uncertain one, especially if the firm is viewed as an interdependent system that learns from its mistakes. Winter argues that the evolutionary model of the economics of production differs in four ways from the conventional or orthodox model. First, the evolutionary firm is a cooperative relationship "among the diverse economic interests organized in the firm" with bounded economic rationality rather than "a unitary actor" that is unboundedly rational [Winter, 1988, pp. 484-487, 493]. Second, evolutionary theory is more capable of explaining and predicting intra-firm organization and behavior than the conventional model which "provides no basis for explaining the organization of economic activity" [Winter, 1988, p. 488]. Third, "textbook orthodoxy fails to provide a basis for understanding the incentives and processes in business firms that produce technological and organizational change" [Winter, 1988, p. 491]. Fourth, boundaries of the firm and the hierarchical economics of managing [Radner, 1992] are always changing in the evolutionary view so that flexible growth based on successful routines is of central importance. As Winter sees it, in the evolutionary school, "the focus of explanatory efforts is on dynamics" [Winter, 1988, p. 492].

The example from Figure 2-13 of non-traditional, non-allocable inputs producing a single output (jointly produced with other outputs) illustrates the arguments of the evolutionary and path dependent schools that many technologies can be non-homothetic. These schools contend that such a relationship could easily arise in the n input case. Such non-homotheticity may hold only within a particular range of input or output levels, such as when an invention is first developed or an innovation first put in practice. This broader view of technology is an example of how new technology has modified the economics of production. However, as is discussed later in the context of measuring technical change, the new approaches are often impossible to implement empirically.

Alchian and Demsetz argue that a "metering problem", that of metering (measuring) input productivity and metering (controlling) rewards, is "not confronted directly" in the conventional analysis of production. Instead, conventional analysis:

Tends to assume that . . . productivity automatically created its reward. We conjecture that the direction of causality is the reverse--the specific system of rewarding which is relied upon stimulates a particular productivity response. [Alchian and Demsetz, 1972, p. 777]

The modifications to the economics of production discussed in this section do not seem to provide answers to the metering problem. Instead, because the modifications are due to technological change and rely on the complicated interaction of observable market prices and unobservable externalities and intranalities, they raise more unanswered questions about the metering problem than existed in the conventional approach.

According to network economists, superior approaches concentrate on the firm as a communications network [Bolton and Dewatripont, 1994] or rely on the theory of teams [Marschak and Radner, 1972; McGuire and Radner, 1986]. The firm seeks to "minimize costs of processing and communicating information" by reducing the cost of communication measured by "the time it takes for an agent to absorb new information sent by others" [Bolton and Dewatripont, 1994, p. 809]. These views of production as a communications network point to the next topic, the managerial role played by technology.

2.3.4 Technology's Managerial Role: Efficiency, Flexibility, and Measurement

Both the conventional and modified economics of production and technical change are different perspectives on the technical-economic distinction. However, they are not necessarily applicable to the same economic problems. Technology's fourth role in the information economy is its use as a managerial tool. Viewed in this way, the technology and economics of production are tied to the manager's broad set of economic problems through the firm's "dominant managerial logic" [Afuah, 1998, pp. 97-99].

Three forms of efficiency (2.3.4.1) are linked through the flexibility of the firm to sometimes forgotten dimensions of managerial economics (2.3.4.2) such as real and pecuniary economies, internal and external economies, and economies of system and scope. These "economies", in turn, are related to the more technical concepts of returns to system, scale, size, and scope. Proceeding in this manner shows that there is truth to Shapiro and Varian's claim that although technology changes, "Economic laws do not. If you are struggling to comprehend what the Internet means . . . you can learn a great deal from the advent of the telephone system a hundred years ago" [Shapiro and Varian, 1998, pp. 1-2]. Their point is not meant to exclude new technology by defending a narrow view of economics, but to define problems of innovation management clearly and to search economics for appropriate applications. Therefore, the measurement of technological change (2.3.4.3) depends on problem definition and the ability to apply a broad set of economic approaches (that have always been part of economics), in addition to introductory undergraduate microeconomics.

Innovation management uses managerial economics to map technological change onto the modern firm. The field of managerial economics is a loosely-defined collection of economic, finance, marketing, and management theory that is used to understand how firms operate and advise managers about how to operationalize economics in the business world. Results from microeconomics, industrial organization (IO), and production and distribution economics are used to analyze technical change along "incremental-radical" and "economic-organizational" axes [Afuah, 1998, p. 29]. Work in this area does not seek to prove or disprove the "economic laws" of Shapiro and Varian's academic world. Instead, innovation management combines management science and managerial economics to relate the managerial theme of technical innovation to the economic theme of profitability [Afuah, 1998].

The failure to distinguish among the managerial dimensions such as size, scope, system, span, and scale leads to confusion. Paris notes that "economies of scale" (whether real or pecuniary) is a "notion that is rather difficult to define and even more problematic to measure" [Paris, 1997, p. 303]. Even Baumol and Blinder (1991, p. 501) mention that "economies of scale" are "also referred to as increasing returns to scale", but they and other authors are actually describing economies of size by the more precise Beattie and Taylor definition. The net result of inaccurate problem definition and failure to consult a wide body of economics is that some economists study only a narrow set of technologies, drawing the ire of evolutionary and path dependent scholars. Imprecise meanings bleed into related concepts of internal and external "economies" that help illustrate how the evolutionary view builds upon existing managerial economics.

2.3.4.1 Technology and market efficiencies

The manager's problem depends on efficiency, in spite of Kelly's twelfth rule for the new economy, the law of inefficiencies [Kelly, 1997, p. 14]. Technology affects both the measurement and definition of efficiency. Ward (1987) outlines three kinds of efficiencies:

Internal efficiency is attained when firms manufacture and distribute their products using the minimum resources necessary. . . .

Allocative efficiency relates to the allocation of resources across markets. It occurs when the marginal conditions are met for all products across all markets. . . .

Finally, . . . dynamic efficiency . . . deals with the optimal allocation of resources over time. This aspect is particularly difficult to evaluate since by its very nature the optimal level is changing as technologies change. [Ward, 1987, p. 210]

When it comes to technological change, it is clearly possible for a firm that insists on making electric typewriters rather than word processing equipment to be internally efficient, but dynamically inefficient. This is the first challenge technological change poses to conventional economics: devising an efficiency measure for the firm that keeps up with (or even anticipates) technological change in the environment. However, it is hardly a new, unanswered challenge.

Drucker's point is that defining efficiency as doing the job well, rather than as doing the right job, can cause any of the three efficiencies to be inaccurate measures of "true" efficiency, regardless of how precisely they are measured empirically. This is a second challenge technological change poses to conventional economics: the accuracy of economic definitions of efficiency.

The third challenge technological change poses to the conventional view of efficiency relates to organizational form. A particular department in a corporation may be internally efficient, but that does not guarantee that this efficiency will be transmitted throughout the entire organization. Even if the first two challenges are answered, efficiency of organization is essential to technical change. For example, a large firm may be unable to adjust internally to external technological change, or to make the internal technological changes it needs to alter external efficiency and reach its objectives.

Internal efficiency was already mentioned in the discussion of the supply-side, or production, aspect of technology. The idea was discussed by Sraffa (1926), who amplified Marshall's view that internal and external "returns" differed because internal and external efficiencies differed. However, internal efficiency depends on the decision period, time horizon, and discount rate. Internal efficiency is often seen as a SR or VSR measure that is easily measured by the firm. However, it can be difficult for the firm to assess how technological change affects internal efficiency.

Allocative efficiency is also related to technology in several ways. First, when allocative efficiency is measured, a distinction is made between cost-based, revenue-based, and profit-based allocative efficiency, depending on whether cost minimization, output maximization, or profit maximization is the yardstick [Chavas and Cox, 1999]. Allocative efficiency is an external efficiency to the firm. Therefore, unless the firm has market power, it will have to make medium term adjustments as the market maintains allocative efficiency.

Dynamic efficiency is also known as adaptive efficiency. The decreasing costs over time of hypercommunication technologies are sometimes considered examples of dynamic efficiency. For example, the price of a three-minute New York to London telephone call had fallen to $21.00 by 1936 [Oslin, 1992, p. 281]. A much higher quality direct-dialed call costs less than $0.50 today. (In real terms, the difference is even more dramatic, since $21.00 in 1936 had the buying power of $260.37 today, meaning the call was 536 times more expensive then than it is now.) Innovation management and managerial economics can be used to consider how internal, external, and dynamic efficiency translate into innovative efficiency, or the ability to retain a particular kind of efficiency given technical change.
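The real-versus-nominal comparison can be sketched as simple arithmetic. This is a minimal illustration using only the dollar figures quoted above; taking the present-day price as exactly $0.50 gives a ratio near 521, while a ratio of 536 implies a price slightly below $0.50.

```python
# Real vs. nominal price comparison for a three-minute New York-London call,
# using only the dollar figures quoted in the text.
price_then_nominal = 21.00   # quoted price of the call in the 1930s
price_then_real = 260.37     # that price restated in today's buying power
price_today = 0.50           # approximate price of a direct-dialed call today

nominal_ratio = price_then_nominal / price_today   # ignores inflation
real_ratio = price_then_real / price_today         # inflation-adjusted

print(f"Nominal ratio: {nominal_ratio:.0f}x")   # prints "Nominal ratio: 42x"
print(f"Real ratio: {real_ratio:.0f}x")         # prints "Real ratio: 521x"
```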

A final point concerns the argument that technological control can be a managerial weapon that can actually ruin efficiency in certain cases. Specifically, some argue that a "dictatorship of high-tech management" based upon "endemic, structural" managerial distrust of workers means that lower-tier high technology workers end up with a return to the "despotic factory regime" [Devinatz, 1999]. In this view of the labor market, rather than melting away the labor market failures that economic theory suggests, the asymmetric information and better technology available to firms are used as weapons to monitor and control workers tightly. Worker privacy can be invaded while every aspect of performance is quantified, from average length of bathroom breaks to mean time spent handling customer inquiries. If efficiency is not tinged with qualitative considerations, the firm can be left with a harried, terrified workforce that seeks to minimize the time spent helping customers or performing tasks.

2.3.4.2 Flexibility and managerial dimensions

One way to link the three efficiencies to managerial dimensions such as size, scope, and system is by thinking about the flexibility of the firm. Operational, tactical, and strategic flexibility play roles in each dimension much the same way that the traditional VSR, SR, LR, and VLR do in conventional analysis. The evolutionary school's view of the flexibility of the firm helps clarify the idea of additional dimensions beyond scale and size to include system, span, and scope. Rather than complicating things, ideas about flexibility link existing economic concepts to a modified, systematic view of the economics of production.

Carlsson (1989) argues that the "flexibility" of a firm has more dimensions than suggested by the conventional notion of how the firm adapts to output demand fluctuations based on the shape of the cost curve (Stigler, 1939). Instead, there are three kinds of flexibility: short-term (operational) flexibility, medium-term (tactical) flexibility, and long-term (strategic) flexibility.

Operational flexibility is "built into the 'software' of the firm in the form of procedures, which permit a high degree of variation on a daily basis in sequencing, scheduling, etc." [Carlsson, 1989, p. 201]. On the technical side, operational flexibility is assisted by IT, better communications, and networked technology. On the economic side, the improved knowledge helps the firm take advantage of discounts, real-time pricing, and auctioning to become more efficient economically in production and inventory. Required materials can be bought cheaply in bulk, while carrying costs are minimized through better timing. Internal efficiency is the main efficiency focus of operational flexibility.

Tactical flexibility is:

built into the technology, i.e., the organizational and production equipment, of the firm which enables it to deal with changes in the rate of production, or in product mix over the course of the business cycle, as well as moderate changes in the design of its products. [Carlsson, 1989, p. 201]

Tactical flexibility is assisted on the technical side by networking infrastructure and high-tech equipment, along with a corporate intelligence stock that improves control of, and information about, transportation, marketing, and finance. On the economic side, the firm benefits from economies of system and scope. Economies of management, marketing, and finance result from size and efficiency.

One aspect of tactical flexibility is increased customization due to computerization.

Consumers . . . should be able to look to a future where they will not need to compromise as much as hitherto with the manufacturer's conception of the 'median taste' . . . although the biggest gain will go to those with snobbish or otherwise idiosyncratic tastes. [Leijonhufvud, 1989, p. 171]

The ability of firms to tailor their hypercommunication technologies to specialized customer needs and the increased use of personalization in business-to-business (B2B) e-commerce are additional examples.

However, the new ability to customize does not banish scale economies to the old economy. The fact that

smaller batches will become economical does not mean that the economies of large scale are weakened. It means, rather, that the economies of assembly line production can be obtained while turning out differentiated products. [Leijonhufvud, 1989, p. 171]

The third flexibility, strategic flexibility, takes advantage of built-in operational flexibility and the firm's greater ability to customize output and communications. However, strategic flexibility is even more far-reaching because it:

encompasses the ability to introduce the new products quickly and cheaply, to accommodate basic design changes, and, most importantly, the nature of the organization of the firm and the people in it, their attitudes and expectations, particularly with respect to risk-taking and change. [Carlsson, 1989, p. 201]

Strategic flexibility depends on communication technology, the information literacy of employees, and the corporate culture. It depends on the technical ability of the firm to copy and distribute non-rival inputs. On the economic side, dynamic efficiency, the ability to recognize new opportunities and threats and to operationalize a response, is enhanced by the flow of corporate intelligence and the resulting lower unit costs of research and development.

The three flexibilities are closely related to familiar terms from managerial and resource economics. The first dimension is the distinction between real (or technical) economies of scale and pecuniary economies of scale. Here, the idea is that technical economies of scale come about due to "savings in labor, materials, or equipment requirements per unit of output resulting from improved organization or methods of production made possible by a larger scale of operations" [Viner, 1931, p. 213]. Quantity discounts made possible from a larger size (rather than from increased production using existing size and technology) are one example of pecuniary internal economy. Again, the difference between the purely technical and the economic view appears important. Pecuniary internal economies are most correlated with operational flexibility, while real economies flow to all levels.
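The distinction between real and pecuniary internal economies can be made concrete with a small sketch; all figures below are hypothetical illustrations, not data from the text.

```python
# Real (technical) vs. pecuniary internal economies, with hypothetical numbers.
# A real economy reduces the input *quantity* needed per unit of output;
# a pecuniary economy (e.g., a quantity discount) reduces the input *price*.
def unit_cost(input_per_unit: float, input_price: float) -> float:
    return input_per_unit * input_price

baseline = unit_cost(2.0, 5.0)    # 10.0: two units of input at $5 each
real_econ = unit_cost(1.6, 5.0)   # 8.0: better method needs 20% less input
pecuniary = unit_cost(2.0, 4.0)   # 8.0: same input quantity, discounted price
print(baseline, real_econ, pecuniary)   # prints 10.0 8.0 8.0
```

Both routes lower unit cost here, but only the first reflects a change in the underlying technology; the second reflects only a change in input prices made possible by larger size.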

A second dimension concerns the size of the firm relative to the industry, or whether any economies of scale or size are internal or external. In this sub-section, the discussion concerns internal economies, but external economies are considered in the context of technology and supply in 2.3.5. Note that internal economies include economies in management, marketing, and finance, but that these may be external also. From the first dimension, the size-scale distinction can be viewed according to the difference between real and pecuniary economies of scale. From the second dimension, the size-scale distinction can be viewed as internal or external.

2.3.4.3 Problems in measuring technological change

Technological change can conceivably be measured as a rate, direction, velocity, discrete jump, or component of the rate of economic growth. The induced innovation and endogenous growth agendas are most readily represented by conventional economic models, while the evolutionary and path dependent approaches are establishing their own theoretical and empirical methods for measuring technical change. To some, the debate among the four agendas hardly bears on the economist's role of providing empirical measuring rods. For example, while Chambers remarked "there appears to be no exact consensus on just what causes technical change", he did find that "technological change and its consequences for observable market behavior" could be (albeit imperfectly) described by conventional economics [Chambers, 1984, Ch.5, p.1]. This could be done either by defining technical change as a continuous phenomenon and using differential calculus or by defining it discretely using index numbers of production or cost. In either case, the ability to model technical change depends on mathematical assumptions and restrictions on technology as represented by conventional production or cost functions.
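The discrete, index-number route Chambers describes is often implemented as a growth-accounting residual: technical change is whatever output growth is left after share-weighted input growth is subtracted. The growth rates and cost shares below are hypothetical illustrations.

```python
# Technical change measured discretely as a growth-accounting residual:
# TFP growth = output growth - cost-share-weighted input growth.
output_growth = 0.040                             # 4.0% output growth
input_growth = {"labor": 0.010, "capital": 0.030}
cost_share = {"labor": 0.70, "capital": 0.30}     # shares sum to one

weighted_input_growth = sum(cost_share[i] * g for i, g in input_growth.items())
tfp_growth = output_growth - weighted_input_growth
print(f"Residual (TFP) growth: {tfp_growth:.3f}")   # prints 0.024
```

Note that anything mismeasured in inputs or shares lands in the residual, which is one reason the periodicity and specification choices mentioned below matter so much.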

To others, conventional economics' limited ability to measure new technological phenomena has necessitated new research agendas. According to Arthur (1990), economics has "suffered from a fatally simple structure imposed on it in the 18th century" as it struggles to become a field where path dependencies and synergies are recognized. By discarding outmoded (and possibly violently incorrect) measurement methodologies Arthur argues, " . . . economists' theories are beginning to portray the economy not as simple, but as complex, not as deterministic, predictable and mechanistic but as process-dependent, organic, and always evolving" [Arthur, 1990, p. 97-98]. The best set of measuring rods may be from biology, nonlinear physics, and nonlinear probability theories in non-static (even chaotic) environments where multiple or punctuated equilibria "phase lock" the outcome away from Newtonian order.

The difficulty encountered in decomposing a technological change graphically as illustrated in Figure 2-12 is multiplied when a more general approach to measurement is taken. Measuring technological change depends on several factors. First, measurement depends on how technical change enters the production, cost, and profit functions. Second, it depends on whether the rate of technical change, total factor productivity, partial factor productivity, or another measure is taken. Third, the empirical measurement of technological change depends on the periodicity used, especially when technological change is a residual or trend term in a regression equation.

Technology can be modeled as a longitudinal change in a single firm's production function or as how production functions differ among firms. This is even consistent with evolutionary work such as Dosi's (1984) approach of identifying examples of technological change in separable technical and economic features of product and production process. It also can be measured as an explicit direct input, as an indirect input, or as a residual measure that equates the passage of time with technological change.

Many economists find there is a more efficient approach to modeling technology's effect on a firm or economy than constructing a production function for every possible case. While it may be part of the engineer's paradigm to map the details of production functions explicitly, the economist finds it more efficient to specify some general properties of technologies. Jehle (1991) explains the rationale:

If we begin with a technology and derive its cost function, we can take that cost function and use it to generate a technology. If the true technology is convex, the true and implied technologies are identical. If the true technology is not convex, the implied technology is a 'convexification' of the true one. Moreover, any function with all the properties of a cost function implies some technology for which it is the cost function.

This last fact marks one of the most significant developments in modern theory and has important applications for applied work. Applied researchers need no longer begin their study of the firm with detailed knowledge of the technology and with access to relatively obscure data. Instead, they can concentrate on devising and estimating flexible functions of observable market prices and output and be assured that they are carrying along all economically relevant aspects of the underlying technology from the estimated cost function. [Jehle, 1991, pp. 237-238]
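Jehle's point can be illustrated with a standard textbook case; the particular Cobb-Douglas cost function below is our hypothetical example, not Jehle's. Given a function with the properties of a cost function, Shephard's lemma recovers the conditional factor demands, and eliminating prices recovers the implied technology:

```latex
% Hypothetical Cobb-Douglas illustration: a function with all the
% properties of a cost function implies a technology.
C(w_1, w_2, y) = y\, w_1^{\alpha} w_2^{1-\alpha}, \qquad 0 < \alpha < 1.
% Shephard's lemma recovers the conditional factor demands:
x_1 = \frac{\partial C}{\partial w_1} = \alpha\, y \left(\frac{w_2}{w_1}\right)^{1-\alpha},
\qquad
x_2 = \frac{\partial C}{\partial w_2} = (1-\alpha)\, y \left(\frac{w_1}{w_2}\right)^{\alpha}.
% Eliminating the price ratio yields the implied technology:
y = \left(\frac{x_1}{\alpha}\right)^{\alpha} \left(\frac{x_2}{1-\alpha}\right)^{1-\alpha}.
```

Here the implied technology is itself Cobb-Douglas (and convex), so the true and implied technologies coincide, exactly as Jehle describes.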

Some of the difficulties with the dual approach have already been explored. Two aspects in particular are considered. First, modifications of the dual approach to multiple outputs, non-homothetic production functions, and DEA (Data Envelopment Analysis) or the frontier production function offer hope for a modified dual approach to remain the cornerstone of measurement.

Second, the approaches of alternative schools to non-continuity, alternative objectives, multiple technical changes, and organizational aspects of measuring technical change can be outlined. In cases where duality theory is unable to reconstruct certain technologies, economic factor interdependence and the direct and indirect importance of relative prices are considered. Alternative schools show potential in creating empirical methodologies capable of testing their theories and quantifying technical change in new ways.

The separability of technology and the treatment of technology as a residual in the production function are examples from a large set of conventional restrictions that alternative schools question. Chambers notes that separability "requires that the elasticity of the marginal product of xi with respect to xk equal the elasticity of marginal product of xj with respect to xk" [Chambers, 1984, p. 1-28]. This imposed symmetry seems hard to reconcile with non-neutral technological change if technology itself were a variable in the production function. In terms of the cost function, separability implies that the Allen elasticity of substitution term σik will equal σjk, or "complete equivalence of the substitution effects" [Chambers, 1984, p. 2-62]. The issue becomes more complex when continued to the profit function case because separability of the cost function and separability of the profit function have different implications for the underlying technology. As Chambers states, "separability of the profit function does not necessarily imply separability of the cost function" [Chambers, 1984, p. 3-34]. Separability of the cost function involves movement along isoquants, while separability of the profit function places restrictions on derived demands both along isoquants and as output changes [Chambers, 1988].
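Chambers' condition can be restated compactly. The notation below (f for the production function, subscripts for partial derivatives, σ for Allen elasticities) follows common usage and is our restatement rather than a quotation:

```latex
% Separability of inputs i and j from input k: x_k must leave the marginal
% rate of technical substitution between x_i and x_j unchanged,
\frac{\partial}{\partial x_k}\!\left(\frac{f_i}{f_j}\right) = 0,
\qquad f_i \equiv \frac{\partial f}{\partial x_i};
% equivalently, equal elasticities of the marginal products with respect to x_k:
\frac{\partial \ln f_i}{\partial \ln x_k} = \frac{\partial \ln f_j}{\partial \ln x_k};
% which, in terms of the cost function, implies equal Allen elasticities:
\sigma_{ik} = \sigma_{jk}.
```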

There are many ways of modeling the economic aspects of production, using inputs as independent variables or using output as the independent variable. The firm can seek to maximize profits or output or to minimize costs, subject to constraints. So far, the discussion has centered only on how technology changes the production function alone. However, technology itself can change the nature of the firm's profit function, cost functions, derived demands for inputs, conditional factor demands, and the opportunity costs of binding constraints (shadow prices).

Mathematical development of all the "economically relevant aspects of underlying technology" that can be captured using duality may be found elsewhere. The main point is that the dual approach can be used to observe the behavior of technology in cost and profit functions, not just in production relationships. As Jehle mentions:

Just as with cost-functions, there is a full set of duality relations between well-behaved profit functions and well-behaved technologies. In both its generalized and restricted forms, every function with the required properties is the profit function for some technology with the usual properties. The generalized profit function is therefore sufficient to characterize competitive firm behavior when all factors are variable, and the short-run profit function is sufficient to characterize behavior when some factors are fixed. [Jehle, 1991, p. 249]

However, increasing returns to scale are not consistent with competitive profit maximization in conventional thinking. Hence, the generalized profit function is not sufficient when the technology exhibits increasing returns.

The argument that duality reduces the economic role of technology in production to an artificially simplistic level due to restrictions is being mitigated by advances in analysis that push the boundaries of the approach into the information economy [Chambers, 1997]. Just as Morgenstern (1963) predicted, the very technologies and organizational arrangements that previously restricted the approach are allowing it to become more productive.

2.3.5 Technology and Supply

The next major role played by technology in the information economy is in industry supply. On an aggregate level, improved technology can shift an entire industry's supply curve (or an entire economy's supply) outward, but the change does not happen in an economic vacuum. Three topics of importance to agriculture are considered. The first topic concerns decreasing cost technologies and industry supply behavior. Another topic is the so-called treadmill theory of technology. The third topic concerns the supply-push explanation for the source of technology.

If all firms in an industry use a homothetic technology with increasing returns to scale and operate at maximum technical efficiency, then a proportional increase in all inputs will lead to a greater proportional rise in output [Chambers, 1997]. If all firms in an industry enjoy increasing returns to size, then increases of all inputs in least cost proportion lead to a greater proportional rise in output and decrease in costs. For larger, more complex firms, if all firms in an industry enjoy increasing returns to scope or organizational form, management, etc., increases of all inputs in optimum combination will lead to greater proportional increases in multiple products (in optimum combination). Dynamically, increasing returns and decreasing per unit costs decrease output price. Firms then attempt to decrease costs further by becoming larger, but this decreases output price again. Hence, the idea of a treadmill comes from the continual attempt by firms to adopt new technology to make up in volume what is lost on a per unit basis from industry-wide adoption of new technology.
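The returns-to-scale statement above can be checked with a minimal numerical sketch; the Cobb-Douglas form and its exponents are hypothetical choices, not drawn from the text.

```python
# Increasing returns to scale with a homothetic Cobb-Douglas technology.
# The exponents (0.7 and 0.6, summing to 1.3 > 1) are hypothetical.
def output(x1: float, x2: float) -> float:
    return x1 ** 0.7 * x2 ** 0.6

base = output(10.0, 10.0)
scaled = output(20.0, 20.0)      # every input doubled (t = 2)
ratio = scaled / base            # equals 2**1.3, more than proportional
print(f"Doubling all inputs multiplies output by {ratio:.2f}")   # prints 2.46
```

With per-unit costs falling as output more than doubles, each firm has an incentive to expand, which is precisely what sets the treadmill described above in motion.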

The attempt to name the kinds of industries where increasing returns prevailed led to a classical debate between Clapham and Pigou in the 1920s, along with influential papers by Robertson, Sraffa, Robinson, and Viner [Clapham, 1922a, 1922b; Pigou, 1922; Robertson, 1924; Sraffa, 1926; Robinson, 1926; Viner, 1931]. Ever since that time, attempts to definitively identify decreasing cost industries have been stymied in economics. Network, evolutionary, and path dependent approaches appear bent on using new methods to identify the underlying technologies that create external economies of size. For example, from the evolutionary perspective, "It is essential to allow price signals a more dynamic role than that of sustaining equilibrium responses" [Nelson, Winter, and Schuette, 1976, p. 91].

A second topic in technology and supply is the technology treadmill argument. In the perfectly competitive world of production agriculture, technology is often likened to a treadmill. Early adopters jump on first, often achieving above-normal profit. If the technology is skewed towards a larger scale or size of operation, this (along with the fact that competitors must adopt or exit) exacerbates the technological change. Supply shifts out, price falls, and only firms that increase in size can survive.

A third topic concerns the supply-push, science-push, and technology-push views of innovation, which hold that "science and technology" are a "relatively autonomous process leading to industrial innovation" [Mwamadzingo, 1995, p. 1]. Invention, innovation, and macroeconomic growth driven by technology spillovers are important roles that technology plays in the supply side of agriculture.

Research and development investments by firms and government produce technology as an output [Jaffe, 1986]. Technological spillovers happen because:

(1) . . . firms can acquire information created by others without paying for that information in a market transaction, and (2) the creators (or current owners) of the information have no effective recourse, under prevailing laws, if other firms utilize information so acquired. [Grossman and Helpman, 1991, p. 16]

Such technological spillovers are particularly important to the process of economic growth:

The general information that researchers generate and cannot prevent from entering the public domain often facilitates further innovation.

. . . Thus innovation conceivably can be a self-perpetuating process. Resources and knowledge may be combined to produce new knowledge, some of which spills over to the research community, and thereby facilitates the creation of still more knowledge. [Grossman and Helpman, 1991, p. 17]

Hypercommunications especially facilitate the spillover process because "rapid communication and close contacts among innovators in different countries facilitate the process of invention and the spread of new ideas" [Grossman and Helpman, 1991, p. xi].

Roger Noll summarizes "research--in the last few years--which has put some serious meat on an explanation" of how "basic or undirected non-commercially oriented research that takes place in universities, national laboratories" ultimately "leads to expansion of the economic base of the nation and the national welfare, not only in the U.S., but worldwide" [Noll, 1996, p. 37]. He adds:

The research in the last five years has found it remarkable that the productivity of privately applied product development research is higher in areas which are near a university than in areas where it's not. [Noll, 1996, p. 38]

In agribusiness, the role of the Extension system in technology transfer is an obvious one and may depend on how "near" firms are to the university in terms of communications and their desire for new technology. Production agriculture's marketing and production functions should be enhanced the most by hypercommunication technologies in rapidly consolidating sub-industries such as Florida's growing billion-dollar nursery-greenhouse industry. Rural communities' future industrial, educational, and employment skill bases each depend on the rapidity of technology transfer to the information economy.

A final note comes from Douglass North, who points out that "productivity increases result from both improvements in human organization and from technological developments" [North, 1994, p. 1]. Until now, few attempts have been made to distinguish between institutional innovations and purely technological ones. Importantly, technological innovations from IT and hypercommunications have enabled a variety of new institutional arrangements. While these affect supply by "lowering either transaction and/or transformation costs" at the firm level, they come from outside the firm [North, 1994, p. 2].

Of North's four institutional innovations that have classically lowered transaction costs throughout history, all four (increased mobility of capital, lower information costs, creation of risk-spreading institutions, and improved enforcement of contracts) have escalated due to IT and better communications. Indeed, wire funds transfer, real-time market data, real-time trading and auctioning, and lower costs of communication are hallmarks of the information economy. Communications and information technologies, while external to the firm, can provide internal economies [Sraffa, 1926] for well-organized firms, especially when the technologies allow firms and industries to participate in new institutional arrangements.

2.3.6 Technology and Demand

The role technology plays in consumer and producer demand is the fifth major role of technology in the information economy. Specific technologies affect the demand side, or consumption, in several ways. In hypercommunications, for instance, advances in software and hardware allow advances in hypercommunication services, while de-regulation offers consumers more choices. For agribusinesses, new production and distribution technologies can preserve freshness, improve quality, and promote better food safety, leading to greater consumer confidence in products on the demand side. In this sense, demand may become more inelastic, or a new, differentiated product may give the firm an advantage. However, if consumers misunderstand or fear the technology, their negative perceptions may hinder the profitability of technology, as in the case of food irradiation or genetically engineered foods.

In this section, three topics are briefly explored. First, the ability of technology to shift the demand for an existing good is a promising way hypercommunication technologies can influence agribusiness markets. Second, the introduction of a new product or service is subject to a diffusion of innovation process both for the introduction of a completely new good to consumers and for the penetration of technological inputs in agribusiness. Third, the competitive effects of technology can stimulate or discourage competition.

The economics literature contains many examples of the role of technology in demand. The first two topics result from two of Schumpeter's five categories of technological innovation. Schumpeter's other three categories relate to demand indirectly through supply. The first category of technological innovation is a new good or new quality of good. By increasing consumer choices, technology produces new substitutes and complements for existing goods in addition to creating entirely new products. The second Schumpeterian category of technological innovation is a new method of production. This form of innovation has already been considered in 2.3.2, but new production methods can alter derived demands for inputs by firms. The third category, the opening of a new market for an existing good, is clearly relevant to agribusiness and hypercommunications, as globalization of markets opens new opportunities. Demand in these new markets can stem from new uses for a good due to technology, or through a widening of market boundaries through better communications or improved logistics. The fourth technological innovation is the discovery of new resources or intermediates. Again, this category relates more to a production and supply framework. The fifth and final technological innovation category named by Schumpeter is a new organizational form. Organizational form relates mainly to technology's managerial role and only indirectly to demand through the demand-sensing and demand-serving roles of agricultural marketing firms.

2.3.6.1 Technology and demand for existing products

The first topic is technology's ability to shift demand for an existing consumer good or producer input. The simplest way to see this is to consider the case where technology creates a new product that is almost a total substitute for an existing product. The demand for computers versus the demand for typewriters is one example. Even before the PC word processor displaced the electric typewriter, electric typewriters tended to substitute for manual typewriters. The main issue here concerns how firms can speed up (or slow down) the speed with which the new supplants the old.
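The speed with which the new supplants the old is often modeled with a logistic (Fisher-Pry style) substitution curve; the sketch below uses a hypothetical adoption rate r and midpoint t_mid, not figures from the text.

```python
import math

# Logistic (Fisher-Pry style) substitution of a new product for an old one.
# new_share(t) is the new product's market share at time t; r and t_mid
# are hypothetical parameters.
def new_share(t: float, r: float = 0.5, t_mid: float = 10.0) -> float:
    return 1.0 / (1.0 + math.exp(-r * (t - t_mid)))

for year in (0, 5, 10, 15, 20):
    print(year, round(new_share(year), 3))
# Substitution starts slowly, passes 50% at t_mid, then saturates; a firm
# that speeds up (or slows down) substitution is shifting r or t_mid.
```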

Similarly, on the demand side in input markets, new technology may stimulate demand for one (or more) inputs while shifting the demand for others inward. In many ways, technology's role in demand is familiar territory for agribusinesses. This is most true for the producer's derived demand for inputs. Gould and Ferguson noted this in 1980:

It should be apparent that technological progress changes the marginal productivity of all inputs. Thus a technological change that makes a variable input more productive also makes the demand for any given quantity of it greater, and vice versa. [Gould and Ferguson, 1980, p. 360]

This point was discussed in detail in 2.3.3.

The issue of technology in demand is a particularly important one in the network economics of the path dependent school. In addition to edging out old substitute technologies, new technology creates perfectly complementary network technologies. Once the Windows operating system is installed on a computer, Macintosh software packages become instantly useless, while software applications that are compatible with Windows become perfect (if they work!) complements. The size of the installed base becomes an extremely important predictor of demand. "Demand" economies of scale occur for a product if "the more customers that already own it, the more want it" [Afuah, 1998, p. 362].

This can be based on bandwagon or Veblen effects [Leibenstein, 1948] or due to network externalities [Katz and Shapiro, 1985]. Network economics is discussed in more detail in Chapter 3. However, the idea of a network externality is based on the value of a good rising as more people have access to it. One example is that of the telephone. A telephone that connects its user to one other station is less valuable than the same telephone would be if it connected its user to 400 million other telephone subscribers. Yet network externalities on the cost side mean that as costs are spread across more users, the cost of each telephone falls dramatically.
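The arithmetic behind the telephone example can be sketched directly. The Python fragment below uses purely hypothetical figures (per-connection value, total network cost) to contrast the value side, which grows with the number of reachable subscribers, against the cost side, where a fixed network cost is spread over more users:

```python
# Illustrative sketch with hypothetical numbers: the value of joining a
# telephone network rises with the number of other reachable stations,
# while a fixed network cost spread over more users lowers per-user cost.

def network_value_per_user(n, value_per_connection=0.01):
    """Value to one subscriber of being able to reach the other n - 1 stations."""
    return value_per_connection * (n - 1)

def cost_per_user(n, fixed_network_cost=1_000_000.0):
    """A fixed network cost divided across n subscribers."""
    return fixed_network_cost / n

for n in (2, 1_000, 400_000_000):
    print(n, network_value_per_user(n), cost_per_user(n))
```

Under these assumed numbers, a two-station network costs each user far more than it is worth, while a 400-million-station network reverses the comparison dramatically.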

The path dependent view includes Arthur's (1989) statement that increasing returns "might drive the adoption process into developing a technology that has inferior long-run potential" [Arthur, 1989, p. 117]. However, he admits that "sometimes factor inputs are bid upward in price so that diminishing returns accompany adoption" [Arthur, 1989, p. 117]. At first glance, such path dependent outcomes from network economics (see also Chapter 3) might appear only to affect IT and knowledge-based industries.

However, many of the technologies in agriculture (typically not defined as knowledge-based) change demand for inputs in similar ways. The practice of monocropping (while condemned in some circles) could be an example where farmers band together to take advantage of management and equipment networking efficiencies they would be unable to use otherwise. For example, if most farms in a certain area plant wheat or cotton (the installed base), then co-operatives, equipment sharing, and exchanges of cultural advice among similar operators constitute a de facto path dependent network. Equipment sharing, discussion of cultural practices, similar chemical and fertilizer requirements, labor arrangements, and other practices could be considered network effects that are locked in due to an installed base. Hypercommunication and IT simply offer agribusinesses new institutions with a larger possible size to accomplish these networking activities.

While these references are to supply and input demand, specific network technologies also influence consumer market demand for agribusiness products. For some products, such as oranges, avocados, and livestock, long life cycles (seven years until first production for avocados, for example) are examples of lock-in to an installed base. However, networks of producers (called producer organizations) often use communications technologies (such as advertising and promotion) to influence demand. New methods of food shopping (Internet groceries or direct farm-to-household sales), along with the trend towards nutraceuticals, organic foods, and humane production, are a few of the ways technology affects market demand in agribusiness.

2.3.6.2 Technology and demand for new products: diffusion of innovation

The second topic in the technology of demand is the diffusion of innovation. Diffusion of innovation is the process whereby a given technology goes from theory to practical use. Importantly, the diffusion process is indirectly at work in the economics of technology and production, and, hence, supply too. However, it is most straightforward to analyze diffusion of technology in terms of demand for a new technology by using proxies such as sales and penetration rate.

In this process, several actions occur, such as technology transfer, invention, innovation, and adoption. Often an S-shaped curve (with sales or penetration fitted to time) suggests that, after research and innovation are complete, the diffusion process consists of six stages: early, middle, and late adoption, followed by early and late maturation and then decline. Empirical research from marketing documents two important aspects of diffusion theory for consumer products that apply here [Bass, 1969; Bass, Krishnan, and Jain, 1994]. First, diffusion is a social and educational phenomenon whereby the current level of diffusion depends on how many people have already adopted the technology. Different socio-educational or socio-economic groups with particular psychographic profiles (innovators, early adopters, early majority, late majority, and laggards) are associated with each stage [Daberkow and McBride, 1998]. Second, firms that seek to influence the rate of diffusion through marketing decision variables such as advertising or strategic pricing can speed up the diffusion of technology. At first glance, these results would appear to be mainly related to technology and demand. However, the diffusion process is also at work in the adoption of a new production technology, affecting the speed of supply response to technological change. Finally, market competitiveness and market efficiencies are both affected by the rate of technological change and the behavior of prices through time. These can depend on the diffusion of innovation, but they are also related to the competitiveness of the market subject to the technological change.
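Since the Bass model [Bass, 1969] is the canonical formalization of this S-shaped diffusion path, a minimal discrete-time sketch may help. The parameter values below (innovation coefficient p, imitation coefficient q, market potential m) are hypothetical, not estimates from any study:

```python
# A discrete-time sketch of the Bass (1969) diffusion model.
# Parameter values are hypothetical illustrations only.

def bass_adoption(p=0.03, q=0.38, m=100_000, periods=20):
    """Return cumulative adopters per period, tracing the S-shaped curve."""
    cumulative = 0.0
    path = []
    for _ in range(periods):
        # New adopters: innovation effect p plus imitation effect q * F(t),
        # applied to the remaining untapped market (m - cumulative).
        new_adopters = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new_adopters
        path.append(cumulative)
    return path

path = bass_adoption()
```

Raising p or q (for example, through advertising or strategic pricing) steepens the curve, which is one way to interpret how marketing decision variables speed up diffusion.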

2.3.6.3 Technology and competitiveness

A third role played by technology in demand is its effect on the competitiveness of markets. Competitiveness may operate at the intra-industry level, or at the inter-industry level through technology's ability to redefine substitute and complementary goods. The latter point is particularly applicable to hypercommunications convergence. For example, BellSouth has gone from natural monopoly to a firm that competes with other LECs (Local Exchange Carriers) for local dialtone and other telephone services. Moreover, convergence and deregulation expand the set of competitors to include cable TV firms, ISPs, electric utilities, and satellite providers.

Technological progress can be pro-competitive, anti-competitive, or neutral. As McNamara wrote in 1991 concerning telecommunications technology:

Technological change is usually defined as an increase in the menu of existing techniques for producing goods and services or an increase in the number or types of products that may be produced. The selection of new productive techniques or new products or both from the enhanced menu of technological opportunities depends on the relative economic advantages of those decisions. Included in the term economic advantages may be some extremely sophisticated economic and political strategies that may be intended to strengthen the firm's long-term position in its market by deterring competition, rather than just to exploit some immediate cost or market advantage. [McNamara, 1991, p. 127]

Under this view, firms such as AT&T or AOL-Time Warner may seek to reduce consumer choice through mergers and acquisitions. AT&T is now able to offer local telephone service, cable TV, high speed Internet, cellular, PCS, and data networking services through its purchases of cable TV giants MediaOne and TCI. By redefining the field of competition (while on its acquisition spree), AT&T scored regulatory approval to re-enter the local telephone business.

Grossman and Helpman (1991) argue, however, that technology's effect on competitiveness is, in reality, a net social positive because of technological spillovers. Technology may be defined as a non-rival good, so that "when one agent uses technology to produce a good or service, this action does not prevent others from doing so, even simultaneously" [Grossman and Helpman, 1991, p. 15]. Furthermore, they add:

technology in many cases is a partially non-excludable good. That is, the creators or owners of technical information often have difficulty in preventing others from making unauthorized use of it, at least in some applications. [Grossman and Helpman, 1991, p. 16]

The hypercommunications market structure eventually will depart from the extremes of perfect competition (or perhaps monopolistic competition) of ISPs on one hand, and "natural" monopoly of ILECs on the other hand. Before hypercommunications convergence, these were separate markets for separate services in co-existence. Convergence due to technology may result in a mixture of large, complex telecommunications firms and engineering-intellectual entrepreneurial players engaging in a fast-paced game of mergers, acquisitions, and strategic partnerships designed to gain market power. It is even less clear how hypercommunication technologies (or other new technologies) will affect the market structure of agriculture. Large agribusinesses may become larger and perhaps fewer. Yet, innovative agricultural producers may benefit from new market niches, leaving competitors who fail to adopt hypercommunications (or other) technologies behind.

Many other topics regarding technology and demand exist beyond the three just mentioned. One concerns how demand should be analyzed theoretically or empirically when rapid technological change results in successive quality generations of durable products. For example, a computer in 1950 and a computer in 1995 are dramatically different. This first problem is hardly new to empirical attempts to model demand and forms the rationale for approaches ranging from index number construction to compensated demand theory. A second technology-demand topic is the implications for market efficiency and welfare of improved information and lower search costs from technology. This second topic leads to the third foundation of the information economy, information, to be covered in 2.4. However, first it is important to note that technology and information are so closely linked (beyond the obvious information technologies) that it can be hard to differentiate the two.

2.3.7 The Technology-Information Linkage

The sixth and final role of technology in the information economy concerns the linkage between information and technology. Technology's relationship (through hypercommunications) to the conceptualizations of information (section 2.4) is economically important to the firm because information can be both an input to a profit-generating governance strategy and an output. Hypercommunication technologies enable firms to improve internal technologies based on newly discovered opportunities found because of better information.

Thus, technology's definition itself depends on how information is conceptualized and vice versa. Hypercommunications requires a particular set of technologies that improve the exchange of information. Hypercommunication technologies tend to be invisible to users of hypercommunication services. Recall that hypercommunication technologies denote the hardware, conduit, software, and protocols within the "applied science" that power communications alone. The usefulness of hypercommunications as an input is two-fold. First, it provides a direct effect by reducing the costs of existing activities due to the use of hypercommunication technologies. Second, there is an indirect effect that stems from new activities, interactions, and non-hypercommunication technologies made possible because of increased information.

Therefore, hypercommunications can result in technology spillovers covering all sectors of the economy, not just within the hypercommunications sector. Hence, hypercommunication technologies are part of a class of technologies that have both direct and indirect effects. This underscores the importance of access to hypercommunications for agribusiness, production agriculture, and rural communities. Improvements in hypercommunication technologies enable innovations, technology spillovers, and growth for agriculture, even though the applied science of agriculture and applied science of hypercommunications differ substantially. Information technologies (IT) represent an important overlap between the two. This overlap improves innovation, implements production efficiencies, allows communication and control over greater distances, promotes access to more information, and better organizes market intelligence. However, such technologies are most productive when interconnected to internal or external networks through a hypercommunications infrastructure.

Monk delineates links between information and technology in much the same way that economists distinguish innovation from invention. A technology is composed of sets of information while technological information is the means by which technology is implemented. Further, "sets of information which have no potential use value in production cannot be considered as part of technology". Monk also distinguishes between the absolute and effective states of technology based on how available information concerning that technology is. The "absolute state of technology" depends "on the content of the technological information sets that exist". However, the effective state of technology "depends on both the existence of technological information sets and on the availability, distribution, and allocation of the embodied forms of that information" [Monk, 1992, pp. 38-39].

2.4 Information, the Third Foundation

Like technology, information is a fundamental concept of the information economy, as well as central to the study of hypercommunications. However, information is defined so broadly (or not at all) that it is often hard to pin down a useful economic meaning--even in the context of buzzwords such as "information age", "information superhighway", "information society", and "information elites". The truth of this is pointed out by Braman: "The abundance and diversity of definitions of information bewilder" [Braman, 1989, p. 233]. Instead of trying to reveal a single "true" linguistic definition, this section offers several operational definitions in the context of hypercommunications and the economics literature. Whatever it means, information's critical economic role has been highlighted through the phrase information economy.

Seven conceptualizations and economic roles of information are discussed that go beyond basic dictionary definitions. The seven are shown in Table 2-3. First, information may be considered as a stock or a flow. A second, closely related way to view information is as a resource or a commodity [Braman, 1989, p. 235-236]. A third way labeled by Braman as "perception of pattern" is considered here as a group of processes: information processing, information diffusion, information literacy, and information used to solve problems.

Table 2-3: Conceptualizations and economic roles of information.
Conceptualization                          | Role                                                                      | Location
Stock or flow                              | Method of pricing                                                         | 2.4.1
Resource or commodity                      | Input (data), output (information)                                        | 2.4.2
Perception of pattern                      | Information literacy, diffusion, processing, and use in problem solving   | 2.4.3
Major economic properties of information   | Direct and indirect source of value, search costs                         | 2.4.4
Public good                                | Externality, imperfection                                                 | 2.4.5
Asymmetry and symmetry                     | Determinant of competitiveness and efficiency                             | 2.4.6
Other economic properties of information   | Uncertainty reducer, organizational aid, information as a "bad"           | 2.4.7

The last four conceptualizations are developed from important strains of the economics literature. A fourth conceptualization of information includes major economic properties such as the direct value of information or through indirect values such as search costs. Fifth, information may be conceptualized as a public good. Sixth, information can be a market competitiveness variable due to information asymmetries among economic agents. The seventh and final conceptualization is a catchall category to include other economic properties of information.

One dictionary definition of information begins with the transitive verb inform as meaning:

1. a) To give form or character to; to be the formative principle of. b) To give, imbue, or inspire with some specific quality or character, animate. 2) [Rare], to form or shape the mind; teach. 3) To give knowledge of something to; tell, acquaint with a fact, etc. [Webster's New World Dictionary, college ed., 1960, p. 749]

Knowledge, learning, and wisdom are sometimes considered to be synonymous with the noun information which "applies to facts that are gathered in any way . . . and does not necessarily connote validity" because there is also "inaccurate information". Knowledge "applies to any body of facts gathered by study, observation, etc. and to the ideas inferred from these facts and connotes an understanding of what is known". Learning "is knowledge acquired by study; wisdom implies superior judgement and understanding based on broad knowledge" [Webster's New World Dictionary, college ed., 1960, p. 750]. However, these linguistic distinctions do not hold uniformly across all seven conceptualizations of information.

2.4.1 Information Conceptualized as a Stock or Flow

Under this conceptualization, information can be measured. Indeed, stock and flow measures are regularly used in billing market exchanges between hypercommunications suppliers and their customers. Stock or flow measurements may also be needed for accounting reasons, to provide a numeric tally, or for broader strategic or philosophical purposes. This conceptualization is applied, rather than theoretical, and is worth attention because information exchange is often priced based on stock and flow measures.

As Chapter 4 will demonstrate specifically, pricing of hypercommunication services is often characterized in four ways: metered, measured, unlimited access, or content subscription. The prices encountered in the market are sometimes a combination of these four but do not include fixed installation fees or recurring equipment charges. Metered services charge customers based on units actually used. The most familiar metered service is the long-distance telephone call, typically priced on a per-minute basis for lineside services. Other metered hypercommunication services are priced by bits transmitted.

Measured services typically are sold on a plan basis where up to a certain number of bits or minutes are assessed at one overall rate regardless of the amount actually used. Additional units over the plan level are often priced at a higher metered rate. Many cellular telephone plans are examples of one type of measured service.

Hypercommunications carriers regularly bill for other measured services based upon the number of bits transmitted by a circuit within a particular period rather than length of transmission in time or distance. Additionally, the flow of information down the circuit is compared to a pre-subscribed capacity for measured "bandwidth on demand" services. With frame relay service, customers are guaranteed a Committed Information Rate (CIR) where a minimum bandwidth is guaranteed available when needed for a fixed plan price. However, a higher (metered or measured) rate is paid to transmit information above the CIR up to a certain bandwidth ceiling when available. The bandwidth measure itself is a stock measure of information carrying capacity, rather than a flow measure of information actually transmitted.
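A minimal sketch, with wholly hypothetical rates, shows how the metered, measured, and CIR-based schemes just described translate into bills:

```python
# Hypothetical-rate sketch of metered, measured, and CIR-based billing.
# None of these figures come from an actual carrier tariff.

def metered_bill(minutes, rate_per_minute=0.10):
    """Metered: pay only for units actually used."""
    return minutes * rate_per_minute

def measured_bill(minutes, plan_price=30.0, plan_minutes=400, overage_rate=0.25):
    """Measured: one plan price up to a usage ceiling, higher metered rate above it."""
    overage = max(0, minutes - plan_minutes)
    return plan_price + overage * overage_rate

def frame_relay_bill(bits, cir_bits, plan_price=500.0, burst_rate_per_mbit=2.0):
    """Traffic up to the Committed Information Rate (CIR) is covered by the
    plan; bursts above the CIR are billed at a higher metered rate."""
    burst_bits = max(0, bits - cir_bits)
    return plan_price + (burst_bits / 1_000_000) * burst_rate_per_mbit
```

Under these assumed rates, a measured plan customer pays the same 30.00 for 10 minutes as for 400, while the frame relay customer pays the plan price alone until traffic bursts past the CIR.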

Third, there is unlimited access pricing, which is the industry standard for narrow-band dial-up Internet access, for example. An Internet Service Provider's (ISP) customer is charged the same amount whether connected to the ISP's modem bank for one minute in a month or 100,000 minutes a month.

Finally, information can be priced based on content or accessibility. Subscriptions to magazines and market newsletters are old examples of pricing by content. In the hypercommunication era, they are joined by subscription-only websites and online research services that respond to e-mail research queries. Online sales of music tracks (MP3 files) and live or recorded video feeds are other examples. New software and hardware technologies can prevent unauthorized copying and retransmission or trace violators of rights to the material. This point is returned to later during the discussion of the value of information.

Billing for information is not the only reason it is counted. Examples of non-accounting numerical tallies of information abound. Such tallies are used to summarize hits (visits to a webpage), numbers of customers communicated with, turnaround times for information requests, etc. Another numerical tally is the stock of information in a firm's "information warehouse" defined by Sheldon as "an entity that allows end users to quickly and easily access an organization's data in a consistent way" [Sheldon, 1998, p. 499]. Such tallies help management measure and evaluate information publishing and exchange activities. A stock measure of information would be the system's capacity at a given time, with flow measures being summaries of activity. On a broader philosophical basis, phrases such as the "stock of knowledge" apply as well.

Before discussing other conceptualizations, it is important to realize that each conceptualization of information may not have a direct economic role or a role in the human process of communication. For instance, the first conceptualization of information was important from the accounting and computer engineering design viewpoints. It has a direct economic role in pricing and an indirect role in internal and external cost structures. However, volume-based engineering measurements of information tend to ignore the data-information distinction (covered next) because the content of a message is not strictly relevant to the means of transmission.

2.4.2 Information Conceptualized as a Processed Resource or Raw Commodity

The next conceptualization considers information depending on whether it is an engineering or accounting description of carrier service, production input, intermediate product, or processed output. Within hypercommunications, this point is easily seen through the conceptual conflicts between information and data. From one standpoint, organizing, re-organizing, and processing information is a production process that uses inputs of data (or raw information) to yield an information output. Under this view, information is an economic resource that can be an input, output, or both. From an engineering standpoint, the term used to describe communication contents (whether called data or information) is not necessarily relevant. Instead, quantitative concepts such as the timely, speedy, and error free transmission of digitally-coded communications treat the transport of information as a commodity.

One source of such a distinction is whether a firm creates and sells information or simply carries data. To the information producer, there is an important difference between data and information, a distinction that is absent from the communications carrier's worldview. Students of behavioral and physical science are often told in introductory statistics texts [see, for example, Summers, Peters and Armstrong, 1985, p. 2] that the terms data and information refer to different concepts. Data are "unorganized facts or figures from which conclusions can be inferred" [Webster's New World Dictionary, college ed., 1960, p. 374]. Presumably, once those data are organized, the result is an output of information, a value-added product.

According to LaFrance the economist's role is itself based on the distinction between data and information:

In an information age, it is increasingly important that we do not confuse data with information. Information is data which are placed within a particular context. It is the context and underlying conceptual framework that makes the data useful in decision making. Without the context and framework, the value of data is indeterminate. Agricultural economists are often the vital link in producing useful information out of data and defining what data are needed to produce information. Constructing a framework and establishing a context for data are what we do as economists . . . . [LaFrance, 1993, p. 1]

Indeed, the data-information distinction is the purpose for having information workers in general.

To some hypercommunication firms, processed information itself is sold, as is the case for radio and TV broadcasters or membership-only websites. In such cases, information content is sold as a good or service. To other hypercommunication firms, the reliable and speedy transmission of digital content, together with communications capacity (bandwidth), are sold as services. It is possible that these distinctions will become less important as convergence occurs, but they currently are important philosophically and economically.

To hypercommunications carriers, the term "data communications" has not traditionally relied on any distinction between data and information, instead signifying digital (especially computer) communications. Now that voice, video, facsimile, and computer communications can all be carried by the same digital pipeline, "data communications is all about transmitting information from one device to another" [Sheldon, 1998, p. 213]. To computer systems engineers, any distinction between information and data is unnecessary because the word data is used to signify digital communications services, to quantify throughput, and to charge customers based on distance or quantity in bits. Common use among hypercommunications carriers and computer vendors suggests that information and data are synonyms.

The requirement that data be "organized" in an engineering sense so they can be transmitted does not necessarily change those data into information in an economic or statistical sense. However, according to Novell in 1999, there is an engineering distinction between data and information. "Computer data is a series of electrical charges arranged in patterns to represent information; data refers to the form of the information (the electrical patterns). It is not the information itself. Information is data that has been decoded" [Novell, 1999, p. 4].

The purpose of hypercommunications is to communicate information or data in many message types over a variety of channels and platforms. However, measures of hypercommunications volume and capacity are based on engineering models of communicating data. The value of information "bought" from a hypercommunications carrier to a firm depends on how processed the information is, the ultimate use, how efficiently information is obtained and hypercommunication services are used, and the producer's efficient provision of services. The conceptualization of information as a processed resource or raw commodity is part economic and part technical.

2.4.3 Information Conceptualized as a Perception of Pattern

A third conceptualization of information is as a "perception of pattern". According to Braman: "Information from this perspective has a past and a future, is affected by motive and other environmental and causative factors, and itself has effects" [Braman, 1989, p. 238]. Instead of counting bits or treating information as a homogeneous commodity or input, the richness and meaning of content is considered. Information diffusion is a process (similar to the diffusion of innovation process, section 2.3.6.2) where information is exchanged within an organization or among consumers in a target market. Information literacy has to do with the human ability to find and evaluate useful information. Information processing refers to the psychological tasks people use to remember information and act on it. Together, information processing, the process of information diffusion, and information literacy allow an organization to perceive information patterns so they can be used in problem solving.

The marketing literature--see Assael [1992, pp. 488-520] for an overview--has applied psychology to the role of information exchange across groups and to diffusion theory. There is often a diffusion process, dependent on individual and group behavior, in which people weigh information before passing it along (or not) depending on its perceived expected metaphysical and economic utility. There may be a physiological limit to the speed of human information processing, putting brakes on the rapid dissemination of information past an individual's threshold level. The speed with which humans can sequentially process information is thus a further limit on how quickly diffusion can occur.

If computer science models are projected onto human information processing and diffusion behavior without recognizing fundamentals of economics and marketing (which require perceptions of pattern), two simple mistakes can occur. First, the model can be mis-specified if economic behavior is modeled after immutable physical law. One example would be making a hypothesis based on Say's "law" in the same way a hypothesis would be based on Faraday's Law of the Electromagnetic Field. Supply of information does not create an instant demand for transmission of that information without an underlying behavioral need or market, any more than water boils at 212 degrees F. at sea level without heat.

Second, when economic models of new information technologies are chained to the methodology of the inanimate science that developed those technologies rather than human market behavior, a "trees grow to the sky mentality" can arise. Markets can have what Alan Greenspan has called an "irrational exuberance" about information technology's ability to exponentially increase sales and profits, leading to overly ambitious projections of an exponentially increasing information base, and exponentially rising economic benefits as a result. Engineering-based models tend to ignore human and economic constraints on such things as the speed of technology adoption, rate of growth in information processing, and the scarcity of attention.

Growth models that rely on computer science parameters for hypercommunications could end up with huge errors in predicting future totals if they assume that annual growth rates in information industries will continue at forty percent compounded continuously. Even if information and data flow at faster and faster rates over fatter and fatter pipes ad infinitum, there are many reasons why technological change outstrips the human ability to change technologies. Information patterning is perhaps the central one: human information processing and patterning cannot keep pace with engineering progress. Human literacy skills are learned more slowly than broadband networks of fiber optics can be laid. The additional brain pathways that medical researchers have found the brain physiologically adds in response to informational stimuli are chemically burned in at a slower rate as well. There is an enormous capacity for information-trained, information literate workers to improve their productivity. However, at some point even this information elite will be stretched to the limit--if it is not already, given the norm of sixty-hour work weeks and positions left unfilled for lack of qualified candidates.
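The danger of extrapolating at engineering rates can be seen with simple arithmetic. The sketch below (both growth rates are illustrative assumptions, not measurements) compares ten years of forty percent continuously compounded growth against a far slower, human-paced rate:

```python
import math

def continuous_growth(base, rate, years):
    """Value of an index after compounding continuously at the given annual rate."""
    return base * math.exp(rate * years)

# An information-volume index growing 40 percent per year, compounded
# continuously, multiplies roughly 55-fold in a decade ...
tech_projection = continuous_growth(1.0, 0.40, 10)
# ... while absorption capacity growing at an assumed human-paced
# 5 percent rate does not even double over the same period.
human_capacity = continuous_growth(1.0, 0.05, 10)
gap = tech_projection / human_capacity
```

Any forecast built on the first rate but consumed at the second accumulates an error that itself compounds over time.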

Not all information transmitted through a hypercommunications network is actually communicated, partly due to human inability to absorb exponentially increasing information as rapidly as it can be transmitted. Information overload means the recipients of communicated information are unable to act on, much less process, excess quantities of information at once [Assael, 1982, p. 167]. Furthermore, the rate of information diffusion, another part of the communications process, depends on social interaction, educational level, attention, and other factors.

The field of attentional economics is particularly important here. Arrow once wrote that "information exchange is costly not so much because it is hard to transmit but because it is difficult to receive" [Arrow, 1975, p. 18]. Attentional economics, which originated in the psychology literature [Thorngate, 1988, 1990, 1997], considers the scarcity of time within the context of seemingly unlimited information. Thorngate contends there is

. . . a fundamental error in discussions of the information economy. All economies are based on some form of scarcity, but there is no scarcity of information in the world. Instead, there is a scarcity of time to "spend" and attention to "pay" for the information available. . . . It is the scarce commodity that defines an economy of attention governing the relationship between information produced and information consumed.

Enter the Internet. Its low cost, ease of use, high speed, and reliability make the Internet almost perfectly suited to those of us who spend so much of our time producing and consuming information. Therein lies the fundamental dilemma of an attentional economy. Because the Internet is such a good way to distribute and exchange information, we are increasingly using it for these purposes. Information thus proliferates at an increasing rate. Yet, our time remains constant. As a result, the limits of our time force us to pay less attention to more information. By distributing more information more widely, more quickly, and cheaply, the Internet intensifies an already fierce competition for our limited resource. [Thorngate, 1997, p. 296]

Processed information is typically more valuable than mere data or unprocessed information. Information becomes valuable through organization, categorization, analysis, and other information processing activities. In this sense, IT processes information for problem solving.

The relevance of information literacy skills arises from the conceptualization of information as a heterogeneous, behavioral input to be processed into a recognizable pattern and diffused through a firm. The American Library Association defines information literacy as follows:

To be information literate an individual must recognise when information is needed and have the ability to locate, evaluate, and use effectively the information needed. . . . Ultimately information literate people are those who have learned how to learn. They know how to learn because they know how information is organised, how to find information, and how to use information in such a way that others can learn from them. [American Library Association, 1989 as quoted in Dupuis, 1997, p. 98]

An important implication of information literacy is that differences in the ability to deal with information are important explanations of variation in human capital [Schultz, 1975].

Another implication of information literacy is that there will be less reliance on the standard broadcast, newspaper, and Extension Service "gatekeepers" to control and filter information and bring it to general attention. The role of such gatekeepers could be substantially weakened in an Internet society where everyone can read, see, and hear the information they want without centralized editorial filtering. For example, Lionberger and Gwin (1982) identified behavioral barriers that suppliers of technology face when they seek to "extend" the technology to customers in production agriculture, agribusiness, and rural communities.

Lionberger and Gwin also recognized that the diffusion of information depends on a system of public and private organizations working on innovation, dissemination, and integration. The amount of new information about new technologies that can be transmitted depends on the number of "change agents". As the base population of information literate firms and individuals grows, the speed of innovation within agriculture will increase. Limitations on that increase depend on public and private infrastructure investment, the stickiness of adoption and diffusion behavior, and markets.

In addition to the Extension Service publications in agriculture, pre-Internet society had newspaper editors, TV news assignment editors, and others whose job it was to decide what was and was not news. This helped to protect the public from information overload and to sell advertising. Information literate audiences may prefer to find and filter information for themselves rather than trust experts. However, because information now comes from more sources and in much greater quantity, the result may be greater editorial control of information rather than less (with costs offset by advertising). As Lionberger and Gwin suggest:

we must remember that no matter how sophisticated the equipment is, it exists to serve people's needs. We must still involve the audience in planning uses that fit their needs. They must help frame the questions and develop programs, if the tool is to be useful in the real world. [Lionberger and Gwin, 1982, p. 191]

A new title, the CIO (Chief Information Officer), is becoming common in large agribusinesses to accomplish the gatekeeper and filtering tasks, among others [Boar, 1993]. In essence, agribusinesses have their own internal Extension Service staffed by MIS personnel, web designers and programmers, and call center staff. This is supported by Wu et al. [1999], who argue that larger organizations require highly trained information users to serve as intermediaries or gatekeepers and to process public information, bringing specialized information to specific end-users within the firm. Smaller organizations might use a consultant to accomplish the same purpose. Rather than "drive private information sources out of business", government information "makes their role economically viable" because the sheer volume of information and its complexity requires that specialists serve as the new gatekeepers [Wu et al., 1999, p. 12]. Instead of using outsiders (such as newspaper editors or Extension Agents) as gatekeepers, the new gatekeepers for firms are insiders who act as transducers for inter-firm information; a boundary scanner is the transducer for intra-firm information [Afuah, 1998, p. 37].

2.4.4 Major Economic Properties of Information

According to Varian, "The most rapidly growing area in economic theory in the last decade has been in the area of information economics" [1992, p. 440]. This economics worldview was brought about partly by Stigler's seminal paper on the economics of information (1961), which built on the work of Ozga (1960). These arguments were extended by authors such as Colantini (1965) and Grossman and Stiglitz (1980), as well as by concepts such as the efficient markets hypothesis [Fama, 1970].

Fama (1970) named three forms of efficient markets. Kolb provides a thumbnail sketch of them:

The weak form of the efficient markets hypothesis claims that prices in a market fully reflect all information in the history of volume and price. The semi-strong version claims that market prices fully reflect all publicly available information. The strong version states that market prices reflect all available information, whether public or private. Private information includes information possessed only by corporate insiders and government officials [Kolb, 1990, p. 168].

From this work, two major economic properties of information are derived. The first concerns the value of information and the second concerns information search costs. These are followed by separate sub-sections concerning the treatment of information as a public good (2.4.5), asymmetric information (2.4.6), and other economic properties of information (2.4.7).

Perhaps the most basic economic property of information is the attachment of value to it. Valuation of information comes from several sources, and it helps to decompose value into seven elements. Based on a review of work from several disciplines (economics, marketing, sociology, and psychology), Sheth, Newman, and Gross [1991] synthesized value into five components: functional value, social value, emotional value, epistemic value, and conditional value. To these five are added two components from resource economics: option value and existence value. Each component suggests conditions under which seemingly intangible benefits or costs of a good or service (such as information) may show up in observed market prices. Together, these seven components make explicit the ways in which information may have value.

Functional value has always been in the economist's domain. Functional value is defined as value in use or exchange resulting from utility maximization. Proponents of economic psychology argue that utility maximization is not the sole determinant of choice. Katona [1951, 1953, 1963, 1975] argued that utility maximization depends on sentiments and subjective expectations, so that psychological reality may determine choice under some circumstances rather than price alone. Functional value and utility maximization may still dominate choice under assumptions of perfect information, but the remaining sources of value play an important role in determining utility.

According to Sheth, Newman, and Gross, the social value of a good is derived from its association with one or more distinctive social groups. Thorstein Veblen's conspicuous consumption hypothesis, Katona's fun and comfort needs, and the idea of social class are examples of sources of social value. Social value (used in this sense) could be called sociological value to avoid confusion with "social" cost-benefit accounting in economics.

Information is especially affected by two areas within the framework of social value. First, reference groups may play a role in information consumption. Group pressure to conform may be especially important: the corporate culture of a firm, for example, may play an important role in the volume and quality of information the firm uses or buys. Second, opinion leadership is known to play a significant role in the transmission of information. Opinion leaders (people who have more exposure to information than others) intervene between the mass media and the opinions and choices of others. The cholesterol information index (Brown and Schrader, 1990) is an example of an application of opinion leadership to the problem of food safety. Brown and Schrader assumed that physicians act as opinion leaders for their patients in giving advice about healthy, safe foods, and used medical journal articles about cholesterol as a proxy for information.

Emotional value, a third component of value, is covered by a broad range of research in psychology and marketing. The roles of personality, psychological health, and other individual attributes help to form the emotional value of information. Here, the main importance regarding information may rest in the areas of fear of information technology, the individual psychology of information overload, and the role of personality in information literacy.

Epistemic value is considered to come from the capacity of a good to arouse curiosity, or satisfy novelty-seeking or knowledge-seeking objectives. Consumption of information, information-gathering activities (such as web surfing and research), and communication of ideas about new information provide epistemic value for individuals in search of novelty or knowledge.

Sichel and Eckstein (1974) explain the importance of epistemic value in economics:

Diminishing marginal utility is an expression of the 'variety is the spice of life' philosophy of most individuals--that people prefer to have one or a few of a lot of different goods and services rather than a great many of only a few goods and services. [Sichel and Eckstein, 1974, pp. 128-129]

The combination of information and epistemic value has important economic implications. In addition to the increased well-being of the novelty or knowledge seeker, there can be effects on demands for goods other than information because of exposure to information. For possibly dangerous goods, safety information may serve to increase the epistemic value of substitute goods. Consumers may want to learn more about computer security software, for example, because of exposure to information about the danger of viruses.

A fifth consumption value is conditional value. According to Sheth, Newman, and Gross, "The conditional value of an alternative is derived from its capacity to provide temporary functional or social value in the context of a specific and transient set of circumstances." Ice cream on a hot day is often used as the classic example. The timing of information can clearly influence its conditional value as in the case of a firm that misses orders from consumers who need product in a hurry. Seasonal demands for many types of information may be based in part on their conditional value.

The last two components of value are from the resources economics literature. According to Pearce and Turner (1990), "Total economic value = Actual use value + Option value + Existence value" [Pearce and Turner, 1990, p. 131]. Actual use value is embodied in the five sources of consumption value already discussed.

Existence value covers cases when there is value for the existence of something (such as a wilderness area or other public good), whether or not the economic agent doing the valuation will ever derive any benefit from visiting the wilderness. In essence, existence value is "unrelated to any actual or potential use of the good" [Pearce and Turner, 1990, p. 134]. For some firms, the mere existence of a web site or information data bank has existence value as well, regardless of any actual or potential use. Pearce and Turner admit, "existence values are certainly fuzzy values" for which "it is not clear how they are best defined" [Pearce and Turner, 1990, p. 131].

While the motive behind existence value in the natural resources literature tends to be altruistic or sympathetic, this need not be the case for information to have existence value. There may be feelings of safety and security that come from the existence of some kinds of information. Sympathy for the so-called "digital gap" between information "haves" and "have-nots", for example, is used as a justification for various policy levers. Further, concerning electronic privacy, there may be an existence value to not having certain information about oneself made public over the Internet, etc.

Finally, option value captures the bequest and gift motives of future consumption, along with vicarious value obtained by someone else's use. "The Quasi Option Value (QOV) is the value for preserving options for future use given some expectation in the growth of future knowledge" [Pearce and Turner, 1990, p. 134]. The willingness to pay for information that might help a firm in the future would be an example. Hypercommunication redundancy is another example, where a second telephone or Internet provider is selected as a backup that may never be used.

Information's value must be balanced against its cost. Roger Noll observed in 1996 that the value of information must be balanced between information producers and consumers.

Putting barriers between potential users and the creator of the information is to limit the degree to which economic value will be derived from it. The other side of this dilemma is that, if no mechanism is in place for the inventors, or producers, or publishers, or disseminators, of the new information to recapture their costs, then people will not produce as much information as is socially desirable. [Noll, 1996, p. 39]

An additional point Noll makes about the cost of information relates to hypercommunication technologies. He notes that many of the "first copy" costs of publishing scientific journal articles, for example, are "independent of the medium" [Noll, 1996, p. 41]. Yet, "other kinds of fancy electronic publication possibilities vastly reduce the cost compared to print" [Noll, 1996, p. 41].

In conventional economic demand models (whether consumer or business-to-business), the assumption is that all parties possess some kind of information parity. Should the seller, buyer, or both have "limited" information about pricing, quality, or competitors, the full information assumption must be relaxed and uncertainty results. Information is said to be asymmetric in such cases because the amount or quality of one economic agent's information is greater than another's. Stigler's contribution initially came from his recognition that information seeking is not a costless activity. Baumol and Blinder summarize and extend the point:

Neither firms nor consumers have complete information because it would be irrational for them to spend the enormous amounts needed to get it. As always, the optimum is a compromise. One should, ideally, stop buying information at the point where the marginal utility of further information is no greater than its marginal cost. With this amount of information, the business executive or the consumer is able to make what have been referred to as 'optimally imperfect' decisions. [Baumol and Blinder, 1991, p. 621]
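The "optimally imperfect" stopping rule quoted above can be sketched with hypothetical numbers: buy successive units of information while each unit's marginal value still covers its marginal cost, and stop at the first unit for which it does not.

```python
def optimal_information_units(marginal_values, marginal_cost):
    """Count the units of information worth buying: purchase while the
    marginal value of the next unit is at least its marginal cost."""
    units = 0
    for mv in marginal_values:
        if mv < marginal_cost:
            break  # further search costs more than it is worth
        units += 1
    return units

# Hypothetical diminishing marginal values of successive reports (dollars)
mv_schedule = [50, 30, 18, 11, 7, 4]
cost_per_report = 10

print(optimal_information_units(mv_schedule, cost_per_report))  # → 4
```

With these assumed figures, the fifth report is worth only $7 against a $10 cost, so the "optimally imperfect" decision is made with four reports and the remaining ignorance is rational.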

However, it is not obvious to a business executive or consumer how much information should be purchased with available time or money. Often, an individual relies on an agent for information, leading to a possible principal-agent problem.

Though hypercommunication allows firms to gather more information at much lower fixed and variable costs than ever before, there are new costs involved. Labor costs, such as those needed to train or hire information literate workers, may actually be higher in the information age. Certainly, both the relative and absolute costs of communicating and processing information have fallen in many ways. However, the tradeoff between the costs and benefits of information is probably more important now than ever before. Furthermore, the quality or competitive superiority of information, rather than its quantity, is what matters most economically.

2.4.5 Information as a Public Good

The degree to which information is a public good is central to its economic role. It is a matter of degree instead of a binary state as Lamberton points out:

The answer will depend on whether we are dealing with all purpose information or are being more practical, recognizing that there are many kinds of information. [Lamberton, 1996, p. xxiv]

It has, he goes on, "traditionally been regarded as a public good" because it is indivisible, non-rival, and non-excludable [Lamberton, 1996, p. xxiv].

However, unlike air or national defense, information has weightless, idiosyncratic characteristics, as Macdonald (1992) states:

Information is different from other economic goods; unlike them it cannot be displayed to a potential buyer, otherwise he will possess without having to buy. So those who buy information are always uncertain of precisely what it is they are buying. So imperfect is the market for information that price alone may determine demand, with information unwanted when it is sold cheaply and the same information in much demand when it is expensive. Consultants may sell a report much more easily than academics may give away the same report. Exclusivity generally increases the value of information to the buyer, but as information may be reproduced at little cost and always remain with the seller anyway, exclusivity can be hard to guarantee--information tends to be of little use in isolation; the buyer seeks to purchase only that information which is compatible with that he already has--information itself may be a non-perishable good, but its value to many customers tends to be extremely time sensitive. [Macdonald, 1992, p. 55]

In agriculture, for example, both weather forecasts and crop reports are examples of information that is a public good. If a Florida citrus processor downloads weather forecasts from the NWS (National Weather Service) website, that does not prevent a New York futures trader from doing the same. In that sense, the weather information is non-rival; the citrus processor's use of the information does not prevent the futures trader from using it also. One of the characteristics of a public good is non-rivalry.

Another characteristic is non-excludability: the NWS cannot control how the weather information is used or who uses it. Hypercommunications allow such information to be distributed to a larger audience and gathered from a wider number of sources, for a lower variable cost than in the pre-convergence world of hypocommunications. Furthermore, public discussion of such information on Internet newsgroups, for example, can also help individuals weigh the value of existing information.

Information literacy skills allow raw information such as weather forecasts (a public good) to be more efficiently transferred into proprietary market intelligence (a private good) subject to behavioral constraints of processing and diffusion. Information literacy skills within a firm's labor force allow the information processing task to be internalized, so that the firm can substitute cheap access to information for costly purchased information inputs.

Information that has been processed using information literacy skills (especially as part of a business' information processing strategy) is excludable and hence a private good; see Porter and Millar (1985) for an early example of the strategic implications for firms. For example, if weather data are reported on the citrus processor's website and re-arranged by crop region or county, they have been processed, but in a cursory and non-excludable way. Suppose instead that the futures brokerage employs an information super-literate analyst who correlates National Weather Service data with ENSO (El Niño) data to form a statistical model of freeze damage that accurately predicts prices. The information processing then yields a (temporarily) excludable set of information. Unless the citrus processor is given or sold the results of the model (together with their interpretation), he is excluded. It makes no difference that all of the raw data was obtained essentially at no charge from the Internet by both organizations. Even though both organizations used hypercommunication services to access the data and computers to process it, only one used information literacy to create a private information "good" from public good inputs. However, this advantage can be short lived, depending on the form of market efficiency.

Note that there is little difference between the weather information example and the definition of technology itself, except for one detail: IT and hypercommunications technology were used to gather and process the information and to add value to it. The difference between information and technology thus depends on context. Hypercommunication technologies may deliver an information input that is scientifically unrelated to agribusiness; nevertheless, better hypercommunications technology allows the agribusiness to enlarge its production set or to market more efficiently using the agricultural sciences.

2.4.6 Asymmetric Information

Inherent to the excludability of information is the asymmetry of information, or "situations where one economic agent knows something that another economic agent doesn't" [Varian, 1992, p. 440]. In a two-agent situation, there are four cases: both parties have limited information, the seller alone has limited information, the buyer alone has limited information, or both parties have full information. Theory suggests that information can provide competitive advantages and even a high degree of market power. Such advantages depend on numerous factors and may tend to be short lived [Carlton and Perloff, 1994].

One reason protocols and standards in hypercommunications (as well as grades and standards in agriculture) exist is to protect an inherently weaker party to a market transaction from fraud due to asymmetric information. Specific examples of how such standards work appear in 4.5.2.

Note that asymmetric information does not necessarily imply asymmetry in bandwidth (more precisely, data rate), or vice versa. The term asymmetry is used in two ways in hypercommunications. First, it identifies the degree to which a given call or session is two-way. For example, most pagers currently on the market are strictly one-way, receiving rather than transmitting information, and many telephone calls, such as those on most speakerphones, do not allow parties at each end of the conversation to speak at once or to interrupt continuously (as in full-duplex, half-duplex, or simplex hardware). Second, asymmetry for hypercommunications carriers depends on whether download and upload bandwidths are equal. So-called 56k modems have a download capacity of 56 kbps but an upload capacity of only 33.6 kbps.
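The carrier sense of asymmetry reduces to a simple ratio of download to upload capacity. A minimal sketch, using the modem figures above plus a symmetric T1 line added here as an assumed point of contrast:

```python
def asymmetry_ratio(down_kbps, up_kbps):
    """Downstream/upstream capacity ratio; 1.0 means a symmetric link."""
    return down_kbps / up_kbps

# Nominal capacities in kbps (T1 added for comparison)
links = {
    "56k modem": (56, 33.6),
    "T1 line": (1544, 1544),
}

for name, (down, up) in links.items():
    print(f"{name}: asymmetry ratio {asymmetry_ratio(down, up):.2f}")
```

The 56k modem's ratio of roughly 1.67 makes it an asymmetric link, while the T1's ratio of 1.0 makes it symmetric, regardless of whether the information carried over either link is itself asymmetric.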

It may be that companies achieve a degree of market power on the WWW (World Wide Web), for instance, depending on the interactivity and capacity of their websites. A dynamic or visually spectacular website usually requires the transmission of more text and binary content to viewers than a static one does. However, site viewers send approximately the same number of characters regardless of whether they receive plain text or high-resolution graphics. Hence, web designers face a tradeoff between using high-bandwidth sound, images, and video and the reality that a sizable proportion of web surfers have download speeds of 56 kbps or less. The information itself may or may not be asymmetric, but the transmission is, and this technical asymmetry can lead to information asymmetry.

In addition to the transmission context, asymmetric information can have important economic repercussions for hypercommunications sellers with far better information than their customers or competitors about services, technologies, and system reliability. Hypercommunications pricing depends on the economic asymmetry of information at least as much as it does on asymmetry of information in a bandwidth sense. The sheer complexity of services, technologies, options, and pricing plans can give a hypercommunication supplier a considerable edge over the buyer. This point is revisited during the discussion of unlimited complexity in 2.5.2. Comparison-shopping is hindered through bundling and unbundling of services, and by price policies that vary from metered to measured to unlimited access.

2.4.7 Other Economic Properties of Information

Information has several other important economic properties. First, information has value in reducing risk or uncertainty, a value that depends on the risk aversion of firms. For agribusiness, the value of information is related to its use in reducing uncertainty in the decision-making process [FAO, 1986, p. 42]. Three kinds of information are usually named: normative, positive, and prescriptive. Normative information reflects values and social norms to yield what "should" be done along a right-wrong continuum of beliefs. Positive information is meant to be an objective representation of facts, conditions, and other factors, independent of any moral imprimatur. Prescriptive information uses both normative and positive information to arrive at a prescription or decision via decision rules. Decision rules such as compromise, consensus, laws, and physical coercion combine with imperfect positive and normative information to yield prescriptive information that guides decision-makers.

Second, information's use in organizational decision making is closely related to horizontal and vertical integration, organizational shape, and organizational form. Individual agents' focuses tend to be narrow, limited to personally obtained knowledge of available communications options, so businesses instead rely on a group or team approach that can create principal-agent problems. The principal, or corporate executive in charge of hypercommunications, often relies heavily on advice from agents representing competing interests within the company or from agents outside the firm.

For example, the marketing department may see a web site, domain name, and ISP differently from the vision conceived by the MIS department. There is considerable risk that technology will be employed inefficiently and/or ineffectively if participants have conflicting objectives, have unequal information, or are not all stakeholders in the outcome. However, this may be simple specialization, because as Baumol and Blinder admit:

Obviously, if participants in the market are ill-informed, they will not always make the optimal theoretical decisions described in our theoretical models.

Yet, not all economists agree that imperfect information is really a market failure. They point out that information, too, is a commodity that costs money to produce. Neither firms nor consumers have complete information because it would be irrational for them to spend the enormous amounts needed to get it. [Baumol and Blinder, 1991, p. 621]

Therefore, seemingly narrow focuses of individual agents may be evidence of properly functioning markets rather than proof of market failure due to asymmetric information.

A third economic property of information is more insidious: information becomes the prize in a zero-sum game in which one party does not know the rules, or does not know that it has lost personal information to the "winner". For example, customer profiles (including the "cookies" that surreptitiously gather information about each computer household in an audience) are one way of avoiding respondent burden and customizing information, but at a possibly invasive cost if "spammers" use cookie data. Therefore, security, privacy, source credibility, and information overload may become more important issues than ever before, as will the technological literacy of the public. Information may have plainly negative or hidden negative value in many cases as the hypercommunications model replaces the interpersonal and mass models.

2.5 The Frontier of "Unlimited" Communication

Librarian of Congress emeritus Daniel J. Boorstin was asked in 1997 to name the ten leading ideas of the second millennium that have shaped Western civilization and world history. The ninth of these, "unlimited communication", he described as:

beginning with printing and the rise of literacy (since the 15th century) and electronic communication (since the 20th century). Reliance on greater communication, leading to a more universal awareness of the human condition, problems, and opportunities. [Boorstin, 1997, p. 33]

It would not be difficult to cite other, more excited proclamations that hypercommunications is ushering in a new cyber frontier with the possibility for greater wealth and welfare worldwide. However, the implications of the new cyber frontier of unlimited communication are not typically broken down for agribusiness.

This section has two parts. First, it discusses the differences and similarities between the unlimited frontier of communication and information and the historical experience with land as a productive factor in agriculture (2.5.1). Second, it covers other aspects of unlimited communication (of a distinctly limited nature) (2.5.2).

2.5.1 Unlimited Frontiers: Land vs. IT

Assume that more is better (up to a saturation point) when it comes to being able to communicate. If hypercommunications market prices fall low enough to enable "unlimited communication" relative to the present, it could mean society's knowledge base would grow at an ever-increasing rate. However, unlimited communication can be an unlimited bad if individuals and businesses are bombarded with telemarketing calls, spam, and security problems such as viruses, privacy intrusions, and identity theft.

The popular press is full of hype about the unlimited possibilities of the information economy. However, whenever economists hear a term like "unlimited", it becomes more than an academic question to evaluate that unlimitedness and find its constraints. After all, if the astronomers are right and there is a finite end to the Universe, it would seem that everything has a limit. However, limitations on information, human knowledge, wisdom, and technology do not come from physical science textbooks in neat mathematical form, as they perhaps could in astronomy. Many argue that the mind is limitless compared to the physical plants of the previous manufacturing age. Yet the information age has limitations. While these limitations are different (albeit more elastic than those of the industrial or manufacturing economy), the weightless information economy still has them.

Agricultural economists are particularly suspicious of claims of an unlimited frontier, because of America's historical experience with land. Indeed, the historical comparison between the (land) frontier and the cyber frontier is instructive. As the United States went from independence through the Civil War and beyond, it seemed as if land and natural resources were unlimited. The United States had an agricultural frontier from colonial days up until World War I. Much of the economic growth seen by the United States from 1800 until 1900 was simply due to the bringing of new land into cultivation. According to Hughes, "Until 1910, it was increased land input with a fairly constant yield per acre that enabled the great growth of Northern agriculture to occur" [Hughes, 1990, p. 287]. Eventually, the settlement of marginal areas showed economists that agricultural output could not increase at a faster rate than new acreage was brought into production. Agricultural productivity increased when Vermont farmland was abandoned in favor of Iowa land [Hughes, 1990, p. 287], but once settlement occurred on the marginal "unlimited" frontier lands of the western plains, the average productivity of (unlimited) land began to fall. Therefore, parts of the country were not settled, not because the pioneers somehow missed them, but because the cost of producing crops on new land was greater than the expected stream of returns from those crops.

The early 1920's brought a major recession that seriously plagued many agricultural areas of the country, with a few notable exceptions--those lands that were still frontiers. Florida was one such place then, a frontier boom land. A warm climate and a large supply of vacant land suggested that productive new areas were ripe to be opened up for agriculture. According to land sales claims, the possibilities were (locally) unlimited. Florida began the decade with 968,000 inhabitants, with over 60 percent of the population in rural areas (un-incorporated places or incorporated ones with under 2,500 people). Had the speculative "land bubble" of the 1920's (which caused over-valued Florida real-estate prices to plunge) not burst, population growth might have exceeded the half million new Floridians added to the state's ranks in the 1920's [Marth and Marth, 1992, pp. 50-51]. Because of the historical experiences with the western frontier and Florida in the 1920's, it is hard to conceive of a truly unlimited frontier.

However, unlimited communication may be entirely different economically from the familiar agricultural production problem because new land and new technology differ. As new, previously un-farmed land is brought into production, agricultural output rises, but at some point, the cost of farming new acreage is greater than the profit it will bring. This point is where the "unlimited" land frontier faces a binding economic constraint. However, unlimited communication and the cyber frontier are different because:

First, the accumulation of an intangible such as knowledge is not subject to any physical bounds. Moreover, there is nothing in the historical evidence to suggest that humans are exhausting the potential for advancing knowledge. [Grossman and Helpman, 1991, p. 17]

Schumpeter [1943, p. 118] wrote about how the cultivation and development of new land differs from the cultivation and development of new technology. With land, the best lands are taken first, leaving marginal acreage that is bound to run into diminishing returns. New technology differs, Schumpeter argued, because:

we cannot reason in this fashion about the future possibilities of technological advance. From the fact that some of them have been exploited before others, it cannot be inferred that the former were more productive than the latter. And those that are still in the lap of the gods may be more or less productive than any that have thus far come within the range of observation. [Quoted in Grossman and Helpman, 1991, p. 17]

Limitations on the information economy may not be conventional, but that does not imply that there are no limitations at all. Instead, physical, mental, and psychological obstacles to unlimited communication translate into limitations on economic behavior.

Increasing returns to scale are often seen as the source of the unlimitedness of communications and information technologies. As mentioned in section 2.3, increasing returns to scale is one reason the information economy seems to behave differently than the industrial economy. Often, however, far from being a source of unlimitedness, those returns increase only within some relevant range due to inevitable physical constraints. Varian supports this view:

What would be an example of a technology that had increasing returns to scale? One nice example is that of an oil pipeline. If we double the diameter of a pipe, we use twice as much material, but the cross section of the pipe goes up by a factor of four. Thus, we will likely be able to pump more than twice as much oil through it.

Of course, we can't push this example too far. If we keep doubling the diameter of the pipe, it will eventually collapse of its own weight. Increasing returns to scale usually just applies over some range of output. [Varian, 1986, p. 319]
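Varian's pipeline arithmetic can be verified directly: material per unit length scales with the circumference (proportional to the diameter), while capacity scales with the cross-sectional area (proportional to the diameter squared). The following sketch, using an illustrative one-unit pipe, confirms the two-for-four relationship:

```python
import math

def pipe_scaling(diameter):
    """Return proxies for material used and flow capacity of a pipe.

    Material per unit length scales with circumference (pi * d);
    capacity scales with cross-sectional area (pi * (d / 2) ** 2).
    """
    circumference = math.pi * diameter
    area = math.pi * (diameter / 2) ** 2
    return circumference, area

c1, a1 = pipe_scaling(1.0)   # original pipe
c2, a2 = pipe_scaling(2.0)   # diameter doubled
print(c2 / c1, a2 / a1)      # 2.0 4.0: twice the material, four times the area
```

Doubling the pipe thus raises capacity faster than cost, which is precisely the increasing-returns range Varian describes, until structural limits (the collapsing pipe) end it.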

Increasing returns to scale are traceable to the economics of networks, the non-conventional behavior of non-rival inputs, and other aspects of "unlimited" communication. However, the very non-convexities they bring to production do not automatically translate into economies of scale in the same way that homogeneous technologies would. The same goes for increasing returns to scope, span, and system. Economic limitations still occur even when technological limitations do not appear to. However, as Leijonhufvud states, economists "lack a widely-accepted theory of pricing under increasing returns, lack a convincing model of how competition operates between firms with increasing returns, and lack a micro-founded theory of income distribution" for increasing returns as well [Leijonhufvud, 1989].

The next section takes this reliance on unlimited communication further by showing how popular myths of an unlimited cyber age do not mean the end of economic limitations. Even if communication has reached an unlimited frontier due to technology, the information economy is subject to constraints that stem from human economic behavior.

2.5.2 Other Aspects of "Unlimited" Communication

The information economy and communications may seem unlimited, but each has many limiting factors. This section discusses six limitations to unlimited communication that are (or are likely to become) binding economic or technical constraints: unlimited data through boundless bandwidth, unlimited complexity, limited audience, limiting geography, unlimited time through compressed time, and limited time. Each could slow the arrival of unlimited communication.

2.5.2.1 Unlimited data through boundless bandwidth

In science and in business, there has long been a difference between information and data. An analogy might be mineral exploration, where copper (information) is mined from ore (data). There is great value in the exchange of information, assuming at least semi-strong form efficient markets [Fama, 1970]. However, just as a mine in an El Niño year can be flooded by water that fills tunnels and suspends production, a firm can be buried by an overload of information. Just as an individual user may become bogged down in a morass of un-read, un-answered e-mail, so too an e-agribusiness can be buried in un-answered, un-answerable, misrouted, waylaid, or intercepted e-mail. This can be true of any form of hypercommunication message, but it is most clearly seen in the Internet context.

This point reintroduces the idea of attentional economics along with logistical and attitudinal challenges. A Florida farm, ranch, or rural organization may try to use the Internet to communicate very cheaply and in personalized detail with its current customer base, suppliers, employees, or potential customers. However, the organization must be ready and willing to answer e-mail (at least to acknowledge receipt of the message), instantly if possible. The rapidity of response will depend on the nature of the business. An agribusinessperson can now call, e-mail, or fax clients from home, office, sod farm, or golf course. However, he or she may use technology to "screen" increasingly annoying communication interruptions and avoid interpersonal contact, thus annoying customers.

Much hypercommunication centers on an open architecture's free inflow and outflow of data. Unlike a physical ruby mine, where guards can be posted to prevent intruders and thieves, valuable information travels over bandwidth in fibers and through spectra without a foolproof way to guard gems of information. Unlike excludable precious gems, however, stolen information does not have to vanish from a vault for the theft to have occurred. Hence, theft can occur without the owner's knowledge.

Unlimited data also result in vast flows of unwanted or unneeded information. It is almost costless for a marketer to "spam" us or set a "cookie", or for hackers to crash a communication network. Not everyone wishes to see Playboy TV or the Trinity Broadcasting Network. Viewers can try to filter out the noise or demand that their carrier or government do so. Generally, however, if a firm wants the benefits of information, it must pay the cost of filtering it and securing its value. The unlimited data phenomenon is precisely what filtering seeks to avoid. Only knowledgeable customers, government agencies, or protocols can reduce the risks to firms that connect to an ISP.

As the price of delivering bits of data falls and technology increases the available bandwidth that carries the data, it appears as though bandwidth is boundless. However, filtering and security require that limits be placed on hypercommunication even if it is technically possible for super exponential growth to continue. Just as in ordinary personal communication, a reasonable person would be expected by economic theory to behave in a guarded manner. However, the degree of guardedness is a tradeoff between costs and benefits.

2.5.2.2 Unlimited complexity

Northern Telecom's memorable waitress series of television commercials illustrates the complexity of converged technologies, or "Power Networks" in Nortel's parlance. Power networking is inherently difficult to comprehend because it is the technologically complicated merging of voice calls, data networks, Internet traffic, e-mail messages, and other services. The commercials depict hard-pressed corporate decision-makers at a diner. When one of the young business people admits to confusion, a waitress knowledgeably discusses fine points of power networking in dense jargon, taking her customers' breath away. In fact, Nortel's definition of power networking is an operational definition of hypercommunications:

Power Networks extend beyond the integrated wide area network to include an organization's call centers, the Internet, intranets, enterprise mobility, multimedia communications applications, and telephony environments. With Power Networks, these applications, and the rest of an organization's communications infrastructure, are holistically integrated to improve business performance. Companies can no longer afford to configure and implement these technologies independent of each other. [Nortel, 1997]

However, this is not an inherently simple "make or buy" business decision. With such a variety of terminology, jargon, and especially, acronyms, the assumption of basic technology literacy on the part of both consumers and producers can be questionable.

Hypercommunications transactions often occur in spite of the fact that neither the buyer nor the seller is completely familiar with every aspect of services and technologies. Yet, there is often universal feigned understanding. There can be an embarrassment cost or risk of job loss from appearing not to know what a particular term means. There can be confusion due to asymmetric definitions of terms, or based on differences in values, beliefs, expectations, educational level, or age group.

Many hypercommunications terms carry multiple operational definitions; indeed, most (if not all) have at least two. For many terms, there is one set of definitions in print, another circulating in cyberspace, and still another under various stages of proprietary research and development. Furthermore, the definitions are not all frozen in time or scope even if a transaction's contract is. Frequently, old meanings are used for hypercommunications words with evolving new potential.

As with any complex product or service, however, the market is able to function without each agent having a full and complete knowledge of the inner workings of the tiniest technical mechanism. Frequently, however, one better informed market player is able to use information and knowledge about such inner workings to his advantage for some time. The degree and duration of such an information advantage depend on how the underlying market is defined.

An important source of complexity comes simply from sheer communications volume. Whittaker and Sidner [1997] studied business e-mail users. They found that e-mail was used for more than communication alone, noting that task management and personal archiving were particularly important unexpected functions of e-mail in business settings. The term e-mail overload describes these non-communications functions of e-mail:

E-mail overload creates problems for personal information management: Users often have cluttered inboxes containing hundreds of messages, including outstanding tasks, partially read documents, and conversational threads. Furthermore, user attempts to rationalize their inboxes by filing are often unsuccessful, with the consequence that important messages get overlooked, or "lost" in archives. [Whittaker and Sidner, 1997, p. 277]

In the study, inboxes comprised 53% of total e-mail files; the mean number of inbox items was 2,482, compared with an average of only 858 "filed" items [Whittaker and Sidner, 1997, p. 281]. Filing is hard for people because they cannot remember where information is filed.

Thus, unlimited complexity can also be a source of unlimited headache. Software and hardware crashes and e-mail overload are but the tip of the iceberg. Firms may have to re-train employees as new technologies are added to keep up with increasing communication volume, losing valuable time. Switching costs include the time it may take to learn how to use new services or to operate new software or hardware, as well as lost revenue due to technical and human glitches that alienate customers.

2.5.2.3 Limited Audience

Even if state-of-the-art hypercommunication technologies offer an unlimited number of ways to communicate, unlimited communications require an unlimited need to communicate. Often, commentators are so swept up by excitement about the tremendous positive impact of technology that they forget about impediments to widespread adoption of new technologies. Many of these, such as attitude toward technological change, aptitude for learning new skills, and the time it takes to diffuse innovation, have already been covered. These presuppose that there "should" be a present or future demand.

Even if these limiting factors are removed, unlimited communications face a limited audience. The audience is limited by population, infrastructure access, language, attention, attention span, educational level, and a host of other factors.

2.5.2.4 Limiting geography

A fourth point concerns the inequitable geographical distribution of new hypercommunication technologies. This can occur because of regulatory barriers, regulatory inducements, and a variety of short-run adjustment mechanisms. Most importantly, the geographical distribution of new technologies depends on the actions of suppliers based upon the cost of infrastructure upgrades. Infrastructure penetration will also affect land prices and the rate of transition from agricultural to urban use in Florida.

In parts of urban and suburban Florida, firms such as BellSouth, GTE-Bell Atlantic, AT&T-MediaOne-Roadrunner, Time Warner, and Sprint offer a variety of hypercommunication services ranging from cable television to local telephone service to high-speed Internet. But BellSouth and Florida's other ILECs (Incumbent Local Exchange Carriers) are prohibited by federal and state regulators from offering some services their competitors are permitted to provide over the ILEC's existing wire and switching networks. Currently, fewer than twenty percent of COs have DSL statewide and less than 0.5 percent of cable customers have fiber optic lines.

A high-speed hypercommunications infrastructure may never reach parts of rural Florida unless land use changes to residential development because a greater population density may be needed to cover the fixed cost of developing service. For example, Disney's new small town of Celebration, Florida (located just south of Disneyworld in Osceola County) offers enough hypercommunications amenities that Yahoo! named it Florida's most wired small town. However, it is unique among smaller communities.

Two implications that are crucial for agriculture flow from the inequitable geographic distribution: land prices and transition of land from agriculture to urban and suburban uses. Hypercommunications availability is a land-related factor of production for agribusiness. In his 1958 Yearbook of Agriculture article, "Oranges Do Not Grow in the North", Ronald L. Mighell wrote:

farmland can be used for more different purposes than ever before. The properties inherent in the land are now less restrictive, and other resources determine oftener what the most economic use shall be. The characteristics of land nevertheless still set limits that influence the broad patterns of agriculture. The successful farmer is the one who learns how to cooperate with the natural and biological processes that are linked to land. [Mighell, 1958, p. 404]

Mighell then divided land's physical factors from its economic factors. One of those economic factors was "communication facilities".

Land can become more valuable with a high-speed hypercommunications infrastructure "to the curb". Land transition from production agriculture to urban uses could be particularly affected. The cost of holding land in agriculture is already reduced because of advantageous tax treatment, but the value of land tends to appreciate due to infrastructure improvements. Hence, infrastructure improvements create a tradeoff between rising real estate values and increases in the return per acre to agribusiness. If returns per acre rise faster than real estate values due to infrastructure improvement, farmland is more likely to remain in agriculture. However, this result is limited by the geographic extent of infrastructure development.

2.5.2.5 Unlimited time through compressed time

Another brake on unlimited communications is time. Time remains a limiting factor even though computer processing and hypercommunications connection speeds are faster than ever. There are still only 24 hours in a day. Regardless of how fast the information economy and the businesses inside it move, technology cannot create extra hours, extra days, or extra years. New hours, minutes, and seconds are not created by a digital clock. However, information technologies perform tasks in seconds that previously took hours.

This introduces the concept of time compression. The ability to get more done in less time can shorten decision periods, or the length of run, an important aspect of the influence of time on economics and vice versa. Recall Persky's 1983 discussion of four periodicities: the VSR (where all inputs are fixed), the SR (where at least one input is fixed), the LR (where all inputs vary), and the VLR (where technology itself varies). The length of run is often considered constant across firms and not influenced by technological change.

There are two economic implications of time compression. First, unlimited information (updated frequently) can give firms the ability to make better-informed decisions more quickly. Second, at the industry and macro level, time moves faster as the result of unlimited communication. Investment decisions are made more swiftly, inventories are managed in real-time, cash flows into and out of the firm can be faster, and delay-sensitive transaction costs fall.

For these reasons, a new method of measuring time, Internet years (sometimes likened to dog years), has been proposed. Suppose the fast-changing nature of the information age were recognized by a calendar change in which one Internet year equals one-seventh of a regular year. Technology-savvy firms that used hypercommunications to their full advantage would adjust to changes in technology up to seven times faster than their slower competitors. The way firms plan, budget, and make decisions would then be altered if their thinking replaced one decision period with seven (each with the importance of one old year). The firm would face new discount rates and opportunity costs of growth based on compressed-time returns to high technology.
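The arithmetic of compressed-time discounting can be sketched numerically. In this hypothetical illustration (the 7 percent annual discount rate and the $100 payoff are assumed figures), a firm that treats one-seventh of a calendar year as a full decision period must convert its annual rate into an equivalent per-period rate. The present value of a payoff one calendar year out is unchanged, but the firm now evaluates seven decision points along the way:

```python
# Hypothetical compressed-time discounting: one "Internet year" is taken
# to be one-seventh of a calendar year, per the dog-years analogy.
annual_rate = 0.07        # assumed annual discount rate (illustrative)
periods_per_year = 7      # seven Internet years per calendar year

# Equivalent per-period rate so that seven periods compound to one year.
per_period_rate = (1 + annual_rate) ** (1 / periods_per_year) - 1

# Discounting a $100 payoff one calendar year out gives the same present
# value either way; only the number of decision points changes.
pv_annual = 100 / (1 + annual_rate)
pv_compressed = 100 / (1 + per_period_rate) ** periods_per_year
print(round(per_period_rate, 4), round(pv_annual, 2), round(pv_compressed, 2))
```

The per-period rate is much smaller than the annual rate, but a firm re-optimizing at every Internet year can revise plans seven times while a calendar-year planner revises once.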

An important question is how varying relative periodicities affect information asymmetry and empirical measurement. One example of this is seen in contract lock-in, a possible disadvantage for buyers in the market for bandwidth. Often, hypercommunications suppliers sell bandwidth only through annual or multi-annual contracts where unit rates fall as terms rise. By locking customers into a long-term contract (such as a three-year term), a carrier can use time compression and fast-changing market conditions to its advantage while the buyer cannot.

The impact will depend on whether service is priced on a metered, measured, or unlimited access basis. Buyers cannot adjust to sudden network bottlenecks by buying from another supplier unless they have a redundant hypercommunications provider. A buyer may be stuck with too much bandwidth but be forced to pay for unused amounts in a measured or unlimited access plan. Alternatively, a buyer may require more bandwidth than the measured plan allows and have to pay premium rates for a particular transmission, time of day, date, week, or month. Furthermore, a buyer may be unable to take advantage of cheaper, more efficient hypercommunication services due to the contractual arrangement or be unable to replace broken equipment with state-of-the art hardware if it is incompatible with the contracted carrier's network.
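The metered-versus-unlimited tradeoff described above can be made concrete with a break-even calculation. The prices below are purely hypothetical assumptions for illustration, not market figures:

```python
# Hypothetical pricing bases for bandwidth (illustrative numbers only).
METERED_RATE = 0.50      # assumed charge per gigabyte under a metered plan
UNLIMITED_FEE = 120.00   # assumed flat monthly fee under an unlimited plan

def monthly_cost(gb_used, metered=True):
    """Monthly bill: per-gigabyte under metering, flat fee otherwise."""
    return gb_used * METERED_RATE if metered else UNLIMITED_FEE

def break_even_gb():
    """Usage above which the unlimited plan becomes the cheaper choice."""
    return UNLIMITED_FEE / METERED_RATE

print(break_even_gb())   # 240.0 GB per month under these assumed prices
```

A buyer locked into the wrong basis pays either for unused capacity (an unlimited plan with light use) or premium overage (a metered plan with heavy use), which is why contract lock-in matters when prices change quickly.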

Time compression and relative differences in decision periods among firms or between markets can cause a phenomenon known as data interval bias [Bass and Leone, 1983; Vanhonacker, 1983; Bass and Leone, 1986; Kanetkar, Weinberg and Weiss, 1986]. Data interval bias refers to the fact that coefficients from empirical models such as the elasticity of demand depend on time aggregation and the periodicity or data interval. For example, a separate conceptual model exists for weekly, monthly, and quarterly demand [Pasour and Schrimper, 1965; Sexauer, 1977; Capps and Nayga, 1990]. However, along with aggregation issues, time compression adds the problem that recent periods are shorter in length than distant periods before aggregation.

The econometrics and marketing literature and practice traditionally were limited to several standard regression assumptions [Kennedy, 1993, pp. 1-9]. These include: linearity, a zero-mean disturbance, disturbances with constant variance and no correlation (the absence of heteroskedasticity and autocorrelation), regressors fixed in repeated samples, no exact linear relationship among independent variables (no multicollinearity), and a sample size adequate for estimation. An explosion in recent literature has led to investigation of increasingly complex models that bolster and batter statistical theory but often follow Goodhart's Law (1978) to certain breakdown when put into practice. Time compression due to technological change could be modeled through acceleration in rate-of-change variables using a changing-coefficients model while keeping traditional equivalent periods.

Theoretically, a built-in data interval bias can be found for exponential and population growth models, regional economic models, financial analyses, or economic cost-benefit analyses. Each of these requires equivalent time periods and a constant compounding or discount rate. Under time compression, either the length of data interval, the discount rate, or both would change over the period of analysis.

In a practical sense, when modeling hypercommunications, sources of data interval bias are easily found. First, time compression is associated with better, more timely information, the limit of which is "perfect" real-time information. Wholesale hypercommunications prices can change in continuous real-time, by the hour, day, or month, though retail sellers lock in prices using contracts. A measure of demand sensitivity to price could therefore be hard to obtain. As the progression to a unified hypercommunications market continues, demand data from a variety of markets and periodicities could be needed. Because elasticities vary with the data interval, and theory provides no rule, the selection of the "best" interval for aggregation can be subjective. One rule-of-thumb is that the "best" period is the one that most closely corresponds to the decision period and has sufficient variation to produce useful or even estimable estimators [Hanssens, Parsons, and Schultz, 1990]. Empirical analyses will have to follow an array of changing products, firms, industries, pricing structures, and measures in both time series and cross-sectional data due to rapid changes in the fast-paced hypercommunications market.
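Data interval bias can be illustrated with a small simulation (all numbers are hypothetical and the setup is a sketch, not a claim about any actual market). Daily demand is generated from a log-log model with a known price elasticity of -1.5; aggregating to thirty-day periods by summing quantities and averaging prices, then re-estimating the same regression, generally yields a different coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 3,600 "days" of constant-elasticity demand: ln q = a + b ln p + e,
# with a true price elasticity of b = -1.5 (all numbers are hypothetical).
days, true_b = 3600, -1.5
ln_p = rng.normal(0.0, 0.3, days)                  # daily log prices
ln_q = 2.0 + true_b * ln_p + rng.normal(0.0, 0.1, days)
p, q = np.exp(ln_p), np.exp(ln_q)

def elasticity(prices, quantities):
    """OLS slope of ln(quantity) on ln(price): a log-log demand regression."""
    return np.polyfit(np.log(prices), np.log(quantities), 1)[0]

# Aggregate by summing quantities and averaging prices over each interval,
# then re-estimate; the coefficient depends on the chosen data interval.
estimates = {}
for interval in (1, 30):                           # daily vs. thirty-day data
    q_agg = q.reshape(-1, interval).sum(axis=1)
    p_agg = p.reshape(-1, interval).mean(axis=1)
    estimates[interval] = elasticity(p_agg, q_agg)

print(estimates)   # the estimated elasticity varies with the data interval
```

Nothing in the aggregated data announces the bias; only comparison with the finer interval reveals it, which is why the choice of data interval can drive empirical elasticity estimates.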

Agriculture is often seen as a unique sector for a number of time-related reasons: crop cycles, animal cycles, the relationship between storage and time, and shrinkage due to seasonal weather conditions. Hypercommunications bring another important set of time-related aspects. First, real-time information and control will allow farmers to track prices, weather, and field conditions more closely. Second, new technologies such as biotechnologies can be internalized faster. However, slow speeds of adoption in particular agribusinesses and lags in infrastructure deployment in rural Florida could mean some firms would be handicapped relative to competitors in states with superior infrastructures or to rivals who adopt hypercommunication technologies earliest. Hypercommunications are inputs that can help firms change internally more rapidly, either by speeding up the rate of internal innovation or by lowering response times to external changes. Additionally, costs of waiting, adopting early, adopting technologies too late, or adopting the wrong technology could be compounded more severely. Product life cycles are jumbled or sped up.

To Florida agribusinesses, even if technical limitations are not binding, time will tend to delay the onset of unlimited communication. Boorstin's "unlimited" communication will not instantly arise from hypercommunications convergence any more than Gutenberg's printing press resulted in instant literacy. The hypercommunications market is springing up in phases. In the first phase, telecommunications technologies such as telephone, Internet access, voice mail, personal paging, and cellular services will converge into a smaller set of separate services. Eventually, existing services and technologies will be joined by yet-to-be-conceived services and technologies to become a unified system or network. However, it is critical to note that the unified system or network (the infrastructure) and the development of new services and technologies are investment activities that must take place before convergence can occur.

Hypercommunication technologies and services may have successively fewer technical limitations as time passes, so that it appears possible that communications can truly be unlimited. However, even if technical limitations are not binding, there are many economic limitations to unlimited communication. Therefore, the cyber frontier is no more "unlimited" than the western frontier was in land-rush days. The limitations on land differ from those on communications and information because the latter two are economically weightless and cannot be used up. Yet human beings have physical, attitudinal, and sociological limitations that prevent truly unlimited communications from being a reality. In Chapter 3, economic and technical foundations of the hypercommunications network will be seen as powerful sources of the growth of the information economy. It can be tempting to label them as the source of unlimited communication. However, to renew an ongoing theme, the technical properties of a technology do not automatically imply a particular economic result.

2.6 Summary

This chapter has focused on three foundations of the information economy: communication, technology, and information. Importantly, the three are interrelated in a composite way. Communication requires technology to transmit information. As that technology progresses, the volume of information communicated can rise above the capacity to absorb, process, and pattern it into a more valuable form of information to be communicated later. New technologies arise at a faster rate when information can be communicated to more people more quickly and at a cheaper price. When weightless information and communication inputs and outputs are mixed with the organizational structures of modern agribusiness and new biotechnologies, the result can be difficult to analyze using conventional economics. However, understanding the information economy does not require a completely new economics to catch up with technological changes that stem from better communication networks. Instead, a managerial economics approach that focuses on managing innovation based on clearly defined real-world economic problems remains relevant.

With this philosophy in mind, an example may help clarify the application of non-homothetic technologies to the agribusiness information economy. An agribusiness may produce and distribute dozens of goods and services ranging from food ingredients and the transportation of vegetables to branded frozen or fresh meals. In order to coordinate these activities, an amount of corporate intelligence must be produced along with an ongoing capacity for producing it. Information processing is one input required to produce corporate intelligence, but the process requires other inputs such as communication. However, information processing and communication are also essential inputs in producing, marketing, and managing goods that are actually sold in the marketplace.

The problem is that the technological properties through which information and communication combine to create corporate intelligence do not necessarily correspond to their total economic reverberations through the firm. Information and communication are interactive, non-allocable factors. The amount of information processed and used to produce corporate intelligence cannot be distinguished from the amount used to carry out other activities of the firm. Furthermore, corporate intelligence itself is jointly produced with virtually every other output of the firm.

Hence, the inherent technical returns due to information and communication technologies may not automatically map into economic returns. Hypercommunications is a technological advancement with important implications for Florida agribusinesses. This chapter has covered three foundations of the information economy, using economic theory to discuss their general importance to all businesses. After Chapter 3 covers the economic foundations of the hypercommunication network, the question of why hypercommunications are needed in agribusiness should be clearer from both economic and technical perspectives.