The Twilight of Sovereignty: How the Information Revolution is Transforming Our World

Wriston, Walter B.

Chapter Six

Where We Stand


"A...useful and somewhat surprising lesson of historical scholarship is that widely accepted facts are often wrong."

George Stigler

WE ARE SO ACCUSTOMED TO THE VARIOUS STANDARDS OF measurement we commonly use that we seldom stop to consider either their validity for today's world or their history. We have a surfeit of numbers quantifying every aspect of life. This was not always the case. The historian Fernand Braudel tells us, for example: "Nobody knows the total population of the world between the fifteenth and eighteenth centuries ... the figures are few and not very reliable. They apply only to Europe and China ... What about the rest of the world? There is nothing in fact on non-Chinese Asia, outside Japan."[48]

The measurements of time and distance we use today evolved slowly, and each refinement often ran into resistance from those with a vested interest in the familiar measures. Time is a good example. The Egyptians developed one of the first successful measuring devices when it occurred to them to use the sun's shadow to measure time. The result was the first crude sundial. With admirable, if misplaced, logic they located the hour markings equidistant from each other. We now know that this actually produced hours of uneven length, varying with the seasons of the year. But since there were no alternative measuring devices, people became accustomed to relying on these somewhat eccentric instruments and were convinced they were accurate.

About five hundred years after the first sundial was constructed, the Egyptians invented the water clock. The artisans were dismayed to discover that their new water clocks did not tell the same time as the sundials. They assumed that the sundials, which had been used for centuries, were correct and spent considerable time and treasure in frustrating attempts to construct a water clock precisely as inaccurate as a sundial. It was not until the fourteenth century that mechanical clocks were constructed that produced accurate measures of the passing hours. Town clocks in that era had no hands or dials, as the populace was illiterate, but did ring bells to mark the passing hours.[49] 

When we encounter a new situation, we assess it against some yardstick of our experience. We are now in the midst of a huge technological and economic revolution. Yet we are so accustomed to using the standards of economic and social measurement developed for the industrial age that we seldom stop to consider that the old measures of economic progress and decay, success and failure, are rapidly losing their usefulness. Much of the economic hysteria that has become a constant background to discussions of government policy or business strategy is traceable to the increasing inaccuracy or irrelevance of our standards of economic measurement. The declining usefulness of these standards seems to be one reason so many very good economists lately have been so wrong about the direction of the economy. For the last seven or eight years of the 1980s, the standard blue-chip economic forecast went like this: "We are surprised how strong this quarter is, but we expect the next quarter to be weaker, and the recession to occur four or five quarters out." Words like these resounded from innumerable podiums over that period. The problem with this standard forecast was that for a very long time it was wrong, although like a stopped clock that is right twice a day, it momentarily described reality. What these failed forecasts demonstrate is that even very good people will make bad calls when they must use bad information.

Flying by faulty instruments is dangerous. The old instruments may convince us we have failed where we are succeeding or persuade us to turn about in vain pursuit of our past rather than successfully navigating the future. If we are to cope successfully with the information economy, we shall have to develop a new methodology to measure economic success and failure.

In some cases this may mean merely updating and revising measurements that have served us well for many years. But in other cases we must be prepared to give up forever the statistical surety of the old numbers in favor of less quantifiable indicators. America's balance of trade figures, which are much lamented, today conceal more than they reveal. Murray Weidenbaum has written:

Two basic statistical indicators make the point: The first is that one-half of all imports and exports are transacted between companies and their foreign affiliates or parents. From the viewpoint of political geography, these are international transactions. But from an economic and technological viewpoint, the flow of goods and services are internal transfers within the same enterprise. A second way of looking at the global market is to consider that one-half of the products manufactured in the U.S. have one or more foreign components.[50] 

(Our current accounting conventions fail to take these new realities into account.)

Knowledge, the fundamental capital stock of the information economy, is far more difficult to quantify than the material wealth and real assets that previously dominated economic thought. As it becomes clear that today really is different from yesterday, nations may be forced not only to change the way they measure their economies but also to modify their ambitions to regulate and control them. If it turns out that the economy of the future really is fundamentally more difficult to measure than the economy of the past, governments may have to relinquish many of the powers of economic planning and control they have acquired over the past several hundred years.

George Stigler, the Nobel laureate who has done such brilliant work on the consequence of economic policies, put the problem this way:

The first and the purest demand of society is for scientific knowledge, knowledge of the consequences of economic actions...Whether one is a conservative or a radical, a protectionist or a free trader, a cosmopolitan or a nationalist, a churchman or a heathen, it is useful to know the causes and consequences of economic phenomena...Such scientific information is value-free in the strictest sense; no matter what one seeks, he will achieve it more efficiently the better his knowledge of the relationship between action and consequences.[51] 

In order to get that information, we must measure things impartially. This is easier said than done. Einstein's theory could be proved by using the photographs of a solar eclipse. The wealth of nations is more elusive. Not long ago in historical terms, land and wealth were seen to be one and the same. So were other natural resources. Then, as the Industrial Revolution remade Western society, economists gradually accepted manufacturing as a creator of wealth.

In the 1980s, as the industrial age began to fade into the information society, the same arguments took place but with different protagonists. Making "things" in a factory, not punching computer keys, created wealth, we were told. The measurements of wealth and progress we have become accustomed to in the industrial age may be no more relevant to the information society than the Domesday Book of William the Conqueror, which recorded ownership of parcels of land, was to wealth creation in the industrial age.

In modern times, one of the principal sets of measures published by our government is the National Income and Product Accounts, which yield, among other things, the official estimate of the gross national product (GNP). These statistical measures were constructed during the Great Depression, when our GNP was about $56 billion, the economy was dominated by traditional heavy industry, and national exports, at $500 million, accounted for less than 1 percent of the GNP. By any reckoning, the measurement of GNP is an immensely difficult task, and one can only admire the skill of the people who constructed our national accounting system. But given the dramatic changes in the economies of the United States and the world, too great a reliance on sixty-year-old national income accounts puts us in real danger of mismeasuring the economy. Since fiscal 1969, the U.S. government has used a unified cash-based budget that does not produce results congruent with generally accepted accounting principles. One of the principal aberrations from good accounting principles is that there are no capital accounts. Everything the government buys is "expensed" -- a several-billion-dollar road system, Yellowstone National Park, or a ten-cent pencil. It would be hard to find a serious accountant who would endorse this bookkeeping system. Its employment in the private sector might invite prosecution for fraud.

Today's method of calculating GNP not only fails -- except indirectly -- to capture the benefits of rapidly accumulating knowledge, but it is also marred by inconsistencies. For example, income is imputed by formula to the owners of homes that they occupy, but there are no imputations for the streams of income that flow from the use of autos, dishwashers, and other consumer durables. In times of high taxation, these durables -- which are arguably capital investments -- often provide shelter from the ravages of inflation. Because of these and other difficulties, it becomes increasingly arduous to measure recent GNP achievements and much more formidable to make projections into the future.

Government is incapable even of telling us with any precision what the last quarter's GNP growth was. Final figures are not issued until three years after the close of a quarter. The differences between the Commerce Department's first reports on the GNP for a quarter and the final figures show huge variations. If, for example, the initial report indicated GNP growth of 3 percent, virtually a full 50 percent of the time the final figures show statistical growth of either less than 1.5 percent or more than 4.5 percent. One time in ten the adjusted final figure would be recorded as less than 0.5 percent or more than 5.5 percent. Figures that vary that much rarely furnish a firm foundation for policy decisions. The record of looking ahead for most forecasters, public or private, is even worse.

Yet the GNP and other national economic measurements play a critical role in the formulation of economic policy. The federal deficit is a case in point. Federal fiscal policy depends on accurate forecasts of the deficit, which in turn depend very heavily on GNP projections. Yet the numbers produced by the Congressional Budget Office (CBO) are often substantially at variance with those produced by the Office of Management and Budget (OMB), an agency of the executive branch. For the fiscal year 1991, the CBO estimated the federal deficit at $138 billion, while the OMB projected a deficit of $63.1 billion -- a difference of $74.9 billion. Political agendas obviously intrude on these supposedly objective measurements. Doom and gloom clashes with the rosy scenario, and it becomes ever more difficult to tell who is right.

Federal Reserve monetary policy is heavily dependent on comparing GNP estimates with estimates of the nation's productive capacity. What rightly concerns Fed policymakers is how fast the real economy can grow over the long haul without inflation. What is the potential growth rate for real GNP? The Fed, as nearly as one can infer from its public statements, assumes GNP has the potential to grow at around 3 percent before running into the physical limitations imposed by capacity. The Fed may act to slow growth if the economy grows at a persistently higher rate, in order to restrain inflation. On the other hand, some of the Fed's supply-side critics insist that potential GNP growth is as high as 5 percent. The issue is anything but trivial. The difference between a 3 and a 5 percent potential could mean a huge difference in the level of real GNP over a ten-year span. Clearly, reliable assessments of potential growth are essential, and the reliability of these measures in turn rests in part on whether or not current measures of industrial capacity remain relevant in the information age.
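The stakes in that two-point disagreement compound quickly. A minimal sketch of the arithmetic (using round, hypothetical index numbers, with the economy set to 1.0 at the start) shows how far apart the two paths end up after a decade:

```python
# Compounding a 3 percent versus a 5 percent potential growth rate
# over ten years, starting from an economy indexed to 1.0.
low, high, years = 0.03, 0.05, 10

level_low = (1 + low) ** years    # level of real GNP on the 3 percent path
level_high = (1 + high) ** years  # level of real GNP on the 5 percent path

gap = level_high / level_low - 1  # how much larger the fast path ends up

print(f"3% path after {years} years: {level_low:.3f}x")   # about 1.344x
print(f"5% path after {years} years: {level_high:.3f}x")  # about 1.629x
print(f"difference in level: {gap:.1%}")                  # about 21%
```

Compounded over a decade, the 5 percent path leaves the economy roughly a fifth larger than the 3 percent path, which is why the Fed's assumption about potential matters so much.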

Some argue that when industrial production reaches approximately 85 percent of capacity, the economy approaches the physical limits of its output, raising the possibility that further growth will be inflationary. In today's economy this traditional rule of thumb may be outmoded, since industrial production employs only about 20 percent of American labor, with the balance working in the nonindustrial sectors of our society, where there is a huge potential for expansion.

Despite the decline in the percentage of nonfarm workers employed in manufacturing -- from 31 percent to 19 percent today -- manufacturing as a share of the GNP has remained remarkably stable throughout the postwar period. We have seen in our factories the same phenomenon that so dramatically changed American farms. Fewer and fewer people are producing more and more goods. It is estimated that in 1810, 80 percent of the labor force was employed in agriculture; by 1910 it had fallen to about 30 percent; today it is roughly 3 percent -- and yet we can and do feed the world.[52]

This relatively steady output, in the face of a massive exodus of workers from industry, raises the question of whether the utilization figures on percentage of industrial capacity mean the same thing for inflation as they once did. Indeed, this measure of capacity utilization played a key role in leading some forecasters to overestimate inflation during the 1982-87 economic expansion. Another reason the capacity utilization index misleads unwary economists is that it covers only manufacturing, mining, and utilities, activities that account for a shrinking share of U.S. output.

The standard industrial codes that once told how industry is organized are now out of date. Of the twelve major code divisions, only two reflect the service industry, although about 80 percent of Americans work in a service business. Accurate numbers are available on the number of brakemen on American railroads but not on the number of computer programmers. This is but one example of why today's economy cannot be fitted into yesterday's standards. If basic macroeconomic measurements, such as GNP and productive capacity, do not mean what they once did, the question then becomes: Can we construct new, more reliable measures of the kind of economy we now have?

We can with the power of modern computers. Like any change, a new way to measure our economy will be resisted. Charges will be leveled that the books are being cooked. When I entered the banking business, earnings were reported before allowance for loan losses. Citibank started to publish its reserve figure -- an innovation at the time -- and then instituted the concept of reporting earnings after allowing for loan losses. At the time, both initiatives were roundly condemned by our fellow bankers, although today both are standard. Just as each line in the federal budget has a political constituency, so also do various political and business groups have a stake in how our GNP is measured. The problem of changing yardsticks will always be more political than technical.

Some of our trading partners, however, are already moving in this direction. Japan proposed in January 1989 that the Organization for Economic Cooperation and Development (OECD) change the way it measures economic performance.

Less reliance should be placed on the traditional measures, such as trade and budget figures, and more on spending on research and development, the extent of overseas investment, the ratio of high-tech industries to service companies, changes in industrial structure and labor mobility, the productivity of labor and capital, and the contribution of newly developed businesses.[53]

Yet even a corrected set of traditional measuring sticks for the national economy might not be as relevant as it once was, precisely because it is strictly national in scope. Once that was appropriate. Today, however, the global marketplace has moved from rhetoric to reality. National economies are no longer islands but, rather, an integral part of a larger global market.

In practice this fact of life is often overlooked. In 1972, for example, when U.S. imports as a percentage of the GNP were only about one-half as large as they are today, many forecasters underestimated the sharp increase in inflation that would follow the devaluation of the dollar that year. Other nations whose livelihood has depended on trade for years were not surprised.

The Netherlands, with a population about the size of New York State's, maintains its own GNP accounts but knows full well that any sensible analysis of that nation's economy must also take in the rest of Europe. What is obvious about the Netherlands is true even of the United States. It makes little sense today to judge the American economy by the GNP of the United States as presently computed, in isolation from the global market.

For instance, for much of the 1980s economists predicted that the federal deficit would absorb so much of our domestic savings as to "crowd out" private investment. The crowding-out theory was never validated -- the 1980s saw a powerful increase in U.S. business investment -- because the theory ignored the reality of the global market. The proponents of the theory added up all the capital instruments sold on Wall Street in a year and then took the amount of federal debt sold and computed a ratio that purported to say that the federal government absorbed some significant percentage of all capital raised. That ratio may once have been useful, but today Wall Street, while still integral, is just one part of the global market. If foreigners choose to give up their currency to buy dollars to invest in America, it is not an act of charity but a hardheaded decision that they can do better here than at home. While still huge, American capital markets are only one option for raising money. It is a matter of complete indifference to the chief financial officer of any major company whether capital notes are sold in New York, Hong Kong, or London. Decisions are made on the basis of rate and availability, not geography.

These tight linkages make the growth rates of our major trading partners ever more essential to U.S. prosperity. Policies aimed at giving the GNP of the United States a short-term boost while ignoring possible global impacts are even less well advised today than they were sixty years ago, when a binge of protectionism helped bring on the Great Depression.

As the reality of the global market sinks in, policymakers from different nations will come to understand that even the strongest sovereigns cannot entirely control their own destinies but will increasingly be forced to cooperate on economic issues they once regarded as almost exclusively national concerns. This in itself will force them to reexamine the ways they measure their national economies and may well spark a vigorous international effort to assemble more meaningful data on the world economy. The suggestions of MITI to the OECD, referred to earlier, are a first step in this direction.

The global economy may also prompt some increase in international economic regulation and even some more forceful attempts at international economic planning and manipulation. However, as we shall see in greater detail in later chapters, the barriers to procrustean government regulation in the information age are substantial. Moreover, however intensely nations cooperate in search of better data, they may never be able to measure certain economic phenomena with the apparent assurance with which we once measured the industrial economy.

In recent years we have witnessed furious, often partisan debates about two leading factors in America's struggle to remain economically competitive: capital formation and productivity. Of both it has been said with great confidence that they were lagging and that they were soaring, that they represented the light at the end of the tunnel, and that the light at the end of the tunnel was an oncoming train. The truth is that the information economy has made both far more difficult to measure. Assets recorded on today's balance sheets tend to be things we can feel and touch. On the accountants' ledgers the intellectual capital a company acquires tends to be treated as an expense, not as a real asset; it is not carried on the capital accounts along with the shiny new company car or the aging brick factory building, though neither of those contributes as much to the enterprise's productive capacity.

The magnitude of the distortion is suggested by the fact that the world software market in 1989 was estimated to be between $50 and $60 billion and growing at about 15 percent a year.[54]  As far as the accountants and economic statisticians are concerned, this $50 or $60 billion has almost disappeared into thin air. Companies expense most of the software when they buy it, and it appears in total on nobody's balance sheet as an asset that is in fact used on a daily basis.

If the software sold by IBM and thousands of other software producers suddenly disappeared, factories would stop running, accounting and payroll systems would cease to function, all the telephone switches would freeze, airlines would stop flying, and the economy would halt. It is hard to imagine such a vital business asset being virtually unrecorded anywhere, but that is the case.

If capital is what produces a stream of income -- and that is a definition no one seems to quarrel with -- then it follows that software is a form of capital. It has always been difficult to measure any form of knowledge capital, but in the past the problem was not as urgent, since the ratio of difficult-to-quantify knowledge capital to more tangible capital was neither as high nor growing as rapidly as it is today.

This development throws a different light on the problem of capital formation. To enter a business, the entrepreneur in the information age often needs access to knowledge more than he or she needs large sums of money. To write a software program that might make its author millions of dollars may require only a relatively trivial investment (enough to purchase a personal computer or at least rent time on a mainframe) compared to the investment needed to enter, say, a manufacturing business producing a comparable stream of income. It is the knowledge capital accumulated in the software writer's head or in the documentation or on disks that makes possible the new program. This capital is substantial and very real. And it does not show up with any clarity in the numbers economists customarily quote about capital formation.

The trends that are making intellectual capital an increasing proportion of national wealth are accelerating. At least 80 percent of all the scientists who ever lived are now alive. In our own country at least half of all scientific research done since the United States was founded has been conducted in the last decade. With the total stock of our knowledge doubling about every ten or twelve years, it is clear that our intellectual capital is being formed far more rapidly than tangible capital.

Even the numbers we use to describe tangible capital investment are sometimes misleading. The figures may show that we are "disinvesting" when what we are really doing is paying less money for much more capacity. We see this in the ratio of price to capacity in the hand-held calculator or the watches on our wrists or the personal computers on our desks. They cost less than they did a few years ago. But they do more and by any reasonable standard represent an increase of capital. The intellectual value-added in a microchip far outweighs any cost of labor and materials. Experience shows us that the information economy drives down manufacturing costs (as compared to capacity, not units) at a pace that seems far faster than typical during the industrial era for the simple reason that as information products are refined, they rapidly gain capacity without increasing in size, cost of materials, or labor. The entire Industrial Revolution, says Dr. Carver Mead of the California Institute of Technology, "enhanced productivity by a factor of about a hundred." But "the microelectronic revolution has already enhanced productivity in information-based technology by a factor of more than one million -- and the end isn't in sight yet."

In doing their capital accounts, accountants have traditionally equated cost and value. This was a sensible procedure in the past, when the intellectual value-added in most products constituted a relatively modest proportion of the cost of labor, materials, and machinery, and the prices of products therefore fell at the relatively slow pace allowed by the industrial learning curve.

Imagine what this truth would mean for automobile manufacturers if, over the course of a few decades, without increasing the price or size of a six-passenger car, they could figure out how to make it hold 600 people, travel safely at 5,500 miles per hour, and get 2,600 miles to the gallon! That is roughly what has happened in the computer industry.
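The arithmetic behind the analogy is worth checking. Assuming a hypothetical baseline car of six passengers, 55 miles per hour, and 26 miles per gallon (the latter two baselines are illustrative choices, not figures from the text), each dimension improves a hundredfold, and the three gains multiply out to the millionfold factor Mead cites:

```python
# Checking the car analogy: three simultaneous 100-fold gains compound
# to a single 1,000,000-fold gain, since the factors multiply.
baseline = {"passengers": 6, "mph": 55, "mpg": 26}       # hypothetical baseline car
improved = {"passengers": 600, "mph": 5_500, "mpg": 2_600}

factors = {k: improved[k] / baseline[k] for k in baseline}
combined = 1.0
for f in factors.values():
    combined *= f

print(factors)                          # each dimension: a 100-fold gain
print(f"combined gain: {combined:,.0f}x")  # 100 * 100 * 100 = 1,000,000
```

Three hundredfold improvements compounding into a millionfold one is the same arithmetic that separates Mead's estimate for the microelectronic revolution from his estimate for the industrial one.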

Nor are these considerations limited to stand-alone calculators and computers. As we saw in chapters 2 and 3, the microchip and other information technologies are everywhere. The usefulness of traditional capital accounting is being undermined by the spread of information into nearly all the "hard" products of our age.

At Citicorp, I encountered a perfect example of how the current vocabulary of economics and business describes a world that still exists in part but fails to capture the essential dynamics of this new world. My Citicorp colleague John Reed invented for us the term "investment spending," a concept it took us some time to understand, as it seemed, at first glance, to be a contradiction in terms. But it was, and is, appropriate to our times. Simply put, in an information-based economy much of what we now consider expenditure -- staff, software, or marketing programs, for example -- is actually capital investment: It produces a high return and is self-financing.

Almost every day brilliant young scientists and engineers are hired by business enterprises for a fraction of what it cost American universities to produce them. Of course, even toting up the true dollar cost of their educations would fail to measure the contribution of the uncounted intellectual capital (retained intellectual earnings, as it were) accumulated by the universities over the years.

In early 1990, Intel Corp., one of the most important commercial enterprises of the information age, announced yet another of its stunning breakthroughs in information technology. Intel made fundamental advances in "data compression" technology, which allows huge amounts of data -- words, numbers, or pictures -- to be transported from place to place in a fraction of the time currently required. Two-hour movies can be sent to your home in minutes; masses of data can be dispatched without tying up expensive circuits. The new technology will create enormous value for any enterprise, from the movies to modern medicine, that depends on real-time management of very large batches of complex data.

This new development will produce a stream of income, though river or tidal wave might be more accurate. Yet what economist would volunteer for the job of quantifying the intellectual investments responsible, or of figuring the return on investment? To be sure, Intel has been a research-based company since its inception and could show hefty expenditures in that regard. But the essential base of knowledge capital could not be contained or counted within the walls of Intel or even inside the borders of Silicon Valley. What is the knowledge capital base at Cal Tech or MIT or at the other universities that helped make Intel what it is today? How much capital did they form last year? What income will that capital produce? When will its effects be measured in the economy? Impossible-to-trace intellectual investments add more value to the economy almost overnight than years of carefully retained money earnings and cautious expansion in physical-plant improvements.

As the percentage of "knowledge workers" to manual workers increases, the difficulty of measuring productivity grows proportionately. The debate over the status of American productivity has been much in the news. How does America stack up in the global marketplace? Is the growth of American productivity greater or less than that of other nations? These are important questions, but once again what do the words mean? Productivity, in the simplest terms, used to mean output per man-hour. While that was a useful concept in manufacturing, do we really have any meaningful measure of productivity for this information-intensive age when the vast majority of our workers are employed in the knowledge or service sectors? Current methodology, although quite sophisticated, fails to supply really meaningful numbers in many instances.

The huge and growing financial service industry is one example of the difficulty of measuring productivity. Once we get past counting the number of checks cleared per hour or the number of insurance claims paid -- all of which display greatly improved productivity, thanks to the computer -- we then move immediately into the realm of the subjective. Is a loan officer's productivity in a bank, insurance company, or a credit company to be judged on the number of loans made per day? The size of the loans? The number of loans that are repaid on time? The quantity of bad debts created? How do you measure the productivity of workers who make such critical judgments? No one really knows, though many have tried.

The challenge of measuring productivity is spreading to the industrial sector as information supplements, and in some cases replaces, physical capital. As Shoshana Zuboff and others have pointed out, management's usual first impulse has been to assess the productivity of factory automation almost exclusively by job reductions. But as her ambitious study demonstrated, even in enterprises in which automation was well handled, job reductions often fell short of expectations. The remaining workers, however, began to make new contributions to customer needs, including more reliable quality and faster and more conscientious service. Such improvements would not necessarily show on the books in a timely or easily quantifiable manner, though they made great medium- and long-term contributions to the enterprise.

As Zuboff also points out, in a well-automated environment (and badly automated ones still seem to predominate in this new world) workers are enabled to more fully comprehend the productive process and take on more responsibility for it. That presents a challenge to middle managers, who rightly feel that their traditional roles are being taken over in part by automated scheduling and task-assignment systems and in part by self-policing workers themselves.

The best managers look for new ways to add value to the enterprise, and the others eventually follow along. Yet if these managers do begin to add value in thoroughly unexpected ways, how accurately will this phenomenon be represented by productivity statistics? Will the raw numbers reveal the history of productivity growth or obscure its true sources? And how many speeches will politicians have made in the meantime about declines in productivity because they were looking at old numbers, not new realities?

As information applied to work adds value to every aspect of economic activity, the problem of assessing productivity spreads throughout the economy. As long as we are unable reliably to quantify the productivity of knowledge workers or information technology, statistical alarms or number-crunching boasts about American productivity will have little credibility.

It will take a long time to construct a new measuring system for our global economy. When we achieve this goal, many will dismiss the results because they will be different from our current system, in the great tradition of those who dismissed the accuracy of water clocks that did not agree with sundials. Nevertheless, until we do construct a new system, we may never again have such comfortably reliable statistical measures of the productivity of people or investment as we had in the past. The firm statistical measures of the industrial era may have been an artifact of their time. So how will we judge what course to take? Deprived of truly relevant numbers, we may have to substitute judgment plus a healthy dose of modesty, a combination sometimes called common sense.

For governments, the difficulty of quantifying intellectual capital or productivity will mean that they, too, will have to fall back on common sense, including an extra-large dose of modesty. Grand dreams of planned economies hail from an era in which government economists, like the accountant who opposed the Brooklyn Bridge because there was plenty of room on the ferry, used to be pretty sure what made economies work. They were usually wrong, but they were sure that their view of reality, derived largely from statistical pictures of the economy, was correct.

As we move into the information economy, that certainty will erode, at least for a time. Our statistical portraits may have to owe more to the impressionist school than to academic realism. Governments that value prosperity will have to give up their dreams of economic fine-tuning. You cannot fine-tune (if you ever could) what you cannot measure. Not all or even most political leaders will want to admit this. But these difficulties will be one more force arrayed against government manipulation of the economy, and it is only reasonable to expect them to have some effect. It will become essential for governments to recapture the wisdom of Socrates: to know that they do not know.

For governments, common sense in pursuit of prosperity may mean less fiddling with the details of economic output and more hard work on input, particularly of the human variety. The quality of education may be the most important way government can address productivity. Peter Drucker has called information the "primary material" of the new economy. If Marx were alive today, he might fairly call education the means of production.

If we are to compete in a global marketplace, we must constantly build and renew our intellectual capital. We have little or no control over the natural resources within our borders, but we do have control over our educational and cultural environment.

Our success in achieving outstanding agricultural productivity was not unrelated to our educational structure. When Abraham Lincoln signed the Morrill Act in 1862, the first land-grant colleges were formed, offering courses in agriculture, engineering, and home economics. Some years later, in 1887, the Hatch Act expanded the program with federal funds for research. The county agent was the conduit of the new technology from campus to farm. And the United States, in large part as a result of such efforts, gave birth to the green revolution.

Shoshana Zuboff observes that a key factor in determining whether workers succeed in realizing the full potential of information systems is their ability to master the technology and use it to create unexpected value rather than passively serving it. But this mastery requires workers to operate on levels of abstract and symbolic thought that may never before have been required in their jobs. It can be done, even by workers who never expected to evolve out of the blue-collar tradition. These new "intellective" skills, as Zuboff calls them, can be learned. But poorly educated workers, with minds untrained in and unchallenged by the abstract skills of math and science, who lack the confidence to learn the new system behind the new technology, are much less likely to meet the challenge.

Although much has been written about the decline of American competitiveness, in many ways this new global market plays to our strengths. The constant in the global marketplace is change, and change is what we Americans deal with best. We have always been innovators. Who else would choose as a national motto on our great seal "Novus ordo seclorum" -- the new order of the ages? This native adaptability is in itself a kind of "infrastructural" advantage, an infrastructure of culture that will serve us well as long as we refuse to panic in the face of statisticians and pundits wielding yesterday's numbers and telling us we're washed up if we remain ourselves.

In a time of often confusing transition, our goal must be to make common sense the order of the day. We must tell the politicians and pundits to stop flogging us with increasingly meaningless numbers. The governments of the world must drop the pretense of being able to outguess a world market that was always too complex to accommodate the pretenses of economic planners and that now, less than ever, can fit into any central plan or "industrial policy." But we can help our economic position by doing what we know is right: nourishing the growth of intellectual capital and shunning superstitious reverence for materialist totems of a bygone era.


[48] Fernand Braudel, The Structure of Everyday Life (New York: Harper & Row, 1981), p. 34.

[49] For a complete history of the measurement of time see: Daniel Boorstin, The Discoverers (New York: Random House, 1983).

[50] Murray Weidenbaum, "Geopolitics and Geoeconomics: Systems Out of Sync," Directors and Boards 14, no. 4 (September 1990).

[51] George Stigler, The Economist as Preacher (Chicago: University of Chicago Press, 1990), p. 6.

[52] For complete figures see Economic Report of the President (Washington, D.C.: U.S. Government Printing Office, 1988).

[53] The Financial Times, London, 9 January 1989.

[54] Figures supplied by Frank Metz, chief financial officer of IBM.