
Chapter 32

The Rise of Economics

The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.

—John Maynard Keynes

Economics came to acquire an almost hegemonic position in strategic management. This was not because it was uniquely fitted for this intellectual purpose but because of deliberate decisions to adopt it as the foundation of a new science of decision-making and the active promotion of this new science by bodies such as the RAND Corporation and the Ford Foundation, both of which encouraged its embrace by business schools. As with Plato’s philosophy, a new discipline that offered eternal truths was created in part by disparaging and caricaturing what had gone before for its lack of rigor.

The best place to start this story is with the RAND Corporation, which we identified in the last section as the home of game theory and the belief that a formal science of decision could be developed. This effort gained credibility because of the very special issues posed by nuclear weapons. The effort transformed thinking about not only strategy but also economics because it demonstrated the possibilities opened up by powerful computing capabilities for modeling all forms of human activity. Philip Mirowski has written of the “Cyborg sciences,” which developed along with computing, reflecting novel interactions between men and machines. They broke down the distinctions between nature and society, as models of one began to resemble the other, and between “reality” and simulacra. The Monte Carlo simulations adopted during the wartime atomic bomb project for dealing with uncertainty in data, for example, opened up a range of possible experiments to explore the logic of complex systems, discerning ways through uncertainty and forms of order in chaos.1 RAND analysts saw them as supplanting rather than supplementing traditional patterns of thought. Simple forms of cause and effect could be left behind as it became possible to explore the character of dynamic systems, with the constantly changing interaction between component parts. The models of systems, more or less orderly and stable, that had started to become fashionable before the war could take on new meanings. And even in areas where intense computation was not required there was a growing comfort in scientific circles, both natural and social, with models that were formal and abstract, not based just on direct observations of a narrow segment of accessible reality but also on explorations of something that approximated to a much larger and otherwise inaccessible reality. They could be analyzed in ways which the human mind, left on its own, could not begin to manage.
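The Monte Carlo idea itself is simple enough to sketch in a few lines of modern Python (an illustration, of course, not RAND-era code): random sampling stands in for intractable analysis, here estimating π from the fraction of random points that land inside a quarter circle.

```python
import random

def monte_carlo_pi(trials: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square and
    counting the fraction that fall inside the quarter circle."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    # Area of quarter circle / area of square = pi / 4
    return 4.0 * hits / trials

estimate = monte_carlo_pi(100_000)
```

The same sampling logic, applied to neutron diffusion rather than geometry, was what made the wartime problems tractable: where no closed-form solution existed, repeated random trials converged on an answer.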
As one of the first textbooks on operations research noted, this work required an “impersonal curiosity concerning new subjects,” rejection of “unsupported statements,” and a desire to rest “decisions on some quantitative basis, even if the basis is only a rough estimate.”

In their landmark book of 1957, which gave the field renewed vigor, Duncan Luce and Howard Raiffa noted prematurely the decline of the “naive bandwagon-feeling that game theory solved innumerable problems of sociology and economics, or at the least, that it made their solution a practical matter of a few years’ work.”2 They urged social scientists to recognize that game theory was not descriptive. Instead it was “rather (conditionally) normative. It states neither how people do behave nor how they should behave in an absolute sense, but how they should behave if they wish to achieve certain ends.”3 Their injunction was ignored and game theory came to be adopted as more of a descriptive than normative tool.

One reason for this was the development of the Nash equilibrium, named after the mathematician John Nash (whose struggle with mental illness became the subject of a book and a movie).4 This was an approach to nonzero-sum games. The idea was to find a point of equilibrium, comparable to those in physics when forces balance one another. In this case, players sought the optimum way to reach their goals. The equilibrium point was reached when the players adopted a set of strategies that created no incentive for any individual player to change strategy so long as the others stayed unchanged.5 Nash’s contribution came to be celebrated within economics as “one of the outstanding intellectual advances of the twentieth century.”6 But its value to strategy was limited. On the one hand, a lack of points of equilibrium led to chaos; on the other, too many points resulted in an indeterminate situation. By contrast, Tom Schelling demonstrated the possibilities of using abstract forms of reasoning to illuminate real issues faced by states, organizations, and individuals. He encouraged people to think of strategy as an aid to bargaining, and he explored with great insight the awful paradoxes of the nuclear age. But he explicitly eschewed mathematical solutions and drew on a range of disciplines, thus abandoning any attempt to develop a pure, general theory. Mirowski found Nash’s non-cooperative rationalism wanting but also found Schelling’s more playful, allusive mode of analysis exasperating because of its lack of rigor. Schelling avoided the restrictive forms of game theory and the challenging mathematics of Nash in order to make paradoxical points about communication without communication and rationality without rationality.7 Mirowski understated Schelling’s importance as a conceptualizer and his recognition of the limits of formal theories when it came to modeling behavior and expectations.
“One cannot, without empirical evidence,” Schelling observed, “deduce whatever understandings can be perceived in a non-zero-sum game of maneuver any more than one can prove, by purely formal deduction, that a particular joke is bound to be funny.”8 Schelling, however, had many more admirers than imitators. In economics Nash became part of the mainstream.
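The Nash condition described above, that no player gains by deviating unilaterally, can be checked mechanically for small games. A sketch in Python, applied to the standard textbook Prisoner's Dilemma payoffs (the payoff numbers are the conventional illustrative ones, not drawn from this chapter):

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return every pure-strategy profile from which neither player can
    gain by deviating alone. payoffs[(i, j)] = (row payoff, col payoff)."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    equilibria = []
    for i, j in product(rows, cols):
        # Row player: no alternative row does better against column j
        row_ok = all(payoffs[(i, j)][0] >= payoffs[(k, j)][0] for k in rows)
        # Column player: no alternative column does better against row i
        col_ok = all(payoffs[(i, j)][1] >= payoffs[(i, k)][1] for k in cols)
        if row_ok and col_ok:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: C = cooperate, D = defect.
pd = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
```

Mutual defection is the unique pure equilibrium here, even though both players would do better cooperating, which is precisely the kind of gap between equilibrium and good outcome that made the concept both celebrated and, for strategists, limited.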

The extraordinary boost from RAND’s budget and advances in computing put social science on a new footing. The effect was particularly striking with economics. Orthodox economics had faced a crisis during the Great Depression of the 1930s. This led to greater empirical rigor backed by improved statistical analysis. Many key figures had learned the analytical techniques in wartime operational research. Even where there were important differences in emphasis and approach, as for example between the Chicago School and the Cowles Commission (which had been set up in 1932 to improve the collection and statistical analysis of economic data), they had much in common. Notably, they were rooted in the neoclassical tradition, going back to Walras and Pareto, and took individual rationality to be the safest assumption. As Milton Friedman, the most prominent Chicago economist, put it: “We shall suppose that the individual in making these decisions acts as if he were pursuing and attempting to maximize a single end.”9 Friedman considered the debate about whether people really acted so rationally, following complex statistical rules, irrelevant. It was an approximation that was productive for theory, leading to propositions that could then be tested against the evidence.

Friedman and his colleagues were methodologically pragmatic, although dogmatic in their conviction that the market worked best when left alone by government. In this they were influenced by Friedrich Hayek, an Austrian who had acquired British citizenship in 1938 and had been teaching at the London School of Economics until he was recruited to Chicago, though not by the economics department, in 1950. His most famous book, The Road to Serfdom, was published during the war and warned against the inclination to central planning that was gathering momentum under the combined influence of socialism and the wartime experience. Meanwhile, the Cowles Commission, influenced by John von Neumann and sponsored by RAND, was up for new methodological challenges and was more inclined to believe that robust models could support enlightened policy. Either way the assumptions and methods associated with game theory became part of a wider project to develop new forms of social science.

Economics into Business

The Ford Foundation was at the fore in exploring how management within big government and big business could become vital instruments of efficiency and progress. In the late 1940s, the Foundation moved from addressing the needs of the Ford Company’s own operations around Detroit to meeting a broader agenda. The deaths of both Henry and Edsel Ford led to a surge of money into the Foundation. The man chosen to head a study committee to set the objectives for the future was H. Rowan Gaither, then chairman of RAND and later to become the Foundation’s president. He was convinced that social science could and should be mobilized to serve the nation, and that this required managers who understood this science and could appreciate the possibilities for its application. He spoke to the Stanford Business School in 1958 about how “the Soviet challenge requires that we seek out and utilize the best intelligence of American management—and in turn put on management a national responsibility of unparalleled dimensions.”10

A report for the Foundation in 1959 deplored an “embarrassingly low” standard of acceptability among business schools, one which many schools did not actually meet. The point was illustrated by citing multiple study options on the “principles of baking” at one southern school. At the same time there was optimism that the situation could be rectified by a “management science” being transmitted to students as a methodology for decision-making. Instead of being taught to rely on judgment (which had been the basis of the Harvard curriculum), students could develop a more analytical competence by being immersed in quantitative methods and decision theory. Under Gaither’s influence, Ford directed vast sums into the top business schools to create centers of excellence, raising the intellectual caliber and professionalism of the coming generations of managers and their teachers. Over two decades, the numbers of business schools in the United States tripled and the production of MBAs went up accordingly. By 1980, fifty-seven thousand MBAs were graduating from six hundred programs, accounting for 20 percent of the total number of master’s degrees granted. At the same time, there was an equivalent expansion in the number of scholarly academic business journals, from about twenty at the end of the 1950s to two hundred two decades later.11

Harvard was the major beneficiary, and the Hawthorne studies were held up as exemplars of the benefits of serious research, though it was the new Carnegie Institute of Technology’s Graduate School of Industrial Administration that led the way in drawing upon the social sciences as a source of intellectual energy. Lee Bach, who led the Carnegie effort, was convinced that the best decisions must emerge out of the best reasoning process. He predicted a change that would involve clarifying and bringing to the surface “the variables and logical models our minds must be using now in decision-making and of persistently improving the logic of these models.”12 One of those he recruited, political-scientist-cum-economist Herbert Simon, recalled a determination to transform business education from a “wasteland of vocationalism” into a “science-based professionalism.” By 1965, Ford was reporting “an increased use of quantitative analysis and model building” and more publications in disciplinary journals in economics, psychology, and statistics.

Its original concept had been to integrate the case study method as taught at Harvard with economics, sharpening the case studies while tempering economic theory with a dose of realism. The balance was to be shifted to more research with less description, more theory and less practice. Little balance was found. In what was later admitted to be a “tactical error,” Ford’s push for academic excellence in the business schools came to be dominated by economists who showed little interest in adapting to other disciplines or even in worrying unduly about real-world applications. In the early 1960s, however, they seemed like a breath of fresh air. The determination to stress the practical and avoid the theoretical had led to an absence of any sort of theory, which left everything to common sense and judgment. In remedying this deficiency, economics had clear advantages over the other, softer social sciences. It encouraged parsimonious models, simplifying the complex issues of management by focusing on core principles and assuming rational actors (which is just how managers liked to imagine themselves). The clarity of the assumptions would be reflected in the sharpness and testability of the hypotheses. The challenge for management was to achieve the best for their organization. It made sense to look at a theory that assumed that to be the aim of all individuals and organizations.

The change was reflected in Harvard. The business policy course, which treated corporate strategy in the “genteel tradition of those days, not as a set of formulas but as the mission of the company, its distinctive competence, reflecting the values of its managers,” and was not particularly popular, was replaced by one entitled “Competition and Strategy,” from which the material on the general manager and the values of society had been removed.13


It was not just the push on the supply side that created the interest in economic theories of decision-making but also changes in the demands posed by the business environment. The emphasis on planning processes had reflected the supposed interests of a limited number of very large corporations with huge financial and political clout, offering a range of product lines in a steadily growing economy. While for these behemoths internal organization was a major issue, precisely because of their size and strength and the restraint of antitrust legislation, competition was not so important. The word does not even appear in the index of Chandler’s Strategy and Structure or Drucker’s The Practice of Management.

For smaller firms in new or dying markets with much simpler structures, the challenges were always quite different, and new challenges began to develop even for the big corporations. The large as well as the small became subject to increasing foreign competition, notably from insurgent Japanese corporations with a better eye for new consumer technologies and lower costs. Basic structural shifts were occurring: the move from manufacturing to services, new technologies that were creating new forms of enterprise as well as new types of goods, plus the development of increasingly esoteric financial instruments. Then there were temporary factors with severe effects, such as the hike in oil prices in 1974 and the subsequent combination of stagnation and inflation.

In the first instance, this challenge was picked up not by the business schools but by consultants, who by necessity were tuned to the stresses and strains of changes in the business environment. The Boston Consulting Group (BCG), founded by Bruce Henderson in 1964, saw strategy as being about making direct comparisons with competitors, especially in relation to cost structures. While the business schools still encouraged the analysis of specific and unique situations, Henderson sought strong theories that would guide the consultant when considering the circumstances of new clients. His approach was more deductive than inductive. The aim was to find a “meaningful, quantitative relationship” between a company and its chosen markets.14

Like so many figures in business strategy, Henderson had a background in engineering. He was therefore attracted by the idea of systems tending to equilibrium; in a system that included competitors, the aim of strategy was first to upset the equilibrium and then to reestablish it on a more favorable basis. The challenge was to develop the necessary thinking in terms sufficiently explicit to be “executed in a coordinated fashion in complex organizations.”

His approach, in stark contrast to the complexity of Ansoff, was to apply micro-economic methodology, to develop what he called “powerful oversimplifications,” which BCG then sold to companies.15 The oversimplification that established his reputation was the “experience curve.” Based on early studies of the aircraft industry, the core idea was that the more units produced, the lower the costs and the higher the profits. When plotted on a curve this could show the state of a competitive relationship. The presumption was that for companies making the same product, variations in costs were largely related to market share. Thus the effects of an increased share were calculable. Businesses should expect costs to decline systematically and predictably as a result of their superior productive experience. While the methodology encouraged companies to look at their total costs and recognize economies of scale, it could also be seriously misleading. In a mature industry the experience curve would flatten out. It could also encourage a race to the bottom, as prices were cut in the expectation of higher volumes which might not materialize, and then leave little scope for investment. As Ford’s Model T experience demonstrated, even the master of a product with costs kept down to a minimum can still be caught out by a better product.
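The experience curve is conventionally expressed as a power law: unit cost falls by a constant fraction each time cumulative output doubles. A minimal sketch, using the roughly 80 percent rate often cited from the early aircraft studies (the numbers are illustrative, not BCG's proprietary figures):

```python
import math

def experience_curve_cost(first_unit_cost: float, cumulative_units: int,
                          learning_rate: float = 0.8) -> float:
    """Unit cost after producing `cumulative_units`, assuming cost falls
    to `learning_rate` of its previous value whenever cumulative output
    doubles (the classic power-law experience curve)."""
    b = -math.log(learning_rate, 2)  # positive exponent for the decline
    return first_unit_cost * cumulative_units ** (-b)

# With an 80% curve, the 2nd unit costs 80% of the 1st, the 4th costs 64%.
cost_at_2 = experience_curve_cost(100.0, 2)
cost_at_4 = experience_curve_cost(100.0, 4)
```

The sketch also makes the critique visible: the curve flattens as cumulative volume grows, so in a mature industry the promised cost advantage from extra share becomes marginal.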

BCG’s second powerful oversimplification was the growth-share matrix. A matrix was drawn with the growth in the market on one axis and share of the market on the other. Companies could then locate their various activities on the matrix. It was best to have a high share of a growing market (the stars) and worst to have a low share of a static or declining market (the dogs). The other two categories were “cash cows” and “question marks.” The images were powerful and the logic compelling. The cows had to be looked after and the stars backed, while the dogs were candidates for divestment. Once that was sorted, only the question marks required serious thought. Again the imagery had a capacity to mislead. As one critic, John Seeger, noted, “The dogs may be friendly, the cows may need a bull now and then to remain productive, and the stars may have burned themselves out.” Seeger warned of the dangers of allowing management models to “substitute for analysis and common sense.” Just because a theory had elegance and simplicity did not “guarantee sanity in its use.”16
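The matrix logic amounts to a simple two-threshold classification rule. The cut-offs below (relative share above 1.0 against the largest rival, market growth above 10 percent) are common illustrative conventions, not BCG's canonical values:

```python
def bcg_category(relative_share: float, market_growth: float,
                 share_cut: float = 1.0, growth_cut: float = 0.10) -> str:
    """Place a business unit in the growth-share matrix.
    relative_share: unit's market share divided by its largest rival's.
    market_growth: annual market growth as a fraction (0.10 = 10%)."""
    high_share = relative_share > share_cut
    high_growth = market_growth > growth_cut
    if high_share and high_growth:
        return "star"           # back it
    if high_share:
        return "cash cow"       # milk it
    if high_growth:
        return "question mark"  # the only case needing real thought
    return "dog"                # candidate for divestment
```

Seeger's point is easy to see from the code: four verdicts follow mechanically from two numbers, with no room for the awkward cases he listed.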

It took until 1980 before a major breakthrough in business strategy came out of a business school. Michael Porter, who had the requisite engineering background and an enthusiasm for competitive sports, entered the Harvard MBA program, where he was taught the holistic, multidimensional “business policy” philosophy. Unusually, he then enrolled for a Ph.D. in business economics. One of the courses he took was on industrial organization. This was the area of economics most conducive to business strategy because it studied situations of imperfect competition. In perfect competition, the postulate through which economic theory largely developed, the choices available to buyers and sellers created the potential for equilibrium around a specific price. By definition, perfect competition allowed no scope for an individual unit to have a special and successful strategy. The most imperfect competition would be a complete monopoly where a single supplier could set the price, also leaving little scope for strategy. The oligopolist had options, not fully constrained by the market yet affected by the moves of its competitors. The oligopolist had to be strategic, because he must anticipate these moves. There was no law to govern this situation, which is why Simon declared oligopoly to be “the permanent and ineradicable scandal of economic theory.”17

For economists, the question raised was why certain markets deviated from standard models of perfect competition. Profits should be no more than sufficient to animate the company, but certain industries were extremely profitable. That was because of a lack of competitive pressure, which was the result of the “barriers to entry”—the difficulty faced when trying to establish a new position in a market. The thrust of the economics approach to industrial organization was to find ways to reduce these barriers to make the markets more competitive. With his business school background, Porter saw an opportunity to turn the theory on its head. This was a natural stance for a student of strategy, taking the point of view of the company within the industry rather than the industry as a whole. Instead of asking how the system could be made more competitive, he asked how the unit within the system could exploit and even intensify uncompetitive elements to gain strategic advantage.

Following Ansoff in defining strategy in terms of “relating a company to its environment,” Porter devised a framework to help companies examine their competitive situation. The focus was still on providing a guide to a deliberative process for a large business, but he was more ambitious than Andrews, more focused than Ansoff, and less formulaic than Henderson.18 Porter identified two key issues: seller concentration (what percentage of the market was controlled by the top four firms) and barriers to entry. Out of this came the “five forces framework” for analyzing an industry. The forces were competitive rivalry between firms, bargaining power of suppliers and of buyers, and threats of new entrants and of substitute offerings. A number of factors were connected with each. The presentation was methodical and rigorous, offering basic principles and some specific tactics about how to maintain and improve a competitive position. To the critics who claimed that his analysis was too static, Porter replied that the five forces all needed to be watched precisely because they changed.
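Porter's framework is qualitative, but its structure can be sketched as a checklist in which each force is rated and combined into a rough score of industry attractiveness. The five-point scale and simple averaging below are assumptions for illustration only, not part of Porter's method:

```python
from dataclasses import dataclass

@dataclass
class FiveForces:
    """Illustrative 1 (weak) to 5 (strong) ratings for each force.
    Stronger forces mean less room for profit in the industry."""
    rivalry: int
    supplier_power: int
    buyer_power: int
    threat_of_entry: int
    threat_of_substitutes: int

    def attractiveness(self) -> float:
        """Rough score on a 1-5 scale: weak forces make an industry
        attractive, so the average force intensity is inverted."""
        forces = [self.rivalry, self.supplier_power, self.buyer_power,
                  self.threat_of_entry, self.threat_of_substitutes]
        return 6 - sum(forces) / len(forces)

# A hypothetical industry with intense rivalry and strong buyers and
# suppliers scores as unattractive.
crowded = FiveForces(rivalry=5, supplier_power=4, buyer_power=4,
                     threat_of_entry=3, threat_of_substitutes=4)
```

The value of the framework lay less in any such arithmetic than in forcing a firm to watch all five forces at once, which is why Porter's answer to the "too static" critique was that the forces change.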

For Porter, strategy was all about positioning. The menu of strategies was small and the choice would depend on the nature of the competitive environment, with the aim of finding a position that could be defended against existing competitors and those trying to enter the market. Porter offered three generic strategies: staying market leader by keeping costs down, having a product that was sufficiently different that it could not be challenged by other competitors (differentiation), and identifying a particular part of the market where there were few challengers (market specialization). He argued that it was important to pick one of these strategies, stick to it, and never get “stuck in the middle,” because that would almost “guarantee low profitability.” Since the best position would be extremely profitable, there would then be sufficient resources to improve the position. The key thing was to find and exploit the imperfections in the market. In terms of the SWOT framework, this was about addressing opportunities and threats rather than strengths and weaknesses. There was very little interest in internal organization and the actual implementation of a strategy.

Porter’s method could be criticized for being deductive. He had plenty of examples of tactics used by companies seeking product differentiation or raising barriers, but these were illustrations of propositions derived from his theory. Some of his central claims about the generic strategies and the greater value to be gained by concentrating on market position as against operational efficiency did not seem to fit the evidence. As with all structural theorists the tendency was to assume that structure had “a strong influence in determining the competitive rules of the game as well as the strategies potentially available to the firm.”19 In practice the system was less rigid and certain than the theory assumed, and more susceptible to being transformed by truly imaginative strategies.

One striking feature of Porter’s approach lay in its political implications. This was not something he dealt with explicitly, but as Mintzberg noted: “If profit really does lie in market power, then there are clearly more than economic ways to generate it.”20 The closest Porter came to making the link between competitive position and government assistance was in noting how governments “can limit or even foreclose entry into industries with such controls as licensing requirements and limits on access to raw materials.” The key arena here was that affected by antitrust legislation. Porter was well aware of the issue, noting that companies under antitrust restraints might not feel able to respond to competitors attempting to take a small market share, or how large companies may use a private antitrust suit to harass small competitors.21 He warmed to the theme in his second book, Competitive Advantage, noting how these suits could put financial pressure on competitors. Here he also discussed how barriers to entry could be raised higher than would naturally occur, by such methods as forming exclusive agreements with outlets to freeze out competitors, tying up suppliers, and even working in coalition with other established firms.22 A number of the activities, he noted, were frowned upon by antitrust law and were the subject of successful suits. Porter insisted that he supported antitrust legislation,23 and it was also the case that there was a degree of uncertainty surrounding this legislation in terms of the vigor with which it was applied at any time, often depending on economic circumstances. This uncertainty was a major problem for the strategist, as what might seem acceptable behavior at one moment became unacceptable the next.

In the mid-1980s, Porter advised the National Football League (NFL) in its dispute with the United States Football League (USFL). He characterized the dispute as “guerrilla warfare” and suggested aggressive strategies, such as persuading broadcasters to break their contract with the USFL, poaching the USFL’s best players while encouraging the NFL’s worst to go the other way, and co-opting the most powerful USFL owners while bankrupting the weakest USFL teams. This was cited in evidence when the USFL sought damages from the NFL for its anticompetitive practices. Ultimately, the NFL was found to have violated the law, although only derisory damages were awarded. Porter’s assistant acknowledged that legal issues had not been considered in offering advice; the NFL’s defense was that it had ignored the advice.24

A similar problem emerged with Barry Nalebuff and Adam Brandenburger’s Co-Opetition, an attempt to capture the insights of game theory for a popular audience. The title neatly captured the mixture of cooperation and competition that game theory addressed,25 although the neologism was not actually new.26 Their idea was that it would make sense to cooperate with other players in the industry to expand the business pie while competing over how it was divided up. They noted the complexity of relationships, not only with customers, suppliers, and competitors, but also complements—that is, other players with whom there was a natural cooperative and mutually dependent relationship (for example, hardware and software firms in computing). They discussed the advantages that could be gained by changing the rules of the game or by using tactics to shift perceptions of a position within the game. The influence of game theory was evident, but this was hardly a theoretical work. As with other practical work in the field it took some basic factors and reworked them in a variety of cases, offering readers some insight on how they might approach similar types of problems.

The more explicit recognition of the potential of cooperation, which would be natural in any other area of strategy, always risked appearing anticompetitive and falling foul of antitrust law. Nalebuff and Brandenburger thus celebrated Nintendo’s achievements in gaining a competitive advantage in the computer games market, which allowed them to overcharge their customers (eventually requiring them to settle in the face of a suit from the Federal Trade Commission). The way the analysis was structured led the authors to naturally favor the company over the consumer. Stewart sharply commented that they “praise one company after another for cornering markets and duping customers” before acknowledging antitrust concerns. He accused them of developing an approach to strategy which was about “how to arrange a cartel without having to enter a smoke-filled backroom, how to organize a monopoly without going to the trouble of bribing government officials, and, in general, how to make extraordinary profits without having to make extraordinary products.” As he noted, while they praised General Motors for its credit card strategy, which offered discounts to those who used it, Toyota, which did not bother with a credit card, was building better cars and eating into the market share of General Motors.27

John Rockefeller did not appear in the index to Porter’s Competitive Strategy. He might have found the language and concepts unfamiliar but as one ready to try every trick in the book to position Standard Oil, the broad thrust of the argument would have been well understood. The management strategists of the late twentieth century were operating in an environment shaped significantly by the great trusts of the nineteenth century and the progressive movement’s attempts to deal with them. The logic of any attempt to tame markets was to make life difficult for at least some competitors. While the first wave of management strategists ignored this issue, because they were dealing with firms that were in secure positions or close to the limits of their legal growth, this was not the case with the second wave, which—as exemplified by Porter—did not so much embrace competition as seek ways to subdue and circumvent it. The third wave embraced competition with enthusiasm.
