Chapter 33

Red Queens and Blue Oceans

Here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast.

—The Red Queen in Through the Looking-Glass

Against the backdrop of intense competitive pressures, the role of the manager was increasingly thrown into relief. The rewards on offer at the top of major companies grew, but so did the risks of being fired. Managers’ performance was judged against ever more demanding standards, and short-term profitability of the sort that would impress investors became by far the most important objective. Investing for the long term appeared less attractive than selling off weaker units or taking aggressive action against all perceived inefficiencies.

The challenge to the role of managers was posed by agency theory, derived from transaction cost economics. It directly addressed the issue of cooperating parties that still had distinctive interests. In particular, it considered situations in which one party, the principal, delegated work to another, the agent. The principal could be left in a quandary, not knowing exactly what the agent was up to or whether their views of risk were truly aligned. This issue went to the heart of relationships between owners and managers. The rise of managerialism reflected the view that the agents were the key people. In business and politics, the notional principals—the stockholders/board members and the electorate/politicians—were transitory and amateurish compared with the fixed, professional elite. The progressive separation of ownership and control had been charted by Berle and Means in the 1930s. The question posed now was whether and how the principals could reassert control over their agents.1 If the agents did not wish to be so controlled, they had to take the initiative in demonstrating their value to shareholders or else find ways of releasing themselves from this constraint by becoming the owners as well as the managers.

Agency Theory

Michael Jensen, a Chicago-trained economist at Rochester, was impressed by a 1970 article in the New York Times by Milton Friedman that announced his arrival as an outspoken advocate of free-market economics. Friedman’s target was activist Ralph Nader’s campaign to get three representatives of the “public interest” on the board of General Motors. Friedman countered that the only responsibility of the corporation was to make profits, so long as it engaged in “open and free competition without deception or fraud.” His arguments directly challenged the managerialism of the past two decades: the leaders of the big corporations should neither expect to act as agents of the state nor expect the state to shield them from competition. This led Jensen and a colleague, William Meckling, to try to turn Friedman’s plain speaking into economic theory. They found little to work with. They then made a big leap, taking what had become a contentious hypothesis when applied to finance—that markets were sufficiently efficient to provide a better guide to value than individuals, notably fund managers—and applying it to management. In this way, Justin Fox remarked, “the rational market idea” moved from “theoretical economics into the empirical subdivision of finance.” There it “lost in nuance and gained in intensity.” It was now seeking to use the “stock market’s collective judgment to resolve conflicts of interest that had plagued scholars, executives, and shareholders for generations.”2 By assuming perfect labor markets, in which employees cost no more than they were worth to the company and could if necessary move without cost to an alternative job, they concluded that the most important risks were those carried by shareholders.3

By 1983, because of the growing interest shown by economists, Jensen felt able to claim that a “revolution would take place” over the coming decades “in our knowledge about organizations.” Though organization science was in its infancy, the foundation for a powerful theory was in place. This involved departing from the economists’ view of the firm as “little more than a black box that behaves in a value- or profit-maximizing way” in an environment in which “all contracts are perfectly and costlessly enforced.” Instead he argued that firms could be understood in terms of systems geared to performance evaluation, rewards, and the assignment of decision rights. Relationships within an organization, including those between suppliers and customers, could be understood as contracts. Taken together they formed a complex system made up of maximizing agents with diverse objectives. This system would reach its own equilibrium. “In this sense, the behavior of the organization is like the equilibrium behavior of a market.” This insight, he argued, was relevant to all types of organizations. It led to cooperative behavior being viewed “as a contracting problem among self-interested individuals with divergent interests.”4

The prescriptive implication of this approach was that the owners had every reason to worry that their managers were getting distracted. Getting the interests of owners and managers back into alignment through monitoring and incentives required challenging the claims of managerialism. Deregulated markets were favored because they put at risk the positions of managers who were not delivering value for shareholders. Contrary to the pejorative connotations of hostile takeovers, the argument of Jensen and his colleagues was that these could increase the efficiency of the market. Managers dared not get sidetracked by loose and fashionable talk of multiple “stakeholders” but had to keep their focus on the needs of the “shareholders” for profit maximization. While managers might complain about takeovers, these were a way of increasing value, redeploying assets, and protecting companies from mismanagement. “Scientific evidence indicates that the market for corporate control almost uniformly increases efficiency and shareholders’ wealth.”5 Companies were viewed as a bundle of assets, formed and reformed according to the demands of the market. The market was all-knowing, while managers were inclined to myopia. By 1993 Fortune could declare: “The Imperial CEO has had his day—long live the shareholders.”6

Adopting this view reduced the need for strategy and management. Once free-market determinism was adopted, it was possible to “assume a management” just as all other factors might be assumed. Management became just another “substitutable” commodity or, even worse, an opportunistic actor “in need of market discipline.”7 The manager’s obligations were not inward-looking but only outward-looking, toward the shareholders. This was despite the fact that shareholders might be transitory and incoherent as a group and short term in perspective, or that efficient organizations had to be formed and nurtured if the moves the market demanded were to be implemented. The implications for the standing and vocations of managers were profound. The theory suggested that an organization’s history and culture were irrelevant, and that it might as well be staffed by people who were strangers to each other. Managers trained in this theory would offer no loyalty and expect none in return. Their task was to interpret the markets and respond to incentives. Little scope was left for the exercise of judgment and responsibility.

Management: A Dangerous Profession

In the early 1980s, warnings were first heard about the potential consequences of such logic for the running of businesses. The malaise was identified in 1980 by Robert Hayes and William Abernathy, both professors at the Harvard Business School. American managers, they complained, had “abdicated their strategic responsibilities.” Increasingly drawn from marketing, finance, and law rather than production, they sought short-term gains rather than long-term innovation. Particularly pointed, especially in the leading business school journal, was the assertion that the problem lay in an increasing managerial reliance on “principles that prize analytical detachment and methodological elegance over insight, based on experience, into the subtleties and complexities of strategic decisions.” Within both the business community and academia, a “false and shallow concept of the professional manager” had developed. Such people were “pseudoprofessionals” who had no special expertise in any particular industry or technology but were believed to be able to “step into an unfamiliar company and run it successfully through strict application of financial controls, portfolio concepts, and a market-driven strategy.” It had become a form of corporate religion, with its core doctrine that “neither industry experience nor hands-on technological expertise counts for very much.” This helped to salve the conscience of those who lacked these qualities, but it also led to decisions about technological matters being taken as if they were “adjuncts to finance or marketing decisions” and could therefore be expressed in simplified, quantified forms.8

At the end of the decade, an unimpressed Franklin Fisher observed, “Bright young theorists tend to think of every problem in game-theoretic terms, including problems that are easier to think of in other forms.”9 Even in oligopoly theory, to which game theory seemed most suited, Fisher argued that it had not made a fundamental difference. It remained the case, after game theory as before, that a “great many outcomes were known to be possible. The context in which the theory was set was important, with outcomes dependent on what variables the oligopolists used and how they formed conjectures about each other.” The effects of market structure on conduct and performance, he argued, had to take account of context. It was true that game theory could model these contexts, but this would not be in a convenient language. In response, Carl Shapiro argued that game theory had much to show for its efforts. But the prospect he offered was explicitly close to Schelling, suggesting not so much a unified theory but tools to identify a range of situations, ideas of what to look for in particular cases, but still dependent on detailed information to work out the best strategy. He also suspected “diminishing returns in the use of game theory to develop simple models of business strategy.”10 The subtle and complex reasoning described in the models was rarely replicated by actual decision-makers, who were “far less analytic and perform far less comprehensive analyses than these models posit.”11 Saloner acknowledged the challenge, especially if the models were taken literally and supposed to mirror an actual managerial situation with the aim of coming up with a prescription for action. He argued that “the appropriate role for microeconomic-style modeling in strategic management generally, and for game-theoretical modeling in particular, is not literal but rather is metaphorical.”12 It was not a distinction that was regularly recognized. 
It was all very well producing elegant solutions, but they were of little value if they were to problems that practitioners did not recognize and expressed in forms that they could not comprehend, let alone implement.

Although there were available academic theories that might assist in the design of organizations fit for particular purposes or at least explain why apparently rational designs produced dysfunctional results, nobody in business or government seemed to be taking much notice. Despite this growing divergence, the framework for research was difficult to change. Journals put a premium on established theories and methods. The apparently harder, quantitative work inspired by the economists assuming rational actors was dominant. Because modern software made large-scale number crunching possible, there was also a large database mentality. Research students were advised to avoid qualitative studies.13 The effects could be seen not only in the research but in the norms for behavior the standard models were suggesting. In 2005, Sumantra Ghoshal observed:

Combine agency theory with transaction costs economics, add in standard versions of game theory and negotiations analysis, and the picture of the manager that emerges is one that is now very familiar in practice: the ruthlessly hard-driving, strictly top-down, command-and-control focused, shareholder-value-obsessed, win-at-any-cost business leader.14

During the 1990s, theories were developed for this new breed of manager, promising success that could be measured in profit margins, market share, and stock prices. They reinforced the challenge to the idea of the manager as the secure and steady, but essentially gray, bureaucrat who knew his place in the large corporation, which in turn knew its place in the larger economy. They offered “a conception of management itself in virtuous, heroic, high status terms.”15 As James Champy, who was at the heart of the neo-Taylorist push in the 1990s, observed, “Management has joined the ranks of the dangerous professions.”16 The sense of danger reflected the greater demands being placed on managers, as they had to fear not only absolute but even relative failure. In the world as celebrated by Jensen, stockholders were demanding faster and larger returns, and predators had their eyes open for potential acquisitions. Survival and success required not only attention to customers and products but a readiness to be ruthless, to hack away at the least efficient parts of the business, to push away and overwhelm competitors, and to lobby hard for changes in government policy—especially deregulation—that would open up new markets.

Attitudes toward finance had been transformed. The oil shocks and inflation of the 1970s extended a period of modest returns on equities, combined with a traditional reluctance to carry excessive debt. By the end of that decade, new and imaginative ways of raising capital were found. Companies could grow ambitiously and quickly by issuing bonds. Those investors prepared to take a greater risk could anticipate higher yields. With capital plentiful, many companies grew through mergers and acquisitions rather than by developing new products and processes. Attitudes became increasingly aggressive, with the focus on extracting value from corporate assets that had been missed by others or which current owners were unable to exploit. The logical next step was for the senior executives of companies to challenge the ownership model which saw others get the greatest benefit from their achievements. Management buyouts liberated them from their boards, providing greater scope for initiatives while incidentally generating a lot of money. This surge of activity eventually ran its course as the deals became more expensive and the returns disappointed. The debt still had to be serviced, and if it was too large, bankruptcy followed.

Businesses were now judged by their market value. This should reflect their intrinsic quality and the longer-term prospects for their goods and services, but that was not always easy to assess, and all the incentives were for those who held the stock, including managers, to talk up its current value. This made success tangible and measurable compared with the longer-term development of a business, which might require patience and low returns before the rewards became evident. But assessments of market value were vulnerable to sentiment and hype, as well as downright fraud. The energy company Enron was the prize exhibit of the latter possibility. This risk was greatest in areas that were hard to grasp, whether because of the sophistication of the financial instruments or the potential of the new technologies. Within companies, any activities that might be holding down the price, not providing the value that was being extracted elsewhere, came to be targeted. Market valuations thus encouraged remorseless cost cutting.

Business Process Re-engineering

The Japanese success over the postwar decades could be taken as a triumph of a focused, patient, coherent, and consensual culture, a reflection of dedicated operational efficiency, or else a combination of the two. Either way, the pacesetter was the car manufacturer, Toyota. Having spent the Second World War building military vehicles, the company struggled after the war to get back into the commercial market. Hampered by a lack of capital and technical capacity and by a strike-prone, radicalized work force, Toyota would probably have gone bankrupt were it not for the Korean War and large orders to supply the American military with vehicles. It then began to put together what became known as the Toyota Production System. The starting point was a solution to labor unrest by a unique deal which promised employees lifetime employment in return for loyalty and commitment. Together they would work to establish a system which would reduce waste. Ideas for improving productivity could be raised and explored in “quality circles.” To visitors from a country where everything was still in short supply, a 1950 visit to a Ford plant in Michigan left an abiding impression of the wastefulness of American production methods. Toyota aimed to keep down inventories and avoid idle equipment and workers. With excess inventory identified as both a cause of waste and a symptom of waste elsewhere in the system, methods were developed to process material and then move it on “just in time” for the next operation. Within Japan, Toyota’s methods were emulated and further developed by other companies. In one industry after another, including motorcycles, shipping, steel, cameras, and electronic goods, Western companies found themselves losing market share to the Japanese. Government policy, the need to start from scratch after the war, and a cheap currency all helped.

In comparison with Japanese super-managers, their American counterparts appeared as a feeble bunch. During the course of the 1970s, the management literature became more introspective as mighty American firms were humbled by the Japanese insurgency, not only losing markets to more nimble opponents but also being caught out by a business culture that was far more innovative. Although the momentum behind the Japanese advance came to a juddering halt after the hubris and boom years of the late 1980s, Western companies were determined to mimic the Japanese by radical approaches to their own operational effectiveness. These came first under the heading of total quality management (TQM) and then business process re-engineering (BPR). Of the two, BPR was more significant in its impact and implications. The basic idea behind BPR was to bring together a set of techniques designed to make companies more competitive by enabling them to cut costs and improve products at the same time. The challenge was posed in terms of a fundamental rethink of how the organization set about its business rather than a determined effort to make the established systems run more efficiently. Information technologies were presented as the way to make this happen, by flattening hierarchies and developing networks. A close examination of what the organization was trying to achieve would lead to questions about whether goals were appropriate and whether structures could meet the goals. The idea seemed so attractive that Al Gore sought to re-engineer government while he served as vice president.

The underlying assumption of BPR, as with agency theory, was that an organization could be disaggregated, as if it were a piece of machinery, into a series of component parts, to be evaluated both individually and in relation to each other. It could then be put back together in an altered and hopefully improved form, with some elements discarded altogether and new ones added where necessary, to produce a new organization that would work far more effectively. Once an organization was viewed in these terms, there was no need for incrementalism. It should be possible to start from scratch and rethink the whole organization.

Re-engineering is about beginning again with a clean sheet of paper.

It is about rejecting the conventional wisdom and received assumptions of the past. Re-engineering is about inventing new approaches to process structure that bear little or no resemblance to those of previous eras.17

Thus, the history of an organization could be ignored and its old culture replaced with a brand new one. Workers would be indifferent and docile, or possibly even enthused by this process.18

At one level, BPR appeared strategic because it was demanding a fundamental reappraisal of businesses. But the main driver was not an assessment of competitive risks and possibilities or even internal barriers to progress but rather the potential impact of new technologies on efficiency. In this respect there were parallels with the coincident “revolution in military affairs.” There was the same claim that this was the start of a new historical epoch, the same expectation that affairs would be shaped by the available methodologies rather than the competitive challenge, the same presumption that technology would drive and everything else would follow, and the same tendency to take the underlying strategy for granted, assuming that opponents/competitors would accept the same path, rather than starting with the strategy and working out what processes were required.

Michael Hammer, one of the figures most associated with BPR, provided the transformational tone when he explained the idea in the Harvard Business Review. “Rather than embedding outdated processes in silicon and software, we should obliterate them and start over. We should . . . use the power of modern information technology to radically redesign our business processes in order to achieve dramatic improvements in their performance.”19 Hammer teamed up with James Champy, chairman of CSC Index, Inc., a consulting firm that specialized in implementing re-engineering projects.20 Their 1993 book, Reengineering the Corporation, sold nearly two million copies. The rise of the concept was startling. Prior to 1992, the term “re-engineering” was barely mentioned in the business press; after that it was hard to escape it.21 A survey in 1994 found that 78 percent of the Fortune 500 companies and 60 percent of a broader sample of 2,200 U.S. companies were engaged in some form of re-engineering, with several projects apiece on average.22 Initial reports were also positive about success rates. Consulting revenues from re-engineering were an estimated $2.5 billion by 1995. While Champy expanded CSC Index’s revenues from $30 million in 1988 to $150 million in 1993, Hammer gave seminars and speeches for high fees. Fortune magazine described him as “re-engineering’s John the Baptist, a tub-thumping preacher who doesn’t perform miracles himself but, through speeches and writings, prepares the way for consultants and companies that do.”23

There were both practical and rhetorical reasons for the success of the concept. At a time of tumult and uncertainty for industry, Champy and Hammer were able to play on the fear of being left behind, captured by Peter Drucker’s endorsement on the cover of their book. “Reengineering is new, and it must be done.” Hammer, in particular, pushed forward the message that however tough and brutal it all might be, the alternative was so much worse: “The choice is survival: it’s between redundancies of 50 per cent or 100 per cent.” Senior managers must hold their nerve: “Companies that unfurl the banner and march into battle without collapsing job titles, changing the compensation policy and instilling new attitudes and values get lost in the swamp.” The anxiety generated by such language could be used to press forward: “You must play on the two basic emotions: fear and greed. You must frighten them by demonstrating the serious shortcomings of the current processes, spelling out how drastically these defective processes are hurting the organization.”24

BPR began as a set of techniques. It was soon elevated into the foundation of a transformational moment. Hammer claimed that “just as the Industrial Revolution drew peasants into the urban factories and created the new social classes of workers and managers, so will the Reengineering Revolution profoundly rearrange the way people conceive of themselves, their work, their place in society.”25 Champy took this revolutionary theme a step further by arguing: “We are in the grip of the second managerial revolution, one that’s very different from the first. The first was about a transfer of power. This one is about an access of freedom. Slowly, or suddenly, corporate managers all over the world are learning that free enterprise these days really is free.”26 Speaking of the virtues of “radical change,” Champy described to managers the “secret satisfaction” of learning to do “what other managers in your industry thought to be impossible.” They would not only “thrive” but would also “literally redefine the industry.”27

Thomas Davenport, who had been director of research at the Boston-based Index Group, which was eventually turned into the CSC Index, was one of those closely associated with the development of the original concept. He later described how a “modest idea had become a monster” as it created a “Reengineering Industrial Complex.” This was an “iron triangle of powerful interest groups: top managers at big companies, big-time management consultants, and big-league information technology vendors.” It suited them all to make BPR appear not only essential in theory but successful in practice. The result was that specific projects were “repackaged as reengineering success stories.” Managers found that they could get projects approved if they used the BPR label, while consultants repackaged what they had to offer as BPR specialists, discarding the previous set of buzzwords.

Continuous improvement, systems analysis, industrial engineering, cycle time reduction—they all became versions of reengineering. A feeding frenzy was under way. Major consulting firms could routinely bill clients at $1 million per month, and keep their strategists, operations experts, and system developers busy for years.

As companies made layoffs, these too were rebranded as “reengineering.” Whatever the actual relationship, staff reductions “gave reengineering a strategic rationale and a financial justification.” Meanwhile the computing industry also had a stake in BPR as it encouraged large expenditure on hardware, software, and communications products.

It did not take too long for the bubble to burst. Too many claims had been made, too much money had been spent, and too much resistance was growing—largely because of the association of re-engineering with layoffs—and it had all been accompanied, according to Davenport, by too much “hype.” “The Reengineering Revolution” took potentially valuable innovation and experimentation but added exaggerated promise and heightened expectation leading to “faddishness and failure.” The “time to trumpet change programs is after results are safely in the can.” Most seriously, the fad treated people as if they were “just so many bits and bytes, interchangeable parts to be reengineered.” Dictums such as “Carry the wounded but shoot the stragglers” were hardly motivating, while young consultants with inflated salaries and even higher billing charges treated veteran employees with disdain. Whether or not this was a moment of historic change, employees were naturally inclined to think about protecting their own positions rather than enthuse about broad and expansive visions for the future of the company that could leave them without a job.

By 1994, the CSC Index “State of Reengineering Report” indicated that half the participating companies were reporting fear and anxiety, which was not surprising as almost three quarters of the companies were seeking to eliminate about a fifth of their jobs, on average. Of the re-engineering initiatives completed, “67% were judged as producing mediocre, marginal, or failed results.” As was often the fate of the examples cited in the bestselling management books, companies hailed as champions of BPR were discovered to have either gotten into serious trouble or abandoned the idea. The CSC Index itself was in jeopardy. Its credibility was not helped by revelations in Business Week describing an intricate scheme to promote what Michael Treacy and Fred Wiersema, two of the CSC Index’s consultants, hoped to be the next big book in the field, The Discipline of Market Leaders. The aim was to get it on the New York Times bestseller list. It was alleged (though denied) that employees of CSC Index had spent at least $250,000 purchasing more than ten thousand copies of the book, with yet more copies being bought by the company. The basis of the investment lay in the fees that were expected to come back to the company and consultants through their association with the “next big thing.” Treacy was giving some eighty speeches a year, and his fee had jumped up to $30,000 per talk from $25,000. The importance of these books was further illustrated by reports that they were ghostwritten to ensure maximum effect. The allegations backfired on CSC Index. The New York Times re-jigged its bestseller list and also took a contract away from CSC Index. The next year Champy, whose book Re-Engineering Management was also implicated in the scandal, left the company. The firm, which had six hundred consultants at its peak, was liquidated in 1999. Its rise and fall was a symptom of a business that had become dependent upon staying ahead of the latest fashion.28

Escaping Competition

Did the road to success lie in emulating the methods of those who were already successful? Precisely because their techniques were well known, following them was likely to yield diminishing returns. Like the operational art in military affairs, operational excellence offered little by itself if put in the service of a flawed strategy. This was why Michael Porter questioned whether Japanese firms had any strategy at all—at least as he understood the term, that is, as a means to a unique competitive position. The Japanese advance during the 1970s and 1980s, he argued, was not the result of superior strategy but of superior operations. The Japanese managed to combine lower cost and superior quality and then imitated each other. But that approach, he noted, was bound to be subject to diminishing marginal returns as it became harder to squeeze more productivity out of existing factories and others caught up by improving the efficiency of their operations. Cutting costs and product improvements could be easily emulated and so left the relative competitive position unchanged. In fact, “hypercompetition” left everyone worse off (except perhaps the consumers). For Porter, a sustainable position required relating the company to its competitive environment. Outperformance required a difference that could be preserved.29

The problem facing companies trying to maintain a competitive advantage, when everyone was trying to improve along the same metric, was described as the “Red Queen effect.” It was named after the line in Through the Looking-Glass with which this chapter opened. This was originally the name of a hypothesis used by evolutionary biologists to describe an arms race between predators and prey, a zero-sum game between species, none of which could ever win.30 In the business context, it tended to be used more to describe a race between similar entities. So, for example, early and striking gains might be made by saving time on standard processes, but soon others would catch up and gains would become increasingly marginal. The comparison was with a war of attrition. Focusing solely on operational effectiveness would lead to mutual destruction, until somehow the competition was stopped, often by means of consolidation through mergers.31

If the main arena was full of increasingly worn and wan warriors desperately trying to land blows on equally exhausted competitors as they dismissed the walking wounded and tripped over company corpses, then the logic was to find a less crowded, less competitive, and much more profitable place. The history of business after all was one of the rise and fall of whole sectors and of companies within them. It was an arena marked by instability. Of the original S&P 500 companies in 1957, for example, only seventy-four were still on the list thirty years later. Much management strategy literature was addressed to those in charge of existing companies, whereas in practice the most important innovations often came with new companies, which grew with new products. As noted by W. Chan Kim and Renee Mauborgne, there were “no permanently excellent companies, just like there are no permanently excellent industries.” For this reason they argued that the hopeless firms were likely to be those competing without end in the “red oceans” instead of moving out to the blue oceans where they might “create new market space that is uncontested.” Those who failed to do so would go the way of many past companies and simply disappear or be swallowed up. They argued that the “strategic move” should be the unit of analysis rather than the company although they did not suggest that blue oceans were only found by new companies.

Kim and Mauborgne contrasted business with military strategy. The military was bound to focus in a fight “over a given piece of land that is both limited and constant,” while in the case of industry the “market universe” was never constant. Confusing their metaphors somewhat, they therefore argued that accepting red oceans meant accepting “the key constraining factors of war,” which were “limited terrain and the need to beat an enemy to succeed,” while failing to capitalize on the special advantage the business world offered of being able to “create new market space that is uncontested.”32 If their theory really depended on this idea of military strategy as being solely about battle, then it was off to a poor start. We have charted in this book how the desire to avoid battle except on the most favorable terms animated much military strategy. There was also a similar impulse at work here, the belief that the unimaginative plodders would stick with the most simplistic formulas, creating opportunities for the bold and the visionary to gain the advantage. Though Kim and Mauborgne acknowledged that red oceans were sometimes unavoidable, and that even blue oceans might eventually turn red, they made it clear that they found red ocean strategy fundamentally uninteresting. And here they fell exactly in line with the tradition in military strategy that sought to escape the brutal logic of battle and urged the application of superior intelligence to achieve political objectives while avoiding slaughter. There was the same infatuation with dichotomy, as if the choice was always to go one way or the other—direct/indirect, annihilation/exhaustion, attrition/maneuver, red ocean/blue ocean.

It was rarely denied that the orthodox route might at times have to be followed, but there was normally a clear implication that this could never satisfy the truly creative. As with so much writing on military strategy, the best way was illustrated by examples of success from companies that had transformed themselves and their industries, whether through meticulous plan, an empowered workforce, lateral thinking, bold re-engineering, or innovative design. The failures tended to be those who had stuck with orthodoxy, drifted in complacency, or moved from one crisis to another without ever getting a grip.

In an appendix to their book, Kim and Mauborgne developed a more analytical distinction between the red and blue oceans, now described as the structuralist and reconstructionist strategies. The structuralist approach derived from industrial organization theory, with Porter its most famous proponent. It was “environmentally determinist” because it took the market structure as given and thus posed the strategic challenge of competing for a known customer base. To succeed meant addressing the supply side. This meant doing whatever competitors did but better, relying on either differentiation or low cost. Sufficient resources might result in a form of victory, but the competition was essentially redistributive in that the share gained by one would be lost by another, which led to an attritional logic. The theory assumed exogenous limits. By contrast, the reconstructionist approach was derived from endogenous growth theory, which claimed that the ideas and actions of individual players could change the economic and industrial landscape. Such a strategy would suit an organization with an innovative bent and sensitivity to the risks of missing future opportunities. This addressed the demand side by using innovative techniques to create new markets. Those following a reconstructionist strategy would not be bound by the existing boundaries of the market. Such boundaries existed “only in managers’ minds,” so with an imaginative leap new markets might be identified. A new market space could be created through a deliberate effort. The wealth was new, and need not be taken from a competitor.33

In a later article, Kim and Mauborgne developed the distinction further, identifying the importance of not only a value proposition that would attract buyers but also a profit proposition so that money could be made, and lastly a people proposition to motivate those within the organization to work for or with the company. From this they defined strategy as “the development and alignment of the three propositions to either exploit or reconstruct the industrial and economic environment in which an organization operates.” If these propositions were out of alignment—a great value proposition but no way of making a profit or a demotivated staff—then the result would be failure. Only at the top of the organization, with a senior executive able to take a holistic view, could the propositions be developed. On this basis they argued that “strategy can shape structure.” The title marked the shift from Chandler, whose formulation was about the effect of strategy on internal organization, to the new quest to use strategy to change the external environment.34

This takes us back to Ansoff’s distinction between strategy as a relationship to the environment and strategy as decision-making with imperfect information. The broad thrust of business strategy came under the first heading. The second more campaigning form of strategy, which dominated the military literature, was put in a more subordinate position, a challenge of implementation. Porter argued that the environment shaped and limited a business’s strategic options; Kim and Mauborgne claimed that these limits could be transcended through imagination and innovation. Porter claimed that the competition could be beaten by either differentiation or price; Kim and Mauborgne claimed that it was better still to develop products in areas where there was no competition, but they then had to develop a business case and have the staff to make it work.

This view of strategy as a general orientation toward the environment offered a framework for evaluating all other endeavors within the organization. Strategy of this sort had to be long term, and it might have the elements of a plan, with an anticipated sequence of events geared to an ultimate goal. The strategy could be much looser than that, however, setting out a number of goals with some sense of priorities, available resources, and preferred means, maintaining considerable flexibility to allow for changing circumstances. How well either approach would work would depend on the nature of the environment. The more stable the environment, the less the freedom to maneuver and so the less scope for a strategy of any sort other than one of internal adaptation. Even a reconstructionist strategy would still be affected by responses from potential competitors who might appreciate what was going on or other actors who might be able to influence the demand for new products.

Such theories still lacked a formulation as compelling as Clausewitz’s portrayal of the dynamic interaction of politics, violence, and chance. There was not even a concept comparable to Clausewitz’s friction, although executives were always likely to experience their own versions of the fog of war. There were few incentives to dwell on such matters in a literature increasingly infused with promotions of particular strategic nostrums as the author’s unique product. The promise was of success following a true interpretation of these nostrums according to circumstances, and the will to see it through. The tendency therefore was to play down the unforeseeable factors that could frustrate the best laid plans, whether a rogue calculation in product design, a misjudged advertisement, sudden fluctuations in exchange rates, or a terrible accident. Moments could arise in business as in politics when long-term aspirations had to be put to one side in a desperate struggle for survival, as a reliable market evaporated or development process failed to deliver or debts were called in. At such moments, priorities would need to be clarified, help sought wherever it could be found, and exceptional demands made of the organization. Other types of events might require no more than midcourse corrections or a reappraisal of one element of the overall approach. Knowledge of a coming event—such as a presentation to investors, a product launch, or a meeting with customers—could raise issues that had hitherto been neglected or illuminate aspects of the changing environment that had been missed before.

The influence of equilibrium models from classical economics on business strategy remained strong, while alternative concepts of non-linearity, chaos, and complex adaptive systems, though picked up by military strategists, were less in evidence. An article by Eric Beinhocker pointed to the challenge. An open system constantly in flux, shaped and reshaped by many agents acting independently, could seem more relevant to companies than a closed system tending to equilibrium. For example, a characteristic of complex adaptive systems was described as “punctuated equilibrium,” referring to times of relative calm and stability interrupted by stormy restructuring periods. At such times, those whose strategies and skills were geared to the stable periods risked sudden obsolescence. Those who survived were likely to have prepared to adapt even if they could not be sure what adaptations would be required. Strategy, therefore, could not be based on a “focused line of attack—a clear statement of where, how and when to compete,” but instead on preparations to perform well in a variety of future environments. Small organizations with relatively few parts were unlikely to adapt as well as those with more parts and a larger repertoire of responses to new situations, but after a certain point the capacity to adapt would fall off as response times shortened. There was a new balance to be struck, between complete resistance to change on the one hand and oversensitivity to shifts in the environment on the other, between stasis and chaos.35

A strategy could never really be considered a settled product, a fixed reference point for all decision-making, but rather a continuing activity, with important moments of decision. Such moments could not settle matters once and for all but provided the basis for moving on until the next decision. In this respect, strategy was the basis for getting from one state of affairs to another, hopefully better, state of affairs. Economic models might find ways of describing this dynamic but were less helpful when it came to guidance on how to cope.