3
In 1977, business historian Alfred D. Chandler Jr. published The Visible Hand: The Managerial Revolution in American Business.1 The book tells the story of how successive generations of managers learned to make mass production into an engine of economic growth and corporate expansion. In Chandler’s time, the mass production firm had an air of inevitability, as though it were history’s final word on the question of economic efficiency. Success was taken to be self-perpetuating. A firm at the forefront of its industry—whether in cars, chemicals, or breakfast cereals—used the most efficient methods of production to earn enough to invest in the next round of expansion, and leveraged dedicated research laboratories and marketing staffs to mold the future in its image. Incumbency was a prize, and winners took all.
Twenty years later, this confidence had begun to wane. In 1997, Chandler’s Harvard Business School colleague Clayton M. Christensen published The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail.2 Rather than boldly setting the terms of the future, the great corporation was now portrayed as perpetually vulnerable to disruption by the development of new technologies and markets. In this new world order, incumbency was a trap. Attempts at foresight—whether through sophisticated tests of the market or tried-and-true customer surveys—only produce a false sense of security. There is always some marginal technology, ignored by experts or dismissed as irrelevant, that can pose a challenge to dominant solutions. Instead of trying to anticipate developments or head off disruptions, firms must acknowledge deep uncertainty and open themselves to surprises.
These two books neatly encapsulate an epochal shift in governance still underway today across large economic and political institutions in the most dynamic parts of the world economy. For much of the twentieth century, firms and regulatory bodies operated under the presumption of a stable environment—or at least with confidence in their capacity to maintain stability. In the mass production economy, the trajectory of development was always more or less clear. With the stark rise of uncertainty in recent decades, however, both large corporations and government agencies in advanced sectors have mostly abandoned the pretense that they can predict the future. Instead they aim to make the most of conditions they cannot foresee, inviting a clash of multiple design approaches and embracing flexible organizational structures that can be reorganized as circumstances change. In a word, experimentation is now the order of the day.3 Since the turn of the millennium, many economic and political institutions have developed robust new methods to learn from the surprises as well as shortcomings that arise when production lines and regulatory frameworks—now built as much to reveal stress as to shield against it—meet unanticipated conditions.
The threat of climate change and challenge of decarbonization are surely among the most testing circumstances these institutions face. The central argument of this book is that institutions at all levels will almost certainly fail to meet this challenge if they do not rapidly develop these capacities for experimentation and learning. We must, in short, inaugurate a new era of experimentalist governance. But to do so, we must get clear about exactly what elements of emergent institutions contribute to their experimentalist success. This chapter distills the principles and practices of these new forms of organized provisionality, all in the hope of aiding their adoption where they are so acutely needed.4
Experimentalism is at once an organizational structure that routinizes self-monitoring and mutual correction between ground-level problem solvers and those orchestrating their efforts; a form of deliberation that uses doubt and disagreement to progress despite uncertainty; and a set of incentives that, by making it risky to bet on the status quo, encourages actors to adopt such structures and deliberative routines when the public interest requires. In this chapter, we examine experimentalism from the perspectives of each of its three aspects or components—governing organizations, deliberation, and incentives. But first we give some preliminaries, situating experimentalism within larger debates about governance over the last few decades.
Governance
If democracy is government by and for the people, governance is the mesh of public and private authorities—formal laws, informal procedures, and shared conventions—that determine in practice who regularly participates in democratic self-determination and under what conditions. To give only a few examples, it is governance that determines whether a government provides transportation or health services itself, or else contracts with private providers. It is governance that determines whether regulation is adversarial or cooperative, and what those terms actually mean. And it is governance that determines the terms and institutions through which workers can collectively bargain.
Because governance links formal law to informal practice and public to private decision-making, it is sensitive to changes in economic and political conditions often well before such changes prompt reconsideration of legislation or regulatory rules—if they ever do. In short, law tends to lag where governance leads. For this reason, governance is frequently the language in which think tankers, academics, and business leaders broach proposals for reform too urgent to be postponed, yet too speculative to be codified into law. From this point of view, governance is a kind of test bed for urgent and often highly consequential adjustments to new circumstances.
The rise of uncertainty is recasting the nature of the governance problem. For most of the last century, governance debates have been concerned with determining the identities and aligning the interests of principals and their agents. On this model, principals—whether the sovereign people, a legislative body, a government administrator, the controlling owner of a corporation, or a corporate manager—have plans and projects. But to realize these plans, principals must rely on agents with interests of their own. If the background conditions are fixed, the principal’s problem is to devise an incentive system that induces agents to spend their efforts in achieving the principal’s goal rather than using the discretion their position affords to disguise self-interested behavior as dedication to the project. For instance, tying manager compensation to stock price increases links the interests of principals and agents when the principals are shareholders.5
When the background conditions are changing, however—as is typical of the episodes that provoke a concern with governance—the problem of aligning interests becomes entwined with the larger problem of determining which actors should be principals in the first place. In extreme cases, the roles can be reversed. For the architects of the New Deal, for instance, shareholders were too dispersed and removed from day-to-day operations to exercise control of the corporation. High-level managers became the principals of the corporation. The New Deal checked their discretion by subjecting them to regulatory oversight and enacting a reform of collective bargaining that allowed trade unions to exercise what later came to be called “countervailing” power. But decades later, as managers built empires by expanding into lines of business for which they were unprepared, activist shareholders, claiming to be the true corporate principals, broke conglomerates into separate pieces, paying for the takeovers with massive debts.6 Today shareholder supremacy is itself under attack for shortsightedly and unjustly favoring equity over other stakeholder interests, such as those of employees or suppliers.7 In the world of mass production, solutions changed with the identities of who was accountable to whom, but all addressed the principal-agent problem.
Even as these debates continue, the principal-agent relationship has been breaking down under uncertainty—first in pockets of the economy (like automobiles and semiconductors) and areas of regulation (like pharmaceuticals or air pollution) especially exposed to rapid change, and then in fits and starts more generally. In a risky world, actors can assign probabilities to outcomes and incentivize behavior that leads to the desired ones. Under uncertainty, it’s impossible to anticipate what the outcomes will be and hence impossible to assign them probabilities.8 In these circumstances, the challenge for governance changes radically. Under uncertainty, no actor alone can formulate plans with the precision and confidence necessary to engage agents for precise tasks, let alone devise methods to incentivize and hold them accountable.9 Conception cannot usefully be separated from execution. Actors instead have to collaborate, defining projects in the very process of trying to carry them forward, and using such progress as they make to reassess the feasibility of the undertaking along with the capabilities and reliability of their partners. In a stable world, in other words, agents execute steps in the plans of their principals, and the fact of their interdependence is covered over by the serviceable fiction that the principal is in control. Under uncertainty, however, planning and execution are inextricably connected; pooling their knowledge and experience, actors use the execution of provisional plans to revise their joint goals. Their mutual dependence is as open as the fallibility of their projects.
Over the last few decades, in advanced sectors of the economy and public administration, this kind of collaboration has been institutionalized and given legal form, keeping cooperating actors accountable to each other despite the fluidity of their relations and transience of their plans, and thus protecting them against the vulnerabilities that their interdependence creates. Organizations are designed so that anomalies and surprises touch off an investigation of possible improvements rather than efforts to enforce the existing structure against newly identified risks. Regulation and contracts, long intended to be proof against every imaginable contingency, are likewise being reconceived as open to learning through use. The form of administrative decision-making is shifting as well, from the promulgation of rules to the issuance of guidance, in recognition of the impossibility of certitude and therefore the acceptance that directives will routinely need to be revised. In the next section we sketch these developments, which together create the resources for the rapid learning and self-correction required for the vast transformation under uncertainty necessitated by decarbonization.
Experimentalism as Organizational Structure
The best way to understand experimentalism as an organizational principle is to see how it emerged as a response to the transformation of the mass production economy. In this section we sketch, in necessarily bold strokes, how we went from a world of self-perpetuating efficiency gains along a familiar trajectory to a world in which an open acknowledgment of uncertainty as the sheer inability to anticipate future states of the world is the precondition for success. In such a world, resilience is achieved not by buffering organizations against known risks but rather by designing them to be easily reconfigured when circumstances unforeseeably change. Resilience comes not by making organizations as self-sufficient as possible but instead by equipping them to cooperate with outsiders as needed. This transformation, pioneered and today still most conspicuous in the way the production of goods and services is organized and products are designed, thus goes hand in hand with more collaborative forms of contract and regulation as well.
THE FATE OF THE MASS PRODUCTION FIRM
The efficiency gains of the mass production corporation came through economies of scale: the greater the volume of output, the lower the cost of each unit produced.10 These economies of scale were achieved by successively larger investments in ever more specialized machinery and organization, but they came at the price of increasing vulnerability to shocks. The ever-larger investments were only profitable if the market for their increasing output grew apace; otherwise, extremely expensive equipment sat idle. As Adam Smith observed long ago, the division of labor, and with it the attainable economies of scale, is limited by the extent of the market.11 Disruptions in supply could be as costly as shortfalls in demand. If the supplier of a key part suffered an accident or used the buyer’s dependence on a highly specialized product to extract better contract terms, production in a whole plant could be idled.
Chandler saw clearly that the mass production firm was organized and managed to reduce these risks. He did not foresee that in the long term, the precautions made it even more fragile and susceptible to shocks.
To minimize the risk of idle capacity, the firm looked past business cycle fluctuations and carefully calibrated expansion to the expected growth in the long-term demand for its products. Increases in the minimum efficient scale of plants could thus be matched to increases in the reliable market. To minimize the risk of disruption in supply, the firm vertically integrated, making all of its key components and systems itself, and contracting for the rest with outside suppliers to produce parts to its exact specifications. To minimize the risk of disruption in the day-to-day flow of production, the firm maintained large inventories of work-in-progress on the shop floor. If a machine broke, the part it should have been making could be supplied from this stock until it could be fixed. These choices in turn entailed extremely long planning cycles. Exhaustive upstream planning was assumed to be the only way to avoid errors that would be discovered, if at all, at great cost during the downstream implementation of designs.
In recent decades, this strategy—and the precautions to which it gave rise—was first undone and then reversed in the advanced sectors of the economy and government. Starting in the 1970s, economic shocks—price spikes and quantity disruptions in essential inputs like oil and grains, along with the emergence of new competitors from developing countries—undermined confidence in the long-term planning on which steady expansion had depended. With the information technology revolution, the pace of innovation only increased, and as breakthroughs in one domain allowed advances in others, the trajectory of technological development within particular industries also became unpredictable. In-house suppliers, long at the forefront of their fields, were suddenly found to be using outmoded production technologies or making outmoded products.
LEARNING FROM CONTROLLED FRAGILITY
Amid the turmoil and confusion, it was gradually discovered that trying to minimize risk by exhaustive planning and buffering production against disturbances—effectively hiding alternatives and problems from view—could be extremely wasteful. Against the prevailing intuition and the experience of a century, it turned out that opening organizations to alternative designs early on and making the setup of production vigilant to stress, so that even small disturbances are immediately registered and the vulnerabilities they may presage quickly identified, could yield both better products and productivity gains superior to those achieved by increasing specialization, all without the penalty of increasing rigidity that specialization entailed. A kind of deliberately induced and carefully controlled institutional fragility, in which strain is immediately apparent, and its appearance triggers the investigation of causes and countermeasures well before it ramifies into catastrophe, showed the way to resilience.
Vertical disintegration was a key step. Firms sold their captive suppliers. General Motors, for example, long an exemplar of the logic and success of vertical integration, produced 70 percent of its parts in the 1980s and purchased the rest on the market. Twenty years later, the ratio was reversed.12 But relations with independent suppliers changed radically as well. Instead of forcing suppliers to compete to make a fully specified part at the cheapest cost, automakers chose to collaborate with outside suppliers at the outset in the design of new products, when their specifications were open-ended. General Motors was again—improbably, given its long record as a bully in the market—a bellwether; it acknowledged that the tables had been turned, and that to have access to the most innovative technology, it had to compete to be the customer of the best suppliers, not the other way around.13
A notable feature of this collaboration with outside suppliers is that it’s carried out in parallel rather than in series—concurrently rather than sequentially. In this way, choices within each design area prod reflection on the desirable characteristics of other, complementary systems, extending the range of alternative conceptions of the product that can be canvassed. It had been assumed that widening the scope of the search in this way would jeopardize the reliability of the final design, but it turned out there was no such trade-off. The comparison of alternatives within and across particular design domains resulted in such penetrating evaluation of their strengths and weaknesses that the final result was a better, more reliable design, arrived at faster than before because the process was concurrent.14
The organization and management of the flow of work on the shop floor was likewise transformed. Uncertainty dramatically increased the cost of holding large buffer stocks of work-in-progress inventory as a hedge against disruptions. Firms thus eliminated the buffers, at the limit producing one piece at a time. In this just-in-time or lean production, a breakdown at any one station stops all operations immediately; production can only resume when the disruption has been traced to its source and corrected, reducing the underlying variations and imprecisions that degrade quality. And just as with the expansion of the search for product design, this strategy had surprising implications. It had been assumed that the price of increased quality would be an efficiency-reducing slowdown in production. But it turned out that by building in this requirement to root out vulnerabilities, the firm reduced the incidence of breakdowns to previously unimaginable levels, eliminating the underlying sources of variability, and increasing quality and efficiency together.15
The vertical disintegration and short learning cycles that are characteristic of modern manufacturing are also coming to characterize other sectors of the economy. These developments are most salient in agriculture, especially in the diffusion of precision or no-till planting, which does away with plowing. Seeds are inserted essentially one at a time to a depth and with a dosage of fertilizer adjusted to the conditions of each “pixel” of land. This method avoids soil compaction and erosion; the results are monitored pixel by pixel; and the conditions are adjusted after each planting to increase the yields by taking account of microfield variations in drainage or soil. Precision agriculture, like lean production, generates pressure for continuous improvement, and as part of that, purposeful differentiation of inputs, opening the way to the introduction of green technologies such as sprayers targeted to the low-dose application of pesticides and herbicides.16 We will return to these developments in a case study of the reform of dairy farming in Ireland in chapter 5.
TOWARD COLLABORATIVE REGULATION
These changes in production methods have profound effects on regulation as well.
Even in a stable world—one in which regulated entities in particular domains are relatively homogeneous—it is difficult to write enduring, generally applicable rules. The difficulty is that private actors have a clear understanding of the potential effects of regulatory measures on profits and the choice of technology, but regulators do not. Incumbent firms exploit this information asymmetry to escape costly requirements, and at worst, manipulate regulations to enhance their returns and bar new entrants. As the principal in this situation, the regulator seeks to elicit from agent firms the information needed to protect the public, without being “captured” or ceding control to the better-informed, regulated party.
Uncertainty does not eliminate the information asymmetry between the regulator and regulated entity; there are still important things a firm knows about its business that the regulator does not. But uncertainty does create shared ignorance of common problems, and the significance of the residual asymmetry dwindles as the importance of the threats to both the regulator and regulated entity increases. Regulation by rule making becomes unwieldy as the conditions under which broad measures are to be applied differentiate rapidly and often unpredictably.
Uncertainty that menaces both the regulator and regulated entity arises from three broad sources. First, the rapid codevelopment of innovative products and services by independent firms linked in complex supply chains can introduce latent hazards. Outbreaks of food-borne illness frequently originate in these conditions, for example; so too do potentially lethal defects in automobile airbags. Second, latent hazards can result from the long use of a product or facility under harsh and varying conditions that could not be fully anticipated when it was admitted to the market, as in catastrophes involving nuclear power generation, pharmaceuticals, deep-sea oil platforms, commercial aircraft, or the vast damage done by the long-term application of pesticides, herbicides, and fertilizers.17 Third, uncertainty can be deliberately created by a regulatory agency, as when it proposes to set a technology-forcing standard, and in doing so, obligates itself and the entities it regulates to extend the frontier of knowledge if compliance is to be eventually possible.
In short, uncertainty requires collaboration—typically in the form of a regime that closely monitors developments and sounds the alarm at the first sign that trouble is brewing. In incident or event reporting systems, for instance, failures and anomalies in products or production processes trigger an investigation of whether the events could be the precursors to worse outcomes. These systems also suggest immediate corrective actions and open inquiry into possible design improvements.18 In a different configuration, such regimes can be used for the close monitoring of ecosystems for the protection of particular species or the environment in general.19 In the case of technology-forcing regulation, the joint evaluation of possible standards in light of a shared understanding of developments at the technical frontier both encourages a race to innovate (as capable firms compete to shape emerging requirements) and makes broad acceptance of the eventual measures more likely. We encountered such a regime in the Montreal Protocol’s TOCs in chapter 2. We will encounter others in chapter 4 and many other places throughout this book, such as setting vehicular emissions standards in California in ways that advance innovation and encouraging important emission-reducing innovations in the DOE’s ARPA-E program.
Despite these collaborative elements, regulation is still a contentious business. That is why experimentalist regimes count on a particular suite of incentives—penalty defaults and conditional offers of support—to be described in detail later in this chapter. Penalty defaults loosen the grip of the status quo on firms, and make innovation and improvement more attractive as strategies. But to succeed, they also need the support and implicit incentives provided by collaborative regimes to reduce risk and guide innovation. Both the penalties and the collaborative setting are required.
LEGALIZING PROVISIONALITY
Beyond production and regulation, decision-making premised on provisionality requires a new and distinct kind of law—precise and reliable enough to allow coordination in pursuit of joint goals while protecting the parties against the vulnerabilities that cooperation creates, yet not requiring in advance the kinds of commitments to particular actions that uncertainty forecloses. Since the 1990s, contract and administrative law have evolved in this direction. But like the changes in the organization of production, the legal changes have been patchy, and, unsurprisingly, most pronounced in areas like information technology and biotech where the new methods of production have advanced the farthest.
Take contract law first. In a standard contract, the parties specify exactly what is to be exchanged for what, and when. The failure to perform as obligated triggers traditional penalties, and in contracts between sophisticated parties, possible breaches and sanctions can be elaborately detailed as well. Under disruptive uncertainty, by contrast, parties are by definition unable to specify their respective obligations, as they would in a standard contract. Breaches and penalties can’t be specified in advance either when it is initially unclear what can be done. Instead, as in experimentalism generally, the parties agree on broad goals and a regime for exploring the most promising approaches by regularly evaluating the unfolding prospects of success. The regime provides for periodic, joint reviews of progress toward interim targets or milestones as well as procedures for deciding whether to proceed or not, and with what exact aim, along with mechanisms for resolving disagreements. By exchanging this information, the parties clarify the shared goal, and improve their assessments of one another’s capacities and reliability.
This kind of information exchange regime is at the heart of a new type of contract, dubbed “contracting for innovation.”20 Such contracts are routine, for example, in collaborations between a small biotech firm that specializes in some aspect of therapeutics or vaccines and a big pharma company that specializes in organizing clinical trials, regulatory approval, and the manufacture and distribution of eventual products. The information-exchange regime is designed to encourage deliberation through the full disclosure of all available information and full ventilation of all sides of a disagreement. Milestone determinations—resulting in decisions about whether or not to proceed with a project—are made by a committee composed of equal numbers of representatives of both firms with full knowledge of the day-to-day developments. Decisions are by unanimous consensus. A doubter can thus demand additional information and explanation simply by withholding assent. At the same time, obstinacy or manipulation will quickly be exposed. In the case of deadlock, the decision is escalated to senior managers, who have no direct knowledge of the project and will decide on the basis of the evidence in the record. In this context, managers are likely to look with extreme displeasure on subordinates who hold up decisions and waste time with unfounded objections, and anticipation of this reaction disciplines the use of the—time-limited—veto power. The consensus requirement here is intended to be information- or deliberation-forcing, not, as in the UN General Assembly, to protect every party against unwanted change.
In time, the operation of the information-exchange regime reshapes the nature of the collaborative relation. As collaboration progresses, the parties’ mutual reliance increases. A partner that has not participated in the efforts will scarcely be able to cooperate as fully and effectively as one who has. This increase in mutual reliance increases the costs to each party of switching to an outsider, making the collaboration more and more self-reinforcing, and endowing it with a stability that, under less uncertain conditions, would have been provided by an initial, long-term commitment. In time, mutual reliance generates or activates norms of reciprocity. This trust is as much the result of collaboration as its precondition—just as the precise aims of cooperation are the outcome, not the starting point, of joint efforts.
Looking beyond contract law, administrative law has also seen a progression away from decision by formal rule making toward regulation by guidance. In this alternative framework, an agency tentatively advises private parties and public officials about how it intends to exercise its discretion or interpret its legal authorities, all while anticipating that conditions and obligations will change.21 Though notice-and-comment or “legislative” rule making had been anticipated in the Administrative Procedure Act (APA) that codified the operation of the New Deal administrative state, the widespread use of the form is relatively new, dating from the late 1960s and early 1970s.22 As the demand exploded for extensive regulation in areas affecting the whole economy, such as the environment and occupational health and safety, new agencies were empowered to make general, science-based rules premised on extensive public consultation, rather than following the traditional path of challenging particular practices one by one in administrative proceedings. It was through this adjustment to the traditional path that guidance found an opening.
Notice-and-comment rule making is highly formalized, requiring an agency to explain its purposes in great detail, expose its evidence gathering and deliberation to public scrutiny, and explain its reactions to criticism. The judicial elaboration of the original, formal requirements made a demanding process unwieldy and often unworkable. To take just one example, in proposing a new rule, an agency has to elaborate all the arguments in favor of its proposal in advance, knowing that whatever might be learned from the public exposure of its thinking, a reviewing court will consider only those arguments, in their original form, in judging the legitimacy of agency action. By the 1980s, for agencies such as the EPA, the strategy was to consult key stakeholders before announcing a proposed rule and to treat the formal proceeding as a kind of Kabuki stylization of the actual process.23
With increasing uncertainty, however, agencies were often unable to marshal in advance the kind of conclusive arguments that were formally required. But reasoned decision-making was not completely stalled; administrators were not left to choose among alternatives by “throwing darts,” as some commentators suggested.24 Rather, agencies pressed ahead by measured action, building regimes and making decisions that were provisional and calculated to lead to better-informed next steps. To make coordination possible without foreclosing the possibility of early and repeated revisions, agencies turned from rules to guidance—another category of decision-making slumbering in the Administrative Procedure Act.
Guidance, in sharp contrast to notice-and-comment rules, can be issued or amended quickly, with little, if any, formal process. Because of this informality, guidance does not have “the force of law” and “power to bind” private parties formally accorded to regulatory rules. Instead, as one scholar puts it, guidance is “only a suggestion—a mere tentative announcement of the agency’s current thinking about what to do in individual adjudicatory or enforcement proceedings, not something the agency will follow in an automatic, ironclad manner as it would a legislative rule.” Guidance thus not only permits but also demands flexibility: “If a particular individual or firm wants to do something (or wants the agency to do something) that is different than what the guidance suggests, the agency is supposed to give fair consideration to that alternative approach.”25 Similarly, while an agency may choose to depart from its guidance without formal process, it should in principle give a reasoned explanation for such departures.26 Guidance is therefore a tool for measured action.
The issuance of guidance is now a dominant form of administrative decision-making. In a recent study, leading public and private administrative law practitioners concurred that guidance is “essential to their missions.”27 A former senior Food and Drug Administration official could not “imagine a world without guidance,” for example, and according to a current official of the EPA, guidance is “the bread and butter of agency practice.”28 Guidance in its countless forms—advisories, memos, interpretative letters, enforcement manuals, FAQs, or even highlights—is nowhere systematically collected, but a rough estimate is that the number of pages of guidance that agencies produce overshadows “that of actual regulations by a factor of twenty, forty, or even two hundred.”29 In chapter 4, we will see this kind of guidance at work as the EPA grappled with the problem of how to control sulfur emissions from power plants. Its statutory authority was based on the old idea of administrative oversight, but when the EPA was most effective, it operated in guidance mode.
Though guidance has become a dominant form of administration, we should note that it remains controversial. Guidance can certainly be abused. To advance their institutional interests or those of favored constituents, administrators may use guidance too flexibly, sidestepping the procedural requirements by which policy is normally changed and disregarding the consistency required by the rule of law. A related fear is that because the provisionality of guidance documents makes them difficult to challenge in court, agencies can use guidance to evade not only the preissuance notice-and-comment process but also postissuance judicial review.30 Over the years, well-established groups of practitioners have proposed working solutions to these problems, but debates continue over the proper limits to the use of guidance and how best to ensure that those limits are respected. In the United States, these debates are entangled with a larger constitutional division between the Left and Right over the legitimacy of the administrative state, and thus they are not likely to end soon.31
The practical conclusion we draw is that administrative law’s use of avowedly provisional decision-making will continue to expand as it has over the last few decades, even while important questions remain open.
Experimentalism as a Form of Deliberation
As we have seen, experimentalism has become a new kind of organizational principle. But because experimentalist organizations embrace uncertainty, they inevitably invite doubt—doubt about whether the rules they are currently following serve their goals, and whether the goals themselves should be reconsidered. In this section we consider experimentalism as a kind of deliberation, the distinctive form of discussion by which these doubts are clarified.
A working definition of uncertainty is the condition in which experts disagree, and that condition is the starting point of deliberation in experimentalist settings. Between doubts about the facts and doubts about the theory, no one knows for sure. Deliberation takes such disagreement for granted, but it does not devalue expertise. It treats experts not as repositories of fixed wisdom but rather as guides to investigation. Lay experience of problems and possible solutions can be as valuable in such inquiry as the contending views of the experts themselves.
In experimentalism, deliberation is typically organized as peer review, in which actors of equal standing—all with experience of the problem, though of different kinds, and all with a stake in the outcome—evaluate an identical situation, consulting experts as needed. Deliberative peer review does not produce certainty. But it does dispel enough doubt to enable action that both addresses the problem and yields information about how to better the response: measured action.32 Such deliberation is a deeply collaborative venture in which participants expect their minds to be changed and learn from their differences. In this kind of deliberation, Dewey’s shoemakers and shoe wearers get down to cases. Expertise is neither venerated nor vilified but instead seen as the indispensable resource it is.
Though deliberation has a venerable history in both political theory and democratic practice, it has recently run into heavy weather. Amid ever-spiraling political polarization and a crisis of disinformation, many have raised doubts about the possibility of deliberation, not only among the public at large, but even within hallowed institutions, like the US Senate, specifically designed for that purpose. This skepticism finds implicit endorsement in a large body of recent academic work that focuses on the irrationality of individual decision-making—ignoring institutionalized reasoning as a corrective or substitute for individual choices.33 Indeed, where group decision-making is mentioned in this literature, it is mostly linked to the dangers of groupthink, in which the pressure to conform compromises deliberation and may even open the way to extremism.34
Against this backdrop, our argument that deliberation is key to decision-making under uncertainty may seem like magical thinking. But while there are grounds for this skepticism, there are also good reasons to resist it. In fact, doubts about the possibility of deliberation in public life have prompted new research, coupled with demonstration projects, suggesting that deliberation on the lines of peer review is far easier to organize than concerns with polarization may lead us to expect. In studies of “deliberative polls,” for example, a group of some 125 citizens, randomly selected to mirror the electorate as a whole, is provided with curated, balanced materials on a particular, salient controversy. Participants spend several days considering the topic in randomly assigned small groups, interspersed with plenary sessions in which they engage experts on questions that arise in group discussion.35 The back and forth in deliberative polls between information gathering and reason giving about the implications of what is found closely approximates the process of peer review, with the qualification that there is more deference to experts as guarantors of sound, uncontroversial thinking in the polls than in peer reviews, where deliberation is convened precisely because expertise alone has hit its limits.
The results of these studies are encouraging. Surveys of participants’ views before and after the process regularly show improved understanding of the issue along with considered shifts in views that belie the effect of groupthink.36 However much one doubts that these deliberative processes can be massively scaled up to transform electoral democracy, as their most determined supporters suggest, the success of deliberative polling in many countries shows that under broadly favorable conditions of the kind readily available in experimentalist settings, deliberation comes naturally.37
To illustrate how deliberative peer review can take root in practice, we turn now to a case study by social scientist and law professor Daniel Ho that explicitly tests the performance of experimentalist peer review in the reform of a food safety program facing significant challenges in Seattle and King County, Washington.38
A CASE STUDY IN PEER REVIEW
The problem of frontline discretion is inherent in bureaucracy and high on the list of attributes said to be its ruin. Rules are made at the top of hierarchies, but interpreted and applied at the bottom. It is the immigration judge deciding on an application for asylum or the administrative law judge reviewing a case for Social Security disability benefits who determines what the rule maker intended. If anything, there is even more room for discretion when frontline decisions are taken informally, on the spot. The teacher behind the closed classroom door and the police officer on the beat—sometimes called street-level bureaucrats—make nearly unobserved decisions deeply affecting the life chances of those before them. The obvious remedies are more detailed rules or fewer but unconditional ones. Both fail. Multiplying rules only leads to conflict among them and invites further discretion, while mandatory and uniform requirements, applied without regard to context, can have disastrous consequences, as in the case of mandatory sentencing requirements in some US criminal proceedings. When people give up on bureaucracy and large organizations generally as a way of solving any but the most routine problems, the apparently intractable governance problem of frontline discretion—high-level principals can’t effectively direct the actions of low-level agents—is often the reason why.
In 2014, just before the Seattle experiment, the city’s food inspection department, operated jointly with King County, was this kind of street-level bureaucracy. Inspection styles varied from pedagogical to punitive. One of the fifty-five inspectors assigned an average of 1.8 “red points” per inspection—each red point signaling a critical violation, such as a failure to wash one’s hands, that increases the risk of food-borne illness. At the other extreme, ten inspectors assigned on average more than 10 red points.39 The rotation of inspectors to new areas had little effect on the outcomes; the person, not the place, was determinative. Some colleagues filed grievances about the practices of others or went on TV to accuse the department of willfully ignoring violations in ethnic restaurants.
Food inspection in Seattle was not only inconsistent but also inaccurately lenient. In neighboring Pierce County, county inspectors (more focused on quality assurance) and Food and Drug Administration inspectors found more violations in chain restaurants than were found in restaurants of those same chains in Seattle. Since franchisors (whose reputation depends on the consistency of the customer’s experience) go to great lengths to ensure that franchisees maintain standards, faults were most likely as frequent in Seattle as in Pierce County, but Seattle inspectors found them less often. A broad review of the food inspection program in 2014 brought the underlying weaknesses of the system to light. It prompted Seattle, aware of Ho’s earlier work on food inspection, to ask for a menu of options, from which the city then chose the introduction of experimentalist peer review, with a randomized control to allow for the careful evaluation of results.
The design was to have two inspectors—the peers—spend one day a week together independently evaluating restaurants or other establishments at the highest level of risk for food-borne illnesses, and adjusting their understanding of what they had seen through a discussion of disagreements after each inspection and over lunch—that is, via deliberation. Pairing inspectors eliminated the possibility that divergent judgments reflected different facts. Supervisors as well as frontline workers were included in the experiment to emphasize, as experimentalism envisages, that reform was to encourage learning throughout the organization and allay any suspicion that the true purpose of the initiative was to increase management’s control of the frontline staff.40
The original reform design was modified a few weeks after the start of the experiment on the discovery that disagreements were even more frequent than anticipated—peer reviewers were disagreeing on code implementation 60 percent of the time—and that the disagreements often involved thorny red point questions that pairs could not resolve in discussions. Weekly “huddles,” originally intended as brief meetings to resolve logistical issues, were expanded into problem-solving sessions to consider hard cases.41 The huddles included all the frontline inspectors and the various levels of managers participating in the peer review experiment.42 They collaboratively tackled questions of code interpretation, occasionally seeking additional advice about food science and law from Ho’s team at Stanford or about specialized topics, like the organization of outbreak investigations, from outside experts.
While a few questions could be resolved solely by a close analysis of the Food Code, ambiguities in words were often linked to ambiguities of substance. Was vacuum-packed meat, sealed in a plastic bag, sufficiently separated from food stored immediately below? What about shelled eggs in the same position? In the huddles, the peers worked toward resolutions of these ambiguities by evaluating their own experiences with the support of the teams’ investigations. Pictures of actual cases were frequently used to keep the particulars of problems clearly in view.
The huddles proved to be the engine of deliberation in the experimentalist reform, providing the kind of forum for discussion of concrete differences among informed and engaged participants, in consultation with experts, in which minds are apt to change. The consistency of inspection judgments improved as a direct result of huddle deliberation. The rate at which peers disagreed on red meat violations, for instance, fell almost to zero during the weeks in which the huddles focused intensively on food contamination. The accuracy of inspections improved as well. King County scores converged with those of Pierce County, which had already undergone more extensive quality assurance efforts. Tellingly, the convergence was driven by the greater willingness to award red points on the part of inspectors habitually hesitant to do so—members of the “lenient” group in King County.43
The huddles changed the nature of the food inspectorate as an organization. The organizational change involved a shift in the understanding of inspection from a “checklist” approach, in which the frontline worker decides whether or not they are seeing a code violation, to risk assessment, in which the staff, together with outside experts, establish the conditions to consider in judging whether a particular situation violates safety requirements. An encounter with taro root, a staple in African, South Asian, and Oceanic diets only recently introduced through ethnic restaurants to Seattle, encapsulated the motive for the change. The checklist question was whether to class taro as a “potentially hazardous food” whose temperature has to be controlled to avoid the growth of microorganisms. But investigation showed that the risk of taro becoming contaminated also depends on moisture and pH. A temperature requirement alone would be insufficient; in isolation, it could be dangerously misleading. The risk assessment answer to the binary choice posed by the code was instead to identify the risk factors and check that the establishment gives them due consideration.44
The new understanding of inspection as risk assessment also led naturally to an increasing reliance on guidance as the form of regulation: peer reviews call attention to consequential conflicts in code rules; the huddles, in collaboration with the specialist teams, elaborate risk-based clarifications and translate these, as part and parcel of developing training modules for the issue, into guidance documents; and inspectors embrace the guidance because it incorporates their own experience to surmount the difficulties they have faced.
PEER REVIEW, DELIBERATION, AND ORGANIZATION
This case study in Seattle illustrates four key aspects of the mutually supportive relation between experimentalist organization and experimentalist deliberation.
First, the introduction of peer review turns hierarchies into experimentalist organizations that are neither top-down nor bottom-up. With peer review, proposals to change the rules or their interpretation can come from either the frontline inspectors or supervisors. There is no clear superior. As bureaucracy gives way to continuing exchange across levels of the organization, the governance problem of street-level bureaucracy becomes tractable. When frontline workers collaborate in revising the rules, they exercise discretion in the open, justifying their choices with good arguments.
Second, deliberative peer review depends on the continuous interplay of disciplined investigation and reasoned interpretation of what it brings to light. Until huddles allowed consultation with experts, disagreements among inspectors, many with years of experience, went nowhere. Consultation resolved disagreement, though often by recognizing the need for risk assessment, which entails continuing inquiry into possibly dangerous circumstances and what to do about them. In this setting, experts do not crush deliberation with pretensions to technocratic authority. Rather, they enable it.45
Third, there is a feedback loop: experimentalist organization encourages deliberation, which in turn encourages further experimentalist reform. The upshot is an institutionalization of provisional decision-making. Introducing the peer review of inspections exposed disagreements whose persistence, having become routine, had hidden defects in decision-making from view. Coming to grips with those disagreements drew attention to the defects; grappling with defects led to deliberation in the huddles; that deliberation gave rise to new routines in inspection and training, and a corresponding shift to administration by guidance in recognition of the need to regularly revisit practices.
Finally, peer review revealed a capacity for self-determination on the part of frontline workers that belies widespread skepticism about deliberation. Long-serving workers, unionized in the public sector, with job security and a history of internal strife, might not be expected to change their ways. But they did. Where the frontline workers saw traditional quality control as “top down,” they saw peer review as “bottom up; we get our hands greasy.” It helped them reconnect to their profession and peers. Huddles allowed open discussion of complex and controversial distributional issues, such as whether particular regulations unduly burdened ethnic restaurants, which had been talked about furtively or not at all. Ho is emphatic that “the group dynamic of peer review may be a particularly effective way to unsettle and disrupt longstanding habits. Individuals can be more open to change than one might think.”46
Much of this book will be concerned with frontline workers—those who put policies into practice in making goals concrete and operational. But we will use the term in an expanded sense. As uncertainty increases, goods and services are produced more and more collaboratively, often with the participation of those who use them, and the scope of frontline work increases. In a medical setting, for example, the frontline workers or street-level bureaucrats are traditionally doctors and nurses. Today the group would include patients and patient support groups as well, since they frequently participate in elaborating new therapies or in testing the safety and effectiveness of new drugs. This difference in the scope of the term aside, frontline workers, however subordinate they may be in society or their organization, are expert in some aspect of the problems that concern them. That expertise may be distinct from, and in some ways more limited than, the expertise of the superiors authorized to make rules. Yet it is often key to achieving the desired outcome. If the account here holds true, as uncertainty and the impossibility of specifying outcomes in advance generate pressure for collaboration, bureaucracy will dwindle, and with it the importance of street-level bureaucrats. Frontline discretion exercised through peer review, though, will become all the more important.
Experimentalism as a Set of Incentives
We have now explored experimentalism as both an organizational structure and form of deliberation. In this final section, we consider experimentalism as a distinctive set of incentives for experimentalist action.
To see the necessity for incentives, consider the following thought experiment. Suppose that provisional decision-making can be institutionalized and given legal form, and that within these structures, deliberation makes disagreement under uncertainty a wellspring of progress. These structures and routines force consideration of how things could be different—or more bluntly, what is wrong with the way things are now. This process is intrusive and disruptive by its nature, however much it aims at improvements that will ultimately lighten the load. Even if lead actors are already engaged in related activities in some domains, they may hesitate to extend the scope of that engagement to climate change or other areas of public concern. Laggards, already struggling to acquire needed capacities or resigned to doing without them, may reject new responsibilities out of hand. In short, even in a world where the background capabilities for experimentalist governance are increasingly recognized as essential, many actors—perhaps most—will prefer the status quo to new and demanding regulatory responsibilities, and the wavering or outright opposition of some will discourage participation by others.
A penalty default is a sanction designed to break the grip of the status quo and encourage participation in experimentalist problem-solving when the public interest requires it but immediate self-interest does not. The most common penalty default is exclusion from a market through the denial of a license or of a certificate of conformity with standards or regulations. But the sanction could include loss of decision-making control to an outside authority or other draconian punishment—in terrorem, as lawyers say. Such penalties so limit freedom of action that actors will almost always prefer to work toward a feasible alternative, however uncertain initially, that reflects their preferences rather than suffer the sanction. Penalty defaults make it risky for capable actors to fail to make good faith efforts to achieve demanding results, and for less competent ones to fail to make any progress at all toward improvement goals once they have been set. At the same time, penalty defaults do not sanction the failure to meet targets whose feasibility was unknowable at the outset, nor the good faith efforts of laggards to improve, even when the goals of improvement are not fully met.
To discourage obstruction while encouraging cooperation and improvement, penalty defaults combine an unconventional kind of penalty applied as an unconventional kind of default rule. Conventional penalties are designed for a world of risk rather than one of uncertainty. In this world, actors know the odds and are fully capable of carrying out whatever decision they make; the actors are, at least for the range of decisions currently before them, omniscient and omnipotent. Under these conditions, a penalty is set high enough so that the costs of violation (discounted by the probability of detection) just outweigh the gains from breaking the rule.
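The conventional calculus just described can be written out explicitly. As a sketch in our own notation (not the book’s): let G be the gain from breaking a rule, p the probability that the violation is detected, and F the penalty. The conventional model then chooses F so that the expected cost of violation at least offsets the gain:

```latex
% Conventional deterrence condition (notation ours, for illustration):
%   G = gain from violating the rule
%   p = probability that the violation is detected
%   F = penalty imposed on detection
% A fully informed, rational actor complies whenever the expected
% penalty outweighs the gain from violation:
\[
  p \cdot F \;\geq\; G
  \qquad\Longleftrightarrow\qquad
  F \;\geq\; \frac{G}{p} .
\]
% Under genuine uncertainty neither G nor p can be estimated, so no
% penalty F satisfying this condition can be computed: the calculus
% that conventional penalties presuppose is unavailable.
```

The sketch makes the presupposition visible: setting F this way requires knowing both the gains from violation and the odds of detection, which is exactly the knowledge that uncertainty withholds.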
But this model is inappropriate in a world of high uncertainty and complexity. When the goal is to do the previously impossible, a comparison of the costs of compliance versus violation is meaningless, as neither can be calculated. Even when the goals do become more settled, the failures of individual actors often result from incapacity, not from a calculation of costs and benefits. A firm or country making its best efforts to comply with a standard or regulatory requirement won’t have a better idea of what to do because it now faces the prospect of a fine, and in many cases can’t simply hire an expert to tell it what to do either. In short, the usual incentives for fully informed and capable actors don’t work when actors face uncertainty and the limits of their own competence.
In place of a menu of differentiated penalties that make departures from rules cost more than they are worth, a penalty default substitutes a binary choice acknowledging the uncertainty of the situation: pursue the best, feasible alternative to the status quo given your situation, with support if necessary, or invite an outcome that will be prohibitively costly and burdensome. For capable actors, who stand to benefit as front-runners from advances that are incorporated into standards or regulations, the best alternative to the status quo is likely to be the pursuit of improvement in the desired direction, frequently in collaboration with others similarly situated. For laggards, the best, feasible alternative to the status quo is likely to be a renewed effort to catch up, but with the help of various technical and financial support programs.
Applying such sanctions as a default is also unconventional. Conventionally, a default is the rule that a judge or other actor imposes when the parties to an agreement have made no provision of their own for certain circumstances. In that case, the judge applies or devises the rule that maximizes the parties’ joint welfare on the assumption that this is what they would have done had they attended to the matter with the information as well as capabilities they are reasonably presumed to command.
A penalty default refers to the special case where the judge—say, in divorce proceedings—doesn’t have the information to devise the optimum rule, but the parties most likely do, even if they are reluctant to provide it.47 To induce the parties to cooperate, the judge presents a stark choice: agree on a division of assets and income, or the court will impose a settlement that will surely cause great hardship. While deliberation cannot be compelled, the prospect of this penalty makes clear the costs of failing to deliberate and incentivizes the parties to consider using what they know of their situation to devise a solution no outsider could anticipate.
In the same way, experimentalist governance penalty defaults induce reluctant parties to elaborate missing, default terms of cooperation, but with two differences. First, under uncertainty, even the most capable actors, alone or together, are far from omniscient in the sense supposed by theories of decision-making under risk. They will not know enough to adequately identify and gauge their choices, nor, that done, how to realize the ones they prefer. They will need to supplement what they do know by learning more through investigation, pooling efforts where interests allow; often this investigation will be too costly and risky for private actors to undertake without public support and facilitation. Second, there can be problems of capacity. Many actors are not omnipotent in the sense supposed by traditional decision theory either. Laggard firms, as noted, will need support in acquiring the capacities for self-monitoring that compliance with the new requirements demands.
Experimentalist governance penalty defaults are typically necessary, therefore, but in the absence of support may not be sufficient to loosen the hold of the status quo. Since the entities that provide these elements of a problem-solving regime may not be the same as those that can credibly threaten draconian penalties, the application of penalty defaults in experimentalist governance may involve coordination problems that don’t arise when the threat of an unworkable outcome is enough to get the parties to cooperate in the solution of a problem that was already within their reach.
Setting such problems aside for the moment, we can draw an important conclusion. The commitment to a demanding rule combined with the promise of continuous consultation, all under the threat of a penalty default, shifts the preferences of leading and lagging actors.
For leaders, this process creates an incentive to engage in a broad exploration of possibilities. Given the regulator’s commitment to act, inertia no longer favors the status quo. Since a rule is coming, the actors with the most confidence in their improvement strategies consult with the regulator in an effort to have their preferences incorporated into the rule, minimizing their own costs of adjustment and raising the costs to competitors with different approaches. Given vertical disintegration, suppliers specializing in the sought-after solutions will be among the most prompt and persistent volunteers. Their business model is indeed demonstrating the feasibility of the kinds of solutions they pioneer. And the prospect that at least one actor will cooperate with the regulator induces others to cooperate as well, both to secure a hearing for their own solutions and learn, through cooperation in the various review groups that the regulator establishes, what competitors are up to.
This broad participation ensures in turn that the regulator’s decision is informed by good estimates of short- and medium-term possibilities, and corrected as efforts at implementation warrant. Rules and revisions thus result from joint learning among actors, none of whom could devise a solution alone.
Laggards, for their part, will likely have kept to the sidelines as the standards are set. Once they see that compliance is feasible, many will consider bringing themselves into line, but will have trouble implementing and perhaps even fully understanding the adjustments that are needed. Imposing penalties for such violations, as though they were calculated or in bad faith, can have perverse effects. Actors who fail to meet obligations out of ignorance or incapacity may be driven to conceal shortcomings from authorities, inviting duplicity where there had only been good intentions. Penalty defaults, which eventually insist on compliance with regulatory standards, are typically forgiving in such situations, treating initial violations as presumptive evidence of incapacity, and leaving room for training, extension services, and other forms of support to weaker actors to help them learn to meet the requirements. By the same token, there are no penalties for reporting breaches of rules; indeed, timely reporting usually mitigates any eventual liability. Often the regulatory requirements are explicitly adjusted so that distinct and less well-resourced groups of actors—the most common examples are small firms and farms—can meet the necessary standards by procedures suited to their situation, and frequently in stages, over longer time periods than those set for compliance by more resourceful competitors.
But this forbearance and support has limits. Truly incorrigible actors—those that persistently fail to learn or demonstrate that they have no intention of doing so—are eventually subject to the full penalty default: exclusion from the regulatory regime and the associated market by the denial of a necessary permit, conformance certificate, or quality mark. Rents and other resources erode away from firms that persistently fail to adjust—rendering them first impotent and then nonexistent. Penalty defaults are forgiving until they aren’t.
How do penalty defaults take shape in practice? They arise from many sources, and can be effective whether embodied in formal legal sanctions or not.
Frequently the source is moral outrage, although often the precise trigger for the outrage is unclear. Experiments show that consumers are little inclined to pay a premium for “ethically” produced goods—for example, “sweat-free” T-shirts made under certifiably high labor standards.48 But the same kind of consumers will often boycott firms caught flagrantly violating environmental or labor norms. International brands with a reputation for respecting these norms are of course particularly vulnerable to such reaction; knowing this, international NGOs have become extremely adept at calling attention to corporate breaches of widely shared moral convictions.
Mobilizing public opinion in this way, Greenpeace, as we will see in chapter 5, was able to establish a monitoring regime for corporate producers of soy and beef along with their suppliers in the Brazilian Amazon, effectively limiting some aspects of deforestation, long before governments and trade standard-setting organizations could begin to achieve similar results. Smaller companies in local communities, operating under a “social license” dependent on continuing acceptance of their behavior—pulp mills in isolated forest settings, for instance—are also exposed to moral pressure, all the more easily generated and effectively applied by neighbors and employees intimately familiar with the companies’ practices.49 In any case, the countless, successful campaigns—local, national, and international—by NGOs to hold companies accountable for their environmental actions clearly demonstrate that normative concerns generate penalty defaults across a wide range of settings.
A second source of penalty defaults is law. The Endangered Species Act is one US example: listing a species as endangered can stop development in its range. Others are contained in the Clean Water and Clean Air Acts. Under the Clean Water Act, the EPA can stop development surrounding a body of water if the inflow of pollutants exceeds the total maximum daily load.50 Under the Clean Air Act, the EPA can block development plans in urban areas that persistently fail to meet standards—a penalty so onerous that it has never fully been applied, yet is credible enough to force even the most reluctant cities to act.51 Development can only proceed if the affected parties establish a mitigation plan acceptable to the regulator. In both cases, the ground-level actors elaborate the actual solution, but are induced to do so only by the certainty that they will lose their autonomy if they do not.
A third source of penalty defaults is asymmetries of power and economic position. It is to the advantage of powerful (or simply capable) actors to impose standards, rules, or codes of conduct on themselves and weaker ones alike so that customers, citizens, or the world can see what separates them, and reward the well intentioned and high performing. A commercial illustration can be found in private phytosanitary and other quality standards (such as GLOBALG.A.P.) imposed by wholesalers or large retailers on producers of meat, leafy greens, or vegetables connected to global supply chains. The “California effect” to be discussed in connection with CARB in the next chapter and the corresponding “Brussels effect” of the European Union make market access contingent on compliance with “domestic” environmental regulation, thereby setting regulatory standards for outsiders, and indeed the world.52 The United States used its market power to protect dolphins (ensnared as the bycatch of tuna fishing in the eastern tropical Pacific) under the Marine Mammal Protection Act—initially by requiring countries exporting to the United States to adopt the same protective measures used by the US fleet.
In practice, penalty defaults frequently arise from several of these sources in conjunction. The California and Brussels effects as well as the dolphin protection legislation and Montreal Protocol all wield economic and political power to back legal authority. Often the various sources of legitimacy are invoked in sequence. Moral pressure can lead a large firm to join with other producers and stakeholders in a roundtable to establish a code of conduct, including environmental and labor standards, binding on the whole supply chain. Public authorities can then make compliance with (some of) the provisions of the code a condition of access to the domestic market, thereby obligating foreign producers as well, and changes in the “private” codes are likely to quickly affect public laws—further blurring the distinction between them.
In sum, penalty defaults, though indispensable to experimentalist governance, are only part of the story, and generally the less important part. Unless provisional decision-making by deliberation—measured action—comes to be routine, penalties will be insufficient to induce learning and improvement. In a world where the actors are neither omniscient nor omnipotent, it would be foolhardy to assume otherwise.
Uncertainty as Routine
The combination of these distinctive organizational structures, forms of deliberation, and incentives, all adapted to uncertainty, provides the foundation for a new era of experimentalist governance.
In hindsight, we can see that the price shocks of the 1970s were only the beginning of an epochal shift. Since then, uncertainty and complexity have not only persisted but also increased—especially in the industries that will be central to addressing global climate change. Firms and regulators, we have argued, have come to accept both as constitutive circumstances of decision-making. Firms in advanced sectors are already practiced in collaborative design with suppliers and forms of production based on short learning cycles and deliberate exposure to vulnerability. Regulators are shifting from the promulgation of fixed rules to the issuance of guidance that invites correction. The idea that initial decisions must be treated as preliminary and subject to revision in light of experience is becoming institutionalized and legalized. Actors revising projects and regulations under uncertainty need to deliberate, and they can learn to do so rapidly, even when they have spent years assessing compliance by checklist in a bureaucracy.
Many organizations, public and private, are adopting these methods under the press of events—because their customers or clients demand it, or because competitors plan to do so. Still, diffusion is patchy, as the merest glance at the gap between advanced and stagnant sectors in the US economy confirms. Where the public interest demands it, incentives can encourage an embrace of the new methods when self-interest doesn’t. In short, a world of shocks and surprises obligates us to find a way through uncertainty by learning from the unexpected, and better equips us to respond to climate change in the bargain.
In the next chapter, we show that institutions with these capacities are at the core of some of the most successful responses to major environmental challenges, both in the United States and globally.