5

Experimentalism in Context: Ground-Level Innovation in Agriculture, Forestry, and Electric Power

In climate change and decarbonization—as in education, health care, and the provision of electric power—the problems are general, but the solutions are often local. What works in one place doesn’t work the same way or at all in another. Innovations at the frontier have to be adapted and sometimes transformed to suit the peculiarities of place. And because place-specific conditions are constantly changing, local adaptation rarely comes to an end. We call this ongoing adaptation of innovations to the economic, social, and political circumstances of a particular place contextualization.

The process of local adaptation is especially conspicuous and intrusive in climate change. The effectiveness of pollution mitigation measures varies with the seasons and climate as well as with new ways of working and living. At the same time, these measures intrude deeply into everyday life, changing how property is used, how people get about, how industrial products are made, and even how much water is available. In climate change, think of contextualization as the way decarbonization takes root in the endlessly varied soils of everyday life.

This chapter presents three case studies that exemplify the kind of contextualization that will be necessary as industrial and agricultural systems are transformed along the lines needed for deep decarbonization. First, we consider Ireland’s contextualization of EU water pollution standards through a dynamic framework of local governance. Second, we examine California’s contextualization of renewable energy, highlighting the collaboration between state regulators and local utilities and the deployers of new technologies that are transforming the state’s grid. Last, we discuss Brazil’s efforts to contextualize the idea of sustainability by devising promising ways of reconciling growth and the protection of the environment in the Amazon.

These examples of contextualization run contrary to conventional thinking about the local deployment of laws and technologies. In standard accounts, the design or conceptualization of a new machine or policy is radically separated from the process by which it is installed or implemented locally. The aim is the local replication of a model or realization of a blueprint, and success is measured by the fidelity of the new instance to the original. These accounts portray the actual process of implementation as ineffable, drawing on tacit, local knowledge instead of explicit practices of deliberation. On this picture, as in the assembly of a complicated piece of furniture, instructions point much of the way, even if they must be interpreted in light of a user’s unspoken assumptions about what the diagrams mean to show, or how to work with the relevant tools and materials. Frustration is inevitable and idiosyncratic.1

In other words, in this standard account, implementation is ultimately a matter of brute improvisation; there is virtually nothing systematic to be said about it. Even studies that explore how developing countries implement technology from more advanced nations treat the struggle to configure equipment to local conditions as a daring gamble—a venture frequently undertaken only out of ignorance of the pitfalls ahead, and successful only because adversity inspires ingenuity.2 Likewise, efforts to implement legal frameworks are conventionally seen as raw struggles for power—an extension of the political fights that led to the measure in the first place. The struggle may shift the outcome closer to the original preferences of one side or the other, but it does little or nothing to change views of a good solution.3

This view of the spread of new solutions has been repeatedly challenged empirically. From an experimentalist perspective, the premise that we can separate the conception and execution of large and complex projects is theoretically dubious. As the case studies in this chapter show, innovation spreads not by the installation of finished products or transposition of fixed legal provisions but rather through the continuation of the initial process by which technologies and legal measures were designed. Installation and implementation shade into reinvention. The tacit choices about adaptation implicit in implementation become explicit as they are subjected to continuing, deliberate discussion. Under the current conditions in climate change, this shift can have implications for the process of local decision-making as well. As traditional regulation strains and sometimes breaks under the pressure of continual local revision, rules can be adjusted and reinterpreted. Ad hoc administrative adjustments thus give rise to changes in governance in the direction of experimentalism.

The heart of the contextualization efforts we recount here lies in a special kind of collaboration: joining experts of various kinds and ranges of authority with each other as well as with local community members in the kind of reasoned give-and-take more typically associated with the practice of science. These forms of inquiry are conducted outside the laboratory and therefore are sometimes called “cognition in the wild.”4 But they are as disciplined as science in the use of peer review and constructive criticism. Deliberation in the wild does not eliminate struggles for power, any more than the practice of science fully banishes them from the lab. But the case studies show that when conventional alternatives have failed and penalty defaults loom, the availability of a deliberative framework can transform raw disputation into meaningful local learning.

The European Water Quality Directive and Irish Agriculture

Our first illustration of effective contextualization is Ireland’s management of agricultural pollution over the last two decades. Within environmental regulation, nonpoint source pollution provides the best example of the uncertainty that results when familiar conditions, each well understood in isolation, combine in unforeseen ways. Emissions from large polluters, such as power plants or sewage treatment facilities, are relatively easy to detect and control. Much more problematic are intermittent emissions from diffuse sources, such as the runoff from sporadic detergent use in scattered households. Agricultural runoff from manure, excess fertilizer, and pesticides is especially refractory because conditions vary widely among multiple sources and along the paths of pollution.

Today, Ireland is at the forefront of innovation in the control of agricultural runoff. The highly competitive dairy industry is a major polluter of water and, because of ruminants’ digestive gases, air. Ireland cannot meet its emissions targets—nor can the industry expand to realize its potential—unless this pollution is controlled. Though industry and government have often been at loggerheads, at present they are highly motivated to find mutually workable solutions. Ireland’s innovative system for the collaborative, local governance of environmental problems is the product of these pressures and opportunities.

The conviction that dairying could be a modern engine of growth came late to Ireland. Through much of the twentieth century, Irish dairy farming, like Irish farming generally, was dominated by extremely small holdings, with limited export opportunities along with relatively low productivity and incomes. When Ireland joined the European Economic Community (the predecessor of the European Union) and its Common Agricultural Policy in 1973, membership expanded market access and raised prices, leading to increased output and productivity. The imposition of EU milk quotas in 1984 prompted consolidation, yielding fewer but more efficient and specialized dairy farms that were still small—measured by farm acreage and herd size—in comparison to industrial producers. Irish dairy cooperatives, which process the farmers’ milk, consolidated in this period and grew rapidly to become first-tier suppliers of ingredients to global consumer food firms, many of which built their own processing plants in Ireland as well.5 In 2017, the country—which accounts for less than 1 percent of the global milk output—supplied almost 10 percent of the world’s infant formula market and was the second-biggest exporter of infant formula to China.6 Altogether, Ireland exports 90 percent of its dairy output.7

Many factors have contributed to the success of Irish dairying. First, Ireland enjoys a significant competitive advantage thanks to its natural supply of grass. A typical large Irish dairy farm has the lowest cash cost-to-output ratio of the key international milk-producing regions, including the United States, New Zealand, and Australia.8 The key is homegrown grass feed, which is cheaper than purchased feed. The price of grass feed is also more stable than the price of purchased feed; relying on local grass thus shelters Irish dairy farmers against a substantial risk. Second, cows that pasture on grass, instead of consuming feed based on grains and soy, produce milk solids of superior quality.9 Indeed, the grazing cow is the emblem of food production at its most natural. Third, dairy farming requires proportionally fewer imports than the transnational pharmaceutical and information technology firms that dominated the Celtic Tiger boom before playing a large role in the country’s downturn during the Great Recession of 2008. Moreover, the profits of domestically owned dairy firms remain in Ireland, while those of high-tech firms are repatriated to foreign owners. Per unit of output and exports, the agri-food sector thus makes a larger contribution than the sectors dominated by foreign investment to the balance of payments and employment as well as to regional and rural development.10

For all of these reasons, both the Irish dairy sector and its counterparts in various government departments have come to embrace the national system of grass-based dairying on family farms. Dairying has therefore found a central role in the overall development of the country—provided it can reconcile increasing efficiency with regulatory and consumer demands for environmental sustainability.11 Wariness of or outright resistance to climate change mitigation is giving way to active collaboration in developing new measures and institutions.

The high-level regulatory frameworks governing these efforts go back decades. The European Union’s Nitrate Directive of 1991 was one of the first measures to protect water quality from pollution by agricultural sources. Highly prescriptive, it set out precise nitrate concentration limits transcribed in each member state’s Nitrates Action Program. Farms that fail to comply can be fined or disqualified from the valuable EU single farm subsidy. Countries that fail to meet national limits must submit a plan for improvement to secure a temporary derogation of requirements or face the potential application of draconian sanctions typical of penalty defaults.

The Nitrate Directive with its concentration limits became an integral part of the encompassing 2000 Water Framework Directive (WFD), which is, however, generally much less prescriptive and more experimentalist in character. The broad objectives of the WFD are “good water” (including minimal pollution by the listed chemicals) and “good ecological status” (where the target status for each type of water body—such as alpine streams or freshwater lakes—is minimal deviation from an ecological norm associated with a pristine water body of that type). An “intercalibration” procedure assures that countries apply comparable standards.12 The basic unit of management is the river basin or catchment: the contiguous territory that drains into the sea at a single river mouth, estuary, or delta. Member states produce a six-year river basin management plan for each basin by a collaborative process in which public officials, experts, and stakeholders specify objectives as well as procedures for translating them into concrete activities. Each member state appoints a water director to oversee the execution of the plans. Together the water directors form a council that in consultation with the European Union’s executive body, the European Commission, directs the preparation of guidance—known as the Common Implementation Strategy—in the application of the directive. Until 2027, countries that fall short can submit a new river basin management plan at the end of each planning cycle on the grounds that the earlier approach proved technically infeasible or disproportionately expensive, or was obstructed by extraordinary natural conditions. Thereafter, as a penalty default, cost and feasibility will not excuse noncompliance.13 In short, central targets are corrigible, and penalties, though severe, are contingent on effort.

The implementation of both the prescriptive Nitrate Directive and more experimentalist WFD has proved frustratingly difficult. The adherence to “good practices” in agriculture, for instance, has often failed to produce improvements in nitrate levels; the effective, inclusive participation of local actors in the definition and continuing revision of the intentionally open-ended goals has been a major stumbling block in the application of the WFD. The Common Implementation Strategy has been revised many times.14

In Ireland, in particular, these kinds of failures triggered a series of research programs under the directives aimed at deepening the understanding and control of pollution flows at the catchment and field levels. These programs—buttressed by the findings of similar ones in other member states—have in turn helped generate a web of experimentalist institutions. The result is an integrated system for the local governance of water quality, greatly expanding public participation in environmental decision-making in the process.

The first and most important of these investigations was carried out under the Agricultural Catchments Programme, established in 2008 by Teagasc, the Irish agricultural research and extension service, in preparation for Ireland’s application for the derogation of some requirements of the Nitrate Directive. Six catchment areas, differing in soil types, geology, and types of farming, were selected to monitor and model the relations among farm management practices, the migration of nutrients from their source to various water receptors, and the resulting changes in water quality. Some three hundred farmers participated in the program, each supported in the development of a nutrient management plan by a Teagasc extension agent, who could in turn draw on the additional expertise of fifteen researchers dedicated to the project.

The Agricultural Catchments Programme found, surprisingly, that local variation in the absorption and drainage of nutrients renders general rules ineffective. Poorly drained fields with phosphorus values too low for cultivation may still pollute because of fast surface runoff, for example. Conversely, soils with phosphorus concentrations in excess of agricultural needs, and therefore likely to burden the environment, may not pollute at all because they are especially well drained.15 The policy implication is that a nutrient management plan should be only a starting point or provisional guide for an investigation in which a farmer and adviser collaborate in identifying the problems of a particular farm, devising remedies, and jointly monitoring the results.16 A second catchment study, undertaken by the Irish Environment Protection Agency (EPA), confirmed the Agricultural Catchments Programme’s findings, extended the investigation of the mechanisms of pollution transport to the geological structures beneath the soil layers, and showed that the disruption of pollution pathways is frequently a more effective means of mitigation than attempting to eliminate pollution at its source or to contain its effects at the receptor.
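The force of this finding is easiest to see when the interaction is written out explicitly. The sketch below is a stylized illustration in Python, not the programme’s actual model; its threshold and categories are hypothetical, chosen only to show why a uniform rule keyed to soil phosphorus alone would misclassify exactly the fields the catchment studies flagged.

```python
# Stylized illustration of the Agricultural Catchments Programme's core
# finding: pollution risk depends on the interaction of source pressure
# (soil phosphorus) and transport pathway (drainage), not on either alone.
# The threshold below is hypothetical, for exposition only.

def runoff_risk(soil_p_mg_per_l: float, well_drained: bool) -> str:
    """Classify the phosphorus-runoff risk of a single field."""
    surplus_p = soil_p_mg_per_l > 8.0  # hypothetical agronomic threshold
    if surplus_p and well_drained:
        return "low risk despite surplus P: well-drained soil retains nutrients"
    if not surplus_p and not well_drained:
        return "high risk despite low P: fast surface runoff carries what there is"
    if surplus_p and not well_drained:
        return "highest risk: surplus P plus fast surface runoff"
    return "lowest risk: little surplus P and good drainage"

# A uniform rule keyed to soil P alone would misclassify the first two fields.
for p, drained in [(12.0, True), (3.0, False)]:
    print(runoff_risk(p, drained))
```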

The new catchment program is part of a larger effort by the Irish EPA along with its partner institutions in water quality management to establish a cascading process of national, regional, and local consultations. The goal of this network is to select priority areas for intervention, and create local governance institutions to support the execution of the agreed-on interventions with the full and effective participation of the affected actors. The selection process and new governance institutions come together when the priority areas are subjected to a final, searching review in “local catchment assessments”: field-level examinations by the local actors themselves of the sources of pollution in given water bodies. This assessment determines the local work plan, specifying, costing, and prioritizing projects. Such a collaborative review of priorities is particularly important in rural areas. In such regions, many small and frequently diffuse sources of pollution can confound mitigation, and deep local knowledge is indispensable to a deliberate and consensual choice of which problems to attack, and in what order.17

These efforts were formalized in 2016 with the establishment, as part of Ireland’s transposition of the requirements of the WFD into national law, of the Local Authority Water Programme (LAWPRO) as a shared service between all local authorities in cooperation with the Irish EPA and other state bodies. LAWPRO supports catchment assessment and pollution mitigation by providing technical assistance to the stakeholders and helping each locale achieve inclusive engagement in the implementation of the river basin management plan.18 Agricultural problems detected by field assessments, for example, are referred to specialized agricultural sustainability advisers, who collaborate with the assessment team to help the implicated farmers to improve their land, farmyard, and nutrient management practices.19 The corps of sustainability advisers links the contextualization of water management at the catchment or territorial level to the contextualization of pollution mitigation measures on the farm, completing the nascent system of local governance.20

Three key lessons can be drawn from these experiences with the regulation of agricultural sources of water pollution.

First, penalty defaults and high-level framework legislation—in this case, the WFD—orient initial action and incentivize the creation of new governance instruments for the local contextualization of general policies. Making those institutions work in practice, in particular places, requires the continuing revision of the initial plans in light of experience. The recent flurry of institution building in Irish water regulation—the culmination of systematic investigation and hard experience—was preceded by many false starts and misdirected half measures. Experimentalist governance offers principles of design for these institutions, not blueprints for their construction.

Second, the Irish example illustrates how contextualization supplements rather than substitutes for higher-level decision-making and procedures. LAWPRO review modifies targets set by national and regional reviews, and how and in what order they are approached, but it does not dispense with them. Local authorities and stakeholders are not free to disregard the national framework; local interventions must remain accountable to more general procedures. In this setting, lower and higher levels interact dynamically, correcting and evolving with each other.

Third, these efforts demonstrate the value of blurring the distinction between the regulation and provision of public services. In the Irish case, regulators and farmers frequently work together. Dairy farmers in the catchment projects prepare nutrient management plans with the support of specialist extension agents, who themselves consult with catchment specialists. Farmers with environmental problems respond in collaboration with newly formed catchment assessment teams and a new corps of specialist sustainability advisers. Unlike traditional extension agents, who propagate consolidated expertise, these new specialists—jointly developing improvement plans with individual farmers and each other—reconsider their own understanding as much as apply it. Collaborative investigation is necessary precisely because the current rules and best practices run out; there is no more law to apply because of gaps or other limits in the pertinent texts and jurisprudence. Authorities must improvise instead of looking to codified standards because establishing what should be done goes hand in hand with developing the understanding and capacity needed to do it. The inclusion and empowerment of local actors thus becomes integral to regulatory problem-solving, with potentially far-reaching consequences for the design and operation of government. Indeed, the creation of LAWPRO, as an open-ended addition to local governance, suggests how regulation under uncertainty can reshape democratic participation.

Integrating Renewables on the California Grid

Now we turn to the pivotal industry in deep decarbonization: electric power. Any program of steep emissions cuts will involve massive electrification because it is easier to control or avoid emissions at power plants than at other sources.21 Though decarbonizing the electric power supply could involve a portfolio of technologies—including nuclear power and advanced fossil fuel plants that capture carbon dioxide before it is released into the atmosphere—the countries and subnational jurisdictions leading climate policy favor renewables above all other options. Deep decarbonization through electrification has thus become synonymous with the deployment of renewables.

California’s experience with the integration of renewables in the grid is a paradigmatic case of experimentalist contextualization. The incorporation of highly variable renewable power sources on a grid changes how the grid itself operates. The size and import of those changes are highly context specific—dependent on the interactions of local conditions with each other and the system as a whole. This makes local adaptation necessary because the economically efficient and reliable use of local resources can only be determined on the basis of actual operating experience. But it also requires the continuing, coordinated adjustment of system-wide rules and supervision to ensure that local modifications do not introduce instabilities into the grid. California has become a world leader in the integration of renewables as well as energy storage and complementary technologies by developing an experimentalist regulatory framework that creates incentives for the rapid, local deployment of innovations, while creating a common language for disciplined exchange and evaluation of experience across levels of the grid system.

As in the case of Irish water regulation under the WFD, however, the road to success was paved with failures. Confident early assumptions about the massive adoption of rooftop solar power were soon belied by practical developments; even as it became clear that feasible alternatives would involve much larger units typically under utility control, the regulatory implications of this shift remained elusive because vast new quantities of renewable power required integration on the grid. Conventional designs and grid operations could not perform that task reliably and efficiently; the deployment of novel technologies, such as batteries, would be required. Yet no actor in the system was capable of predicting how these new systems would operate, or which industrial forms and business models would be viable. Incumbents, the utilities, were essential to this process yet also saw threats in the shifting industrial organization. The key California utility regulator, the California Public Utilities Commission (CPUC), felt its way forward—just as CARB did as it transformed the regulation of vehicles in California—ultimately creating new industries and new behaviors by incumbents that led to the pioneering deployment of new technologies while taking account of the demanding requirements of contextualization.

California began promoting renewable energy in the aftermath of the oil crises of the 1970s. State policy, in tandem with the federal government, initially subsidized novel renewable technologies, and created small markets for wind, solar, and geothermal power.22 Because these new technologies were deployed only at a small scale, it didn’t much matter for the grid as a whole how the policies were implemented; the grid simply absorbed the modest, new power supplies. When renewable power ramped up during windy or sunny periods, other generators backed off and vice versa.

The power crises of 2000 and 2001 transformed the politics of renewable power. They showed that traditional strategies for supplying electricity were flawed, opening the political space for new ideas. An amalgam of environmental groups, organized labor, and more conservative political interests concerned about energy security, reliability, and cost sought to diversify the sources of electricity. Renewable power fit the bill. In 2002, legislators set the goal of achieving 20 percent renewable power supply by 2017.23 As renewable technology advanced at a blistering pace and the renewables lobby gained power, the state set more ambitious goals. By 2018, California was committed to getting at least 60 percent of its power from renewables by 2030, with 100 percent of electricity coming from zero-emission sources (including renewables) by 2045. Such “100 percent clean” mandates, adopted in New York among other states, have become models for planned, nationwide clean energy goals.24

These increasingly ambitious goals assumed that power generation technology would keep improving at rates fast enough to keep costs in check. Rapidly expanding global markets produced economies of scale that made this plausible. The Energiewende, dating from the 1980s, was subsidizing the rapid expansion of Germany’s solar market, making solar energy increasingly attractive economically.25 The costs plummeted further as production shifted to China from the 2000s.26 The cost of solar photovoltaic panels per kilowatt of output decreased faster than for any major electric supply technology in modern history.27

As the technological frontier expanded, advocates for renewable energy deployment and state regulators largely assumed that meeting California’s ambitious goals would be a challenge of implementation in the traditional sense: the technologies would advance, and only the local rules and practices would need to be adjusted to allow for full deployment. Delays in the diffusion of the politically most visible systems, small rooftop solar units, did seem to be caused by problems of that sort. In the 2000s, these technologies were thought of as primed to supply half of the state’s needed new renewable energy supply, and the state had a goal of covering at least one million roofs with solar—up from just twenty thousand when the law was passed in 2006.28 But the installation program went slower than expected, and on taking office as governor in 2011, Jerry Brown created a coordinating body to investigate the barriers to the deployment of rooftop technology and codify the best practices in a series of guidebooks—one every two years.29 The task force first focused on streamlining building codes and permit applications, which were widely seen as impediments to implementation.30 As often happens when government concentrates on implementation in the traditional sense, Brown’s task force emphasized the need for coordination of agencies and the clearing of regulatory bottlenecks.

But it soon became clear that putting vast quantities of renewables into service wouldn’t just be a matter of regulatory streamlining. Because solar and wind outputs vary with the weather (frequently in somewhat unpredictable ways), they change the properties of the grid, and such changes would in turn require modifications in grid planning, investment, and operation. Moreover, because the grid itself is an interconnected network of regional and local nodes—with alternating-current electricity flowing in all directions, in ways that depend on frequency and voltage at each node—these shifts would necessitate contextualization on many different levels. By the second installment of the guidebook, in 2014, the process was focused on how local experiences in deploying rooftop solar could guide changes in top-level rules such as interconnection procedures, all with the goal of advancing implementation in the traditional sense.31

Ultimately, the diffusion of rooftop technology faced a bigger barrier: cost, and with it scalability. Because such installations were small and decentralized, they remained three to four times more costly per unit of electricity generated than larger, grid-connected solar generators.32 Being small and decentralized also meant that the electronic gear that connected these generators to the local grid, along with the generator technologies themselves, was less sophisticated than on larger solar systems and less capable of adjusting in real time as rising supplies of solar power altered how electricity flowed on the grid. The more ambitious the state’s goals for the overall use of renewable energy, the more it would need to rely on larger solar and wind generators connected directly to the grid. And because California is much sunnier than it is windy, this requirement would entail big solar farms at the scale of fifty to a hundred megawatts (or larger), not a multitude of rooftop systems sized typically at tens of kilowatts.

Bigger systems pumping more power into the grid—serving physically distant customers—would have even larger effects on the grid itself. Rooftop solar installations are designed to be passive and fault tolerant. They trip off if they sense problems on the grid, and that interruption is (for the most part) not itself a further disturbance. To the local low-voltage grid, a house with a rooftop system therefore “looks” mostly like another house—albeit one with an unusual net consumption of electricity on sunny days—and it is invisible to the high-voltage grid for longer-distance power transmission, to which it has only an indirect connection via the low-voltage network.33 Changes in the output of large grid-connected solar generators, on the other hand, interact in consequential ways with the grid to which they are interconnected; the bigger the system, as a general rule, the more intense the interaction effects that extend fully across the whole high-voltage, long-distance grid. Sophisticated technology is required to ensure that the large power source reacts appropriately to changes in the grid, and especially that these reactions do not trigger additional disruptions. As sources of this type rise in their share of the overall electricity production, the practice of power generation thus shades into grid management.

The grid management problem becomes vastly more complicated than anything in prior experience when the large sources of power are renewable.34 When power generators can be matched in real time and centrally with power usage—as happens when power is provided mainly by fossil fuel or hydroelectric generators, whose output can easily be ramped up or down to meet the expected and real load—it is relatively straightforward to plan a grid system that minimizes congestion on the key power lines and transformers. There are uncertainties, to be sure. For example, it is unknown when a fossil-fueled power plant might trip off-line due to some rare maintenance problem. Grid operators would plan for these uncertainties with a series of simulation models that mimic the behavior of the grid, along with contingency plans. If a power plant tripped off unexpectedly, extra units would be sitting idle and ready to ramp up—with the size and location of those backup units dictated by the simulation models.35 Because these models simulate grids whose operations are highly familiar, they are relatively easy to calibrate with real-world experience. In turn, these models informed and were informed by investments in the grid, such as new power lines, transformers, and sensors that all allow for a more stable flow and the reliable management of the grid. Adding renewables meant output was much less predictable, making simulation and planning for failures harder. Worse, variability in renewable supply is highly correlated among sources: the output of nearly all conventional solar generators peaks in the early afternoon, for instance. The problem of managing large, decentralized, and variable power sources on a single, highly integrated grid is compounded by the fact that most of those solar generators are unavailable when they are needed most to meet the demand for electricity—such as when the sun sets or a whole region is beset with cloudiness.
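A toy calculation conveys what this kind of contingency planning involves when generator failures are independent. The Monte Carlo sketch below uses invented plant sizes, outage rates, and load, not real grid data; correlated renewable output undermines precisely the independence assumption it relies on.

```python
import random

# Toy contingency planning with dispatchable plants: simulate random forced
# outages and check whether reserve capacity covers the lost output. Plant
# sizes, outage rates, and load are invented, not real grid data.

random.seed(0)
plants = [{"mw": 500, "outage_rate": 0.05} for _ in range(10)]
load_mw = 4000
reserve_mw = 600  # idle units held ready to ramp up

trials, shortfalls = 100_000, 0
for _ in range(trials):
    available = sum(p["mw"] for p in plants
                    if random.random() > p["outage_rate"])
    if available + reserve_mw < load_mw:  # too many simultaneous outages
        shortfalls += 1

# With independent failures this probability is tiny; correlated renewable
# output (every solar farm fading at sunset) breaks the independence.
print(f"loss-of-load probability: {shortfalls / trials:.4%}")
```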

By the late 2000s, academics had begun to predict the huge challenges that such imbalances in supply and demand would pose for the grid.36 Early in the day, before the sun rose overhead, there would be a high demand for nonrenewable energy, such as from fossil fuel and nuclear plants. Over the day, that demand would plummet as the solar supply increased and other plants could be ramped down. In turn, demand for nonrenewable energy would rise again in the late afternoon when the sun would start to set, solar generators would taper off, and people still needed lots of electricity. The daily supply curve of nonrenewable generators (the dispatchable generators that grid operators could turn on and off to fill in gaps in the solar output) resembled the bottom half of the profile of a duck in flight, with a small, fat tail in the early morning, a deep midday sag as the belly, and a neck that rose steeply into the evening as lights and air conditioners came on, only to taper off late at night, forming the duck’s head and bill, as people went to sleep and the total power demand abated.
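The arithmetic behind the duck curve is simple subtraction: the net load that dispatchable generators must serve is total demand minus solar output, hour by hour. A minimal sketch with stylized, round-number profiles (not CAISO data) reproduces the shape:

```python
# Minimal numeric sketch of the "duck curve": the net load that dispatchable
# generators must serve equals total demand minus solar output, hour by hour.
# Both profiles are stylized round numbers, not CAISO data.

demand = [22, 21, 20, 20, 21, 23, 26, 28, 29, 29, 29, 29,
          29, 29, 29, 30, 32, 35, 38, 40, 38, 34, 28, 24]  # GW, hours 0-23
solar = [0, 0, 0, 0, 0, 0, 1, 4, 8, 12, 15, 17,
         18, 17, 15, 12, 8, 4, 1, 0, 0, 0, 0, 0]           # GW

net_load = [d - s for d, s in zip(demand, solar)]
for hour, gw in enumerate(net_load):
    print(f"{hour:02d}:00  {'#' * gw}  {gw} GW")
# The bars sag toward midday (the duck's belly) and ramp steeply into the
# evening (the neck) as solar fades while demand stays high.
```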

California’s grid operators knew about this “duck curve” problem, but in the 2010s, as the state began a rapid shift to renewables, they assumed that the biggest challenges in ramping power up and down lay far in the future—in the 2020s and beyond. They assumed that the variability in solar output could be managed readily by adjusting other generators on the grid to fill in supply gaps, and convincing Californians to adjust their demand during the day. They also assumed, as a last resort, that they could curtail any oversupply of power that remained.37

To varying degrees, all of these assumptions have proved wrong. The duck curve arrived much sooner than predicted, and managing it has proved harder than assumed in the models and policy plans. These failures to anticipate the speed at which the duck curve appeared also meant that investment decisions for new power lines, storage systems, and other elements of the network hardware that could keep the grid stable lagged behind need. Those investment decisions were, in most cases, the joint product of private investors (e.g., electric utilities or independent power producers) and the two regulators that oversaw the California electric power system: the California Independent System Operator (CAISO), which oversees and operates the high-voltage grid, and CPUC, which regulates the utilities and most of the lower-voltage systems.38

The faster the state deployed renewables, the harder it became to predict grid conditions—yet politically and economically, it remained vitally important for the grid to maintain reliability while integrating renewables at an acceptable cost. By 2017 or so, the imbalance problems were becoming urgent. Curtailment started rising sharply, and that waste implied economic loss; worries about the reliability of the grid rose as well.39 With the rapid deployment of large, grid-connected renewables, the state was in uncharted—indeed in some ways unchartable—waters in integrating these new power sources into the grid.

Again, politics constrained the search for solutions, ruling out the strategy that most engineers favored: enlarging the grid to make it easier to move power as needed, clearing congestion during periods of oversupply, and bringing in new supplies during periods of scarcity. The political problems included siting: new power lines were challenging to approve and build. Just one new power line to connect solar generators in the southeastern California desert to San Diego had required nearly a decade to site, approve, and build.40 Worries about the visible blight from power lines (and fears, unfounded in science, that power lines caused cancer) were the source of enormous delays; tamping them down by moving lines underground would increase costs roughly tenfold. Moreover, the political coalition that helped cement support for California’s shift to renewables included organized labor; California unions liked in-state renewable power because it generated local jobs, and they opposed grid expansions that, for example, would have made it easier to import more wind power from Idaho and Wyoming.

Developments in the battery industry—then and now advancing at a pace even faster than photovoltaics—offered a partial solution to the integration problem. In principle, congested power lines and transformers could be utilized more fully if batteries stored the excess power and reinjected that energy later. This could be particularly important for places in California where congestion was created by excess power that was useless when generated by the midafternoon sun, but extremely valuable just a few hours later. Batteries, along with high-power electronics, could help dampen variations in voltage and frequency, making it easier to manage the grid reliably even as it shifted to renewables. As batteries got cheap enough, the thinking went, it would become less costly overall to upgrade the grid with batteries than to build extra power lines and transformers, and overbuild solar generators whose output would be curtailed during periods of congestion. Politically, this approach would also lower the risk that jobs building solar and wind generators would flow to other states. As many of the world’s leaders in advanced batteries, power electronics, and grid controls were based in California—for instance, Tesla—a battery solution could generate economic advantages within the state as well.

While the logic was straightforward, actually using batteries to make the grid more capable of integrating renewables depended entirely on the local context—the moment-by-moment types of congestion experienced on each part of the grid. Translating the rapidly strengthening political coalition in Sacramento and Silicon Valley that favored big deployments of batteries as industrial policy into workable solutions for the grid depended on extensive local planning informed by detailed, current information. Provisional, top-level goals for deploying renewables and batteries could still be set using formal models that simulated the grid. But achieving those goals—and setting new ones that better reflected actual operating experience—required regulators and the deployers of batteries to pay close as well as constant attention to the lessons of practice.

TWO WAVES OF BATTERY DEPLOYMENT

The big push to utilize batteries across the grid began in the 2010s. The first wave of efforts focused on inducing the procurement of the emerging technology by utilities—encouraging them to experiment with the installation of many different kinds of devices in many different settings, thereby driving down the costs and learning the value of various uses. That uptake of devices has been faster than expected, and efforts, still in the early stages, have shifted in a second wave to creating a regulatory framework that encourages a more efficient deployment of batteries yet safeguards the integrity of the network.

The instrument for the first wave had its origins in a 2010 law mandating CPUC to create an energy storage program to support the integration of renewables.41 The law, only a few pages long, set no targets. Instead it outlined a process by which CPUC would develop statewide goals through local planning. Each utility was required to procure storage systems and document how these storage devices functioned when connected to the grid.42 This kind of technology forcing under uncertainty recalls CARB’s transformation of the California vehicle market in the 1990s, discussed in chapter 4. The big difference is that in the case of CARB, vehicles did not interact with each other or the road network (apart from the need for charging stations), and thus CARB could concentrate more exclusively on the technological frontier and maximizing the deployment of devices on the road. In the CPUC battery case, by contrast, the storage devices and grids very much interact with their context. More devices, alone, would not cause a technological revolution.

Like CARB, CPUC began with an exploratory evaluation of possible uses, focusing on storage systems generally, not just the dominant lithium chemistry. Then (and now) lithium-ion batteries were thought to be the technological winner, but better rivals might also thrive if given a chance, such as flow batteries that pumped liquid electrolytes through the device. From there, again like CARB, it set a provisional goal (a total procurement of 1,325 megawatts by 2024). That top-level goal included planning targets for each of the state’s three utilities, and subtargets for the portion of energy storage systems that each utility would deploy on the medium-voltage transmission system, lower-voltage distribution system, and at customer sites.43 This approach was designed to ensure maximum efforts at contextualization: each utility would be required to identify novel projects with many different use cases, technologies, and voltages.44 To underscore the centrality of novelty to the program, CPUC also expressly prohibited the inclusion of any projects that used the one electricity storage technology that was already technologically mature: large pumped hydro.45 In addition, CPUC required that the utilities contract at least half of their battery projects from independent suppliers. CPUC was creating an industry of the future, and feared that industries of the past (regulated utilities) would not be creative enough to understand and demonstrate novel battery applications—especially where success might eat away at the traditional regulated utility model.46 A challenge in creating these industries of the future, of course, was that they would be deploying devices on grids controlled by the incumbents.

CPUC planned procurement rounds every two years, starting in 2014, so that the rules and choices in each round could be updated with local information learned through early procurement. In parallel, CPUC required a comprehensive evaluation of its whole battery storage program every three years, starting in 2016, so that “as more experience is gained … lessons can be applied.”47 Just as with the Montreal Protocol’s TOCs that evaluated exemptions for critical uses of ODS, discussed in chapter 2, CPUC created a compliance safety valve: a utility could delay meeting up to 80 percent of its planning target by showing that under its current circumstances, the target was for the moment infeasible.48

As CPUC turned from planning to implementation, information from the local levels often flowed in much faster than CPUC’s carefully planned biennial cycles of updates. For example, in just a year—from 2013, when CPUC set the first quotas, to 2014, when it approved the utilities’ first deployments—it became clear from the slate of proposed projects that a new procedure was needed for compensating utilities for the loss of electricity sales from power that was shunted into battery projects.49 After 2014, as actual procurement advanced, CPUC also learned that the expectations about the likely sites for battery deployment were wrong. At the start of the program, most experts thought that storage capacity would chiefly be deployed on the parts of the grid under direct utility control, including the higher-voltage systems, because that is where electricity flows are greatest and thus the gains from control over these grid-connected systems are plausibly the largest too. Instead, many batteries were deployed at customer sites, in part because customers and the equipment vendors operating storage on their behalf quickly learned how to use these battery systems to cut costs as well as improve power reliability. In parallel with CPUC’s efforts to advance grid-connected batteries, large power users on their own were buying “behind-the-meter” battery systems that would help them cut electricity costs and make their electric supplies more reliable.

Because the behind-the-meter market was growing so much faster than expected, before it made any decisions about the 2016 procurement, CPUC adopted new accounting methods to allow utilities a 200 percent “ceiling” for overcompliance in deploying projects at customer sites, with overcompliance usable to offset procurement quotas elsewhere on their systems.50 (As in the sulfur case in chapter 4, what looks on the surface like a market—the trading of compliance quotas—is in fact active, conjoint learning by the regulator and regulated about how to set the rules, and the “trading” that follows is relatively inconsequential optimization within those rules.) At the same time, CPUC adjusted a state subsidy scheme that helped pay the cost of battery installations so that new actors providing electricity to users in competition with the utilities, known as Community Choice Aggregators, could be engaged more fully in the battery revolution.51 Once again, the industrial policy of CPUC put it at odds with the utilities, for Community Choice Aggregators were designed to allow communities to take more control over their own procurement of energy services and shrink the revenues that might flow through utilities.
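The accounting logic at work here can be expressed as a short calculation. The sketch below reflects our reading of the rules described in the text (customer-site overcompliance credited only up to a 200 percent ceiling, surpluses offsetting quotas elsewhere, and the 80 percent deferral safety valve), using hypothetical megawatt figures rather than CPUC’s actual formulas.

```python
# Our reading of the storage-procurement accounting, with hypothetical
# megawatt figures: customer-site overcompliance is credited only up to a
# 200 percent ceiling, surpluses offset shortfalls elsewhere, and up to
# 80 percent of the total target can be deferred as a safety valve.

def compliance_position(targets, procured, ceiling=2.0, max_deferral=0.8):
    position = {}
    for domain, target in targets.items():
        got = procured.get(domain, 0)
        if domain == "customer":
            got = min(got, ceiling * target)  # cap credited overcompliance
        position[domain] = got - target
    surplus = sum(v for v in position.values() if v > 0)
    shortfall = -sum(v for v in position.values() if v < 0)
    uncovered = max(shortfall - surplus, 0)
    within_valve = uncovered <= max_deferral * sum(targets.values())
    return position, uncovered, within_valve

targets = {"transmission": 100, "distribution": 60, "customer": 40}   # MW
procured = {"transmission": 40, "distribution": 50, "customer": 95}

print(compliance_position(targets, procured))
# ({'transmission': -60, 'distribution': -10, 'customer': 40}, 30, True)
```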

The second wave of contextualization began in 2018. With the conclusion of the second biennial procurement, the state was exceeding its deployment goals for batteries, yet still had not learned much about how batteries would transform grid operations. Under the procurement program, the utilities were required to operate the new equipment in a program-specific experimental zone. Neither the regulator nor the investing utilities knew how procured devices could be put to efficient economic use while also meeting the requirements of the ongoing reliability of the grid. A combination of regulatory mandates and guarantees of fair return on investments allowed for the necessary experimentation while limiting the scale of deployments so that adverse effects on the network would be safely contained. To scale deployment beyond this sheltered zone would require new rules to allow for the efficient use of the technology while continuing to assure network reliability.

The central issue in devising these rules is a shared language of evaluation; as explored in chapter 3, experimentalism requires that those running the experiments and those reviewing them be able to understand what is being learned. The utilities that manage their local and regional grids under the supervision of CPUC, the manager of the statewide high-voltage grid (CAISO), and the private companies that deployed at least half the battery systems all needed the capacity to understand how battery systems might create value and affect the reliability of the grid, and they needed a common language to communicate that understanding.

This language took two forms. One, focused on the benefits from deployment, is “value stacking.” Batteries can do a lot more than just store energy. If coupled to advanced software that can communicate with the grid and power users, a battery storage system can perform many other functions or services, from arbitraging time-sensitive price differentials to stabilizing grid voltage and frequency. Some of these services can be supplied simultaneously, while others can only be provided in sequence. In practice, no single service typically requires more than a modest fraction of a storage system’s available resource, and no single service is profitable enough on its own to amortize the cost of investment. The economic use of storage therefore requires rules that allow the combining or stacking of different uses, with the proviso that the combination of committed services not put the stability of the network at risk.
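The structure of the problem can be put in miniature: choose the stack of services that maximizes revenue, subject to the device’s capacity and to compatibility constraints among services. In the sketch below, every service name, revenue figure, and power requirement is invented for illustration; only the logic of stacking is the point.

```python
from itertools import combinations

# Toy value-stacking problem: pick the set of services a battery commits to,
# maximizing revenue subject to (i) the device's power capacity and (ii)
# pairwise compatibility. All names and numbers are invented.

services = {
    # name: (revenue $k/yr, power reserved MW, conflicts with)
    "energy_arbitrage":   (120, 8, set()),
    "frequency_response": (90, 4, {"voltage_support"}),
    "voltage_support":    (60, 3, {"frequency_response"}),
    "capacity_reserve":   (150, 10, {"energy_arbitrage"}),
}
capacity_mw = 15

def feasible(stack):
    power_ok = sum(services[s][1] for s in stack) <= capacity_mw
    compatible = all(a not in services[b][2] for a in stack for b in stack)
    return power_ok and compatible

stacks = (stack for r in range(1, len(services) + 1)
          for stack in combinations(services, r) if feasible(stack))
best = max(stacks, key=lambda stack: sum(services[s][0] for s in stack))
print(best, sum(services[s][0] for s in best), "k$/yr")
# No single service amortizes the device; only a compatible stack does.
```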

What makes value stacking especially difficult for regulators and storage users alike is that the performance of storage devices is highly context specific, and the exact value from combinations of services was unknowable without experience. As a leading research institute put it in 2015, “The range in value that energy storage and other distributed energy resources can deliver to all stakeholders varies dramatically depending on hundreds of variables. These variables are specific to the location where resources are deployed, making generic approximations of value difficult.”52 It is possible to distinguish adverse uses conceptually (where a commitment to one service jeopardizes serving another) from compatible ones. But the actual performance of particular devices, and hence their value (and risks) to the grid, can be known only through intense and ongoing observation in local contexts.53

The other shared language concerns the reliability of the grid and has already been discussed above: grid modeling. Grid operators use a suite of power flow models that simulate how every component (node) of a grid reacts to changes in the behavior of any of the others. Each utility or other local grid operator must maintain its own power flow models that simulate all the nodes under its control, customized to its local setting (with the help of the model software provider) and periodically calibrated to the actual behavior of each grid. These local models are regularly compared to one another under the supervision of the regulators to produce a single, contemporary, system-wide set of best practices for grid modeling even as each grid operator maintains its own grid-specific configuration.
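At the core of such models is a well-known calculation. The minimal sketch below solves a linearized (“DC”) power flow for a three-bus toy network, nothing like a calibrated utility model, but enough to show how net injections at each node jointly determine the flow on every line:

```python
import numpy as np

# Linearized ("DC") power flow on a three-bus toy network: build the
# susceptance matrix B, solve B @ theta = P for bus angles with bus 0 as
# the slack/reference, then compute each line's flow. Illustrative only.

lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]  # (from, to, reactance)
injections = np.array([1.5, -0.5, -1.0])            # per-unit; sums to zero

n = 3
B = np.zeros((n, n))
for i, j, x in lines:
    b = 1.0 / x
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])  # reference: theta_0 = 0

for i, j, x in lines:
    print(f"line {i}-{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")
# A line loaded beyond its rating here would flag the need for an upgrade or
# a redesign of the proposed device before an interconnect agreement is signed.
```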

The practice is dynamic and interactive, and begins before any large new devices are connected to a utility’s grid, including battery systems. The utility uses simulations of the performance of a proposed device to assess the values of all the services that the device might provide. These simulations of power flow on the grid also identify places where the device might create conflicts or congestion that might require grid upgrades or adjustments to the design of the device itself. Through an iterative process of project proposal, simulation of power flow on the grid, adjustment, and then resimulation, a viable project emerges. With value stacking, the economic value of the project can be assessed. With power flow models, the risks to the reliability of the grid can be assessed. The convergence is formalized in an interconnect agreement between the operator of the battery project and the grid operator, stating in effect what the grid in its various layers and nodes can expect from the project under varying conditions, and constraining how the device behaves on the grid.54 The power flow calculations made by the operator of the grid—in California, that’s the utility for medium- and low-voltage grids, and CAISO for the high-voltage grid—are the ultimate arbiter in this process. But the methods and assumptions are transparent so that third-party contractors can run their own simulators and developers can design projects without always being under the utility’s thumb. Along the way, the regulator can check these calculations as it makes decisions such as whether to approve the inclusion of a utility storage device in a utility’s rate base or the cost of storage services that a utility purchases from an independent supplier.55 This is why the concept of value stacking was so important to accelerating investment in battery systems: most of the services provided by batteries have never been valued properly in markets and thus regulators needed a way to determine whether the full range of benefits from deploying batteries could be aligned with the prudent bearing of costs. That interconnect agreement is in turn updated as needed, if only to alert the regulator and grid operator to new power service commitments, or variations in the actual performance of a device, that might eventually interfere with (or complement) other devices or the grid’s reliability.56
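The iterative process just described reduces to a loop. In the schematic below, Grid, Project, and Study are hypothetical stand-ins for the utility’s power flow study and the developer’s redesign step, not a real interconnection API; the stubs exist only to make the control flow concrete.

```python
from dataclasses import dataclass

# Schematic of the propose-simulate-adjust loop. Grid, Project, and Study are
# hypothetical stubs, not a real interconnection API.

@dataclass
class Study:
    violations: list  # e.g., overloaded lines found by the power flow run

class Grid:
    def simulate_flows(self, project):
        # Stub: flag a violation until the project is small enough.
        return Study(violations=["line overload"] if project.mw > 20 else [])

@dataclass
class Project:
    mw: float
    def adjust(self, violations):
        return Project(mw=self.mw * 0.8)  # shrink (or relocate, re-tune) and retry
    def draft_agreement(self, study):
        return f"interconnect agreement: {self.mw:.1f} MW, expected behavior codified"

def negotiate_interconnect(project, grid, max_rounds=10):
    for _ in range(max_rounds):
        study = grid.simulate_flows(project)        # operator's simulation
        if not study.violations:
            return project.draft_agreement(study)   # convergence is formalized
        project = project.adjust(study.violations)  # redesign and resubmit
    raise RuntimeError("no viable configuration; a grid upgrade is needed first")

print(negotiate_interconnect(Project(mw=50), Grid()))
```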

Introducing radically new technologies raises huge challenges for modeling because it is hard to identify the right functional form and assumptions to govern how novel devices are represented. While analogous cases can be identified, the problem is emblematic of the uncertainty discussed in chapter 3—uncertainty that is irreducible in the absence of real-world experience in context.57 One place to observe this readily is at CAISO, the high-voltage grid operator, because it makes its deliberations around modeling and the procedures for setting interconnect agreements so transparent. As the first battery projects were advancing, each required the modeling of how the battery would affect the surrounding grid and an interconnect agreement. For the high-voltage grid, this meant that the regulator (CAISO) needed to resolve novel questions such as whether batteries were generators or sources of demand, the traditional categories for grid-connected resources.58 The answer was both and neither, so CAISO created a new category of grid-connected resources, published the code it would use in modeling these resources, and outlined an approach to modeling and interconnect agreements that would be good enough until practical experience might reveal better approaches.59 As each cluster of projects in California advanced—each needing modeling and interconnect agreements—grid operators also watched as a few other jurisdictions around the world ran experiments in allowing battery interconnection. Until the California market surged in size, making it today the biggest deployer of grid-connected batteries, the largest battery experiment was in Australia, where a large system was installed at the interconnection of two grids. Comparing model-based studies that tried to predict how that Australian battery would operate with what was learned when the battery actually functioned helped reveal how little could be reliably predicted in the absence of real-world experiments in system context.60

A fundamental task for regulation under uncertainty in this setting is to strike a balance. On the one hand, the regulator is advancing a form of industrial policy; it is pushing for the emergence of a nascent industry and thus wants as much experimentation as possible.61 On the other hand, it must ensure that the experimentation needed to encourage innovation and deployment does not degrade network reliability or impose unacceptable costs on ratepayers. Thus CPUC’s 2018 provisional rules for the deployment of storage authorized utilities to provide stacked services in any combination they think useful, subject to the requirement (in rule 6 of CPUC’s “Decision on Multiple Use Application Issues”) that “a single storage resource must not enter into two or more reliability service obligations such that the performance of one obligation renders the resource unable to perform the other obligation(s).”62 In effect, rule 6 is a kind of guardrail. Deployers of battery systems are authorized to experiment with value stacking and have strong incentives to find ways to maximize the total value, so long as their experimentation does not involve multiple claims on services that could be needed when the grid is under stress. Neither CPUC nor the battery operator can predict the exact grid configurations or conditions that might affect how batteries perform in real time, but they can control how reliability-related services are offered to the marketplace and contracted. After that, real-world observation is essential to looking, in context, at whether the regulator has struck a balance that is too conservative; if so, the guardrail would need changing.63
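Read as a validation rule, rule 6 permits any stack of services except one in which two or more reliability obligations could be called on at the same time. A minimal sketch of such a check, with hypothetical field names and service windows, might look like this:

```python
# A minimal reading of rule 6 as a validation check: a storage resource may
# stack services freely, but no two reliability obligations whose performance
# windows can overlap may both be committed. Field names are hypothetical.

def violates_rule_6(stack):
    """stack: list of dicts with a 'reliability' flag and a 'window' of hours."""
    reliability = [s for s in stack if s["reliability"]]
    return any(a["window"] & b["window"]           # overlapping hours: performing
               for i, a in enumerate(reliability)  # one could preclude the other
               for b in reliability[i + 1:])

stack = [
    {"name": "arbitrage", "reliability": False, "window": set(range(24))},
    {"name": "local_capacity", "reliability": True, "window": {17, 18, 19, 20}},
    {"name": "frequency_regulation", "reliability": True, "window": {18, 19}},
]
print(violates_rule_6(stack))  # True: two reliability obligations overlap at 18-19
```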

Periodically, the real world offers extreme experiences that can test experimental deployments at limits that would be too dangerous to create under normal circumstances and too unusual to simulate reliably. Large-scale grid outages offer a unique opportunity for that kind of testing; at this writing, the most important of these were two days of statewide power shortages in August 2020, a time when hundreds of megawatts of battery projects had been deployed, each operating under interconnection agreements designed with protective guardrails. Their actual operations would help reveal whether, under grid duress, batteries could do a lot more for grid reliability than had been allowed. CAISO, the operator of the state’s high-voltage grid, ran an interagency process after those outages that mapped power flow models to the actual performance of all devices on the grid—looking in particular at whether batteries could be operated in different ways when the context on the grid required more resources. What CAISO found was that the guardrails were too conservative and led to the underutilization of battery resources in practice.64 Put differently, the rules had run out. Prior to such real-world experiences it was impossible to set an effective rule that could anticipate all circumstances. Supervision using common languages of assessment, rather than rules, was required—and when the opportunity of extreme conditions arose, that supervision made it possible to write new rules and set the context for a new wave of experimentation in real-world contexts.65

In sum, the problem for the regulator and investor in regulated storage is to arrive, initially, at an approximate estimate of the performance of the device that is detailed and reliable enough to warrant a decision on the soundness of the investment, and then to correct that understanding of the network as local experience with the device accumulates. The common languages of evaluation—value stacking and power flow models that simulate reliability—made this possible. Guardrails such as rule 6 made it safe. This is, again, a paradigmatic case of experimentalist contextualization: local adaptation is necessary (because performance in place can't be adequately estimated without place-based data), but the stability and adaptability of the overarching system depend on continuing central adjustment to these ground-level changes. Through this process, regulators oversaw a massive deployment of batteries on the California grid—deployments that, in just a few years, rose from essentially zero to the largest in the world.
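How value stacking and the guardrail interact can be made concrete with a toy search for the most valuable compliant combination of services. The service names, dollar values, and the simplification that at most one reliability-critical service may be committed are all invented for illustration; this is a sketch, not an actual market model.

```python
# A toy value-stacking search (a sketch, not a real market model): choose the
# subset of services that maximizes total value, subject to a simplified
# guardrail that at most one reliability-critical service is committed.
# Service names and values are invented.

from itertools import combinations

services = {  # name: (annual value in $k, is reliability-critical)
    "energy arbitrage":     (120, False),
    "frequency response":   (80,  True),
    "resource adequacy":    (150, True),
    "deferral of upgrades": (60,  False),
}

best_value, best_stack = 0, ()
for r in range(1, len(services) + 1):
    for stack in combinations(services, r):
        critical = sum(1 for s in stack if services[s][1])
        if critical <= 1:  # guardrail: no overlapping reliability claims
            value = sum(services[s][0] for s in stack)
            if value > best_value:
                best_value, best_stack = value, stack

print(best_stack, best_value)  # the most valuable guardrail-compliant stack
```

The point of the exercise is that the operator's incentive runs toward the largest feasible stack, while the guardrail prunes combinations that would double-claim reliability; real-world observation then shows whether the pruning was too aggressive.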

The outcome of this second wave of efforts at contextualization, still underway, is hard to predict.66 At one extreme, the deployment of storage devices around the grid could unfold in ways that radically reduce the role of the high-voltage grid itself—a vision that no longer looks quite so extreme in light of technological and political shifts. Again, the shock of events has altered the politics of experimentation—in this case, the shock created by evidence that power lines (of many voltages) have ignited deadly wildfires around the state. One response has been regulatory incentives for more decentralization. As a follow-on to batteries, California regulators are advancing experiments with clusters of decentralized energy systems.67 This exploration may or may not produce an additional set of markets for novel, stackable services. But to go by experience so far, it will almost surely clear the way for yet more practical investigation, under regulatory auspices, of more effective power management through more decentralized control, with lesser roles for classic regulated utilities and the high-voltage grid. In 2021, for example, California regulators and policy makers launched a new wave of experimental investments in microgrids that will link together local battery and renewable energy supplies in ways that could make local electric service more reliable.68 The challenges for CPUC are identical to those it faced at the onset of the battery program: a nascent industry that, in principle, could yield large social value, but whose best configurations and locations for deployment are unknowable without learning about context through deployment.69

Finally, the contextualization of renewables and storage devices on the California power grid has had large political impacts that are reinforcing experimentation. Despite the continuing uncertainty, it has fostered the plasticity of political support for innovation in energy supply and management. Constant change, not stability, creates a sense of possibility, and with it new alliances built around strengthening the political power of novel industries that could not have emerged without the disruption created by new regulatory arrangements. As these novel industries get larger, they fund increasingly powerful lobbying groups that are highly active in state legislative and regulatory affairs—groups that are building alliances with other powerful political actors, such as the backers of renewable energy and organized labor.70 Yet more disruption follows. Now the incumbents, too, are deploying high-profile projects specifically designed to demonstrate how batteries can achieve popular goals such as reducing dependence on fossil fuels.71 Since 2010, with authorization from the state legislature, CPUC in effect has orchestrated an industrial policy to nurture an infant energy storage industry, with its financial center in Silicon Valley, and to help firms identify and demonstrate growing markets for their products. Today that industry is at least an order of magnitude larger in sales as a direct result of CPUC-led procurement, and with that larger size comes more political power. Through disruption, the interests of all the major actors—including the incumbents—have changed. While disruption was easy to talk about conceptually, what made it a reality was a regulatory system that learned about technology deployment in context. When people speak of a future grid transformed by decentralized electronic devices, including batteries, they tend to focus on how the performance of those systems has improved. But performance has been a function of deployment—and deployment has moved at the pace that regulators and deployers have learned about context.

Sustainability and Development in the Amazon

For a final example of contextualization, we consider efforts to combat deforestation in the Amazon.

Deforestation in the tropics is a menace to the planet. Among other things, it disrupts the climate, destroys irreplaceable habitats, and displaces communities of Indigenous peoples and other forest dwellers. Tropical deforestation in the Amazon, Indonesia, Malaysia, and equatorial Africa alone contributes about 10 percent of total annual carbon emissions, making it second only to vehicular pollution as a cause of climate change.72 Land use protections have a greater effect here than anywhere else. Where Ireland contextualizes the meaning of "good water" in the WFD and California contextualizes the state regulation of novel energy storage devices for the local management of renewable sources on the grid, Brazil is contextualizing the law and regulation of public and private property to give concrete meaning to sustainability as the reconciliation of economic activity and respect for environmental values in the tropics. There is no place where competing ideas of development, and its relation to correspondingly diverse concepts of environmental protection, are more openly as well as urgently contested among grassroots movements, local and national governments, and national and international NGOs than in the Amazon.73

We focus on Brazil because it has embraced the most comprehensive range of measures for protecting tropical forests. Enforcement in Brazil, at times aggressively rigorous, has been the most effective among countries with similar protections, and the resulting successes and failures say the most about what does and does not work. Indeed, Brazil has been called a "laboratory of governance innovation" in these matters.74 In a first wave of reform, from roughly the end of the military dictatorship in 1985 to the early 1990s, innovations in the use of public land allowed forest-dwelling communities and Indigenous peoples to continue traditional ways that were presumed to have been compatible with the flourishing of the forest over long periods of time. In a second wave, during the two terms of the leftist president Luiz Inácio Lula da Silva, or Lula for short, from 2003 to 2010, the measures directed at traditional forest dwellers were extended, but with a new focus on commercial land use. Brazil combined the regulation of supply chains in soy and beef with rural land registration, backed up by penalty defaults such as exclusion from subsidized credit or important markets, to induce the ranchers and farmers who had settled in the Amazon in recent decades to adopt sustainable practices. These measures resulted, as intended, in a dramatic drop in emissions. But the limits of the reforms with respect to both traditional and commercial land users, exacerbated by political changes, have become clear in recent years and are reflected in increased rates of deforestation. These limits underscore again, and boldly, the zigzag course of these contextualizing reforms, typically over decades, and the need for a background consensus on general goals, no matter how thin, to learn from the failures of partial successes.

The distinctiveness of its reforms notwithstanding, the arc of Brazil's development of the Amazon exemplifies broader tendencies. In its treatment of the tropical forest as unspoiled land to settle and exploit—and in the social conflicts thus provoked and the efforts to contain them—Brazil's experience in the Amazon strongly resembles that of other countries in the forested tropics.

Indonesia is a telling illustration. Brazil shares with Indonesia a colonial past that left a legacy of large estates and a plural legal system in which customary forms of property survive in the shadows of formal, modern law. Both countries have a tradition, anchored in fundamental statutes and honored mostly in the breach, of social property, respecting the place of smallholders in society.75 Both had a developmental dictatorship in the second half of the twentieth century—the military government in Brazil (1964–85) and Suharto’s “New Order” in Indonesia (1966–98)—that brought migrants and corporations into the forest to settle, log, mine, and plant, often in violent collision with each other and the customary communities long there, and all in flagrant disregard of the environment. In both countries, international outrage at deforestation and domestic protest against abuses of power gave rise to movements to make development sustainable. In the end, both movements favored the “greening” of large property, provoking a call for enhanced support for smallholders along with the continuing, place-based contextualization of regulations and property claims.76 In this larger perspective, Brazil exemplifies the late twentieth-century paroxysm of postcolonial development that has turned much of the tropical forest into a dangerous frontier.

Politics and penalty defaults play a more salient role in our discussion of the Amazon than they have in our other case studies, largely because they more sharply constrain the definition of problems and specification of solutions in the context of deforestation. In the case of commercial land use, penalty defaults are imposed abruptly, almost as they are announced; reforms take shape as the producers, determined to escape exclusion from the market, negotiate what compliance with the requirements of sustainability will concretely mean. We thus concentrate less on a peer review of local experience and other operational routines of experimentalist governance institutions, and more on the preliminary and far less orderly learning that occurs among diverse actors at many levels. Throughout this process, the substance of reform—what sustainability means, practically, in various settings—is contextualized through exposure to contrasting views, first in the rough cast of social movements or the projects of local elites and their allies, and then in the forge of high politics.

PROTECTING FOREST DWELLERS

The military government that came to power in Brazil in 1964 aimed to bolster national security, accelerate the growth of the domestic economy, and improve the lot of land-starved peasants in the south and northeast of the country. Settling the Amazon contributed to all three goals. Settlements, together with military reserves, would obstruct foreign intrusions through the vast, unguarded frontier and limit the use of the territory as a staging ground for guerrilla operations. The slogan was integrar para não entregar (integrate it so as not to surrender it), or use it or lose it. Given the theories of economic development of the day, the benefits of exploiting the vast store of raw materials in the Amazon were as self-evident as the need for simultaneous investments in national industry to process such inputs. The unsentimental message to investors was chega de lendas, vamos faturar! (enough of the legends, let's cash in). Settlement promised to address populist demands for justice in the countryside without the inconvenience of land redistribution. From this perspective, the Amazon was terra sem homens para homens sem terra—a land without people for people without land.77

The government's initial plan was to attract a hundred thousand families to settle in a fishbone pattern along the planned Trans-Amazonian Highway, transecting the region from east to west, and the BR-163 highway, running north to south. Settlements were planned at regular intervals along the projected roads: a small village every 10 kilometers, an intermediate-sized rurópolis every 50 kilometers, and a large city every 250 kilometers. Colonists were to get a homestead, support from agricultural extension services, and access to schools and churches for their families.78

But this grand vision proved a bust. The official program drew only thirteen thousand families, and next to none of the promised infrastructure and services were supplied even for them. Only one rural city was built, at the intersection of the Trans-Amazonian Highway and the BR-163. By the mid-1970s, the government changed tack and encouraged large, corporate investment in logging, mining, and agriculture on a scale to match the national ambitions for industry.

The succession of measures produced a maelstrom of opportunities that sucked capital and people into the Amazon from the 1970s on. Migrants who started homesteading went broke and found work as loggers or miners, hoping to try again; ranchers occupied the land abandoned by the homesteaders or cleared by the loggers, and converted it to pasture. In an inflation-prone country, investors from the rich south, constantly looking to protect their wealth with holdings in land, financed settlement in the Amazon when the government hesitated. Property titles for the newly claimed territory were generated as needed. Brazil, like other Latin American countries, inherited provisions from Roman law that allowed for the acquisition of property by possession (posse) or the homesteading improvement of public land (usucapio). Property claims based on these or other grounds could be certified by the federal and state governments as well as by the various entities administering settlement programs. Absent a unified national land registry, certifications conflicted, and the land claimed in a given county could exceed its actual territory by half. A profession grew up to take advantage of this system, specializing in the production of artfully "aged" property titles that passed for real on first inspection. The end result, by the late twentieth century, was to turn the Amazon into a new frontier, with disastrous consequences for the environment.

As all of this was unfolding, the rubber tappers of the western state of Acre, at the border of Peru and Bolivia, were at the forefront of grassroots protests against the new settlers, and these protests grew into a movement that culminated in the creation of a new kind of reserve on public land at the end of the 1980s. Acre had been a center of the rubber boom that drew waves of migrants from the northeast at the turn of the twentieth century. As in many labor-scarce areas dependent on commercial agriculture, the rubber estate owners in Acre immobilized the migrants they recruited through debt peonage, backed up by violent thuggery. For decades rubber tappers resisted this bondage, overtly with periodic strikes and covertly by diversifying their activities in the forests—collecting Brazil nuts, fishing, hunting, and growing food crops for domestic consumption—in order to reduce their dependence on the company store and on rubber tapping in general. The industry saw a brief revival during World War II, but as it faded into international irrelevance in the 1960s and 1970s, this diversification became a survival strategy—only to collide with the huge inrush of new landowners, attracted by the state's guarantee of favorable conditions, which restricted the freedom to range in the forest on which the independence of the rubber tappers had come to depend.79

Under the charismatic leadership of Chico Mendes in the 1980s, the rubber tappers revived the earlier strike tradition and turned it to the defense of the forest as well as economic survival. The characteristic protest was the empate—a peaceful standoff in which the rubber tappers literally hugged trees to halt gangs with chain saws. The empates went hand in hand with the formation of a national rubber tappers' union and a determined hunt for an alliance with the domestic Indigenous movement, itself gathering strength from a worldwide assertion of the rights of First Peoples and, like the rubber tappers, in search of new models of collective or joint landownership.

Early efforts by the federal government to address the problem—by settling rubber tappers on homesteads under the colonization program—only made things worse. The exclusive ownership of a small plot proved as confining to foragers as the pressure of aggressive neighbors, with the additional complication that sales to outsiders by individual members of a tight-knit settlement, in which much land was implicitly shared, could jeopardize the integrity of a whole community.

A solution—the extractive reserve—emerged from discussions among rubber tappers, Indigenous groups, government officials, and domestic and northern NGOs. The idea, the culmination of the first wave of reform, was to lease public land to a community according to a scheme that balanced various interests. Twenty percent of the territory would be assigned to individual families to use as they liked. The remainder would be jointly controlled by the reserve’s member families, with access and proceeds divided according either to customary rules or new rules determined by a managing council representing the common interest. Because the reserve area was neither individually held nor collectively worked, it left room for the diversified and changing strategies of the forest dwellers, and because the land remained public, the community was not hostage to the possibility that individuals could sell to outsiders. The goal was to enable reserve inhabitants to continue traditional ways of life, with secure land tenure along with the benefit of such subsidies and technical assistance as might be needed to market their products or develop new ones consistent with traditional practices.

The new type of landholding marked a first, partial contextualization in Brazil of the ideas of sustainability and development. Against the prevailing "fortress" orthodoxy of northern NGOs, by which the protection of vulnerable ecosystems required the nearly complete exclusion of humans, the creation of the reserves allowed that humans could coexist with tropical forests so long as their activity made them natural stewards of sustainability.80 Arriving at this "socioenvironmental" or "neotraditional" understanding of permissible land use in the Amazon cemented the relation between "big" conservation (global in its reach, and formal and explicit in its methods) and "small" conservation (local in scope, informed by individuals' day-to-day choices, and often based on traditional or tacit knowledge invisible to outsiders).81

IMPOSING PENALTY DEFAULTS ON RANCHERS AND FARMERS

But even as forest-dwelling communities gradually came under the protection of this and similar arrangements, the pressure on the forest from ranching and farming increased, and with it the pressure for reforms specifically aimed at regulating these commercial activities. Extreme macroeconomic instability—including high inflation in the first half of the 1990s, a sharp revaluation of the currency to reduce it, and a recession ending in a full-blown debt crisis at the end of the decade—did not deter investment in larger and larger infrastructure projects. Hard times and the absence of alternatives made the Amazon a haven, and waves of settlers crowded in. At the same time, the Amazon, already connected through the export of spices and rubber to long-distance trade, was becoming ever more visibly part of the global economy, with predictable effects on the environment. The rate of deforestation, calculated annually on the basis of satellite imagery by the Brazilian Space Agency since the end of the 1980s, spiked in the mid-1990s, and then began to climb again, tracking, with a lag, the movements of the prices of beef and soy.82

The government responded to such alarms mostly through changes in the law that, unenforced, only signaled concern without requiring action. The fraction of land that rural property owners in the Amazon had to leave undeveloped was increased from 50 to 80 percent. Owners likewise were required to maintain protective buffers on riverbanks and steep slopes. Criminal sanctions could be imposed for violations of the environmental requirements; corrupt officials complicit in such violations were made subject to criminal sanctions as well. A new kind of conservation unit was created—in one variant prohibiting development entirely, as in natural parks, and in another allowing only sustainable uses. The most tangible response was the creation of additional extractive reserves on the lines of the model developed in Acre.

But in part because of these muffled official responses to deforestation, northern NGOs redoubled their engagement in Brazil in these years. The Rio Earth Summit of 1992, organized by the United Nations, drew world attention to the threats to the Amazon and other tropical forests, but familiar disagreements—over responsibility for the danger and the financing of solutions—combined with the constraints of consensus decision-making to disappoint northern hopes for a binding international regime limiting environmentally unsustainable logging. The World Wide Fund for Nature began to fill the institutional void in 1994 by bringing together forest owners, timber companies, forest dwellers, and environmental NGOs in the Forest Stewardship Council to oversee the development of a code of good practices for the entire global forest products supply chain. Certified adhesion to the code would distinguish good actors from bad ones and expose the latter to penalty defaults enforced by rich-country customers in international markets. Many similar efforts would later follow, using supply chains to extend the reach of rich-country norms to sectors implicated in deforestation, such as palm oil and soy.

In Brazil, Greenpeace pursued the same goal more directly. Without first establishing a code of acceptable conduct, the organization simply identified illegally harvested timber and denounced the reputation-sensitive companies that exported products made from it. In the 1990s, the Greenpeace campaigner sneaking through the damaged forest to mark contraband logs with luminescent paint for tracking thus replaced the tree-hugging rubber tapper as the symbol of environmental activism.83 These efforts were the seeds of national supply chain agreements in soy and beef that, over the following decade, demonstrated the strengths and weaknesses of such supply chain regulation before they were exposed elsewhere.84

The presidential election of 2002 brought environmental issues to the fore in national politics, as Lula and his Workers' Party finally triumphed after three failed bids for office. Lula had been allied with Mendes and the rubber tappers since the 1980s. The one secretary in the Ministry of the Environment retained from the outgoing government was Mary Allegretti, who had worked closely with Mendes and the NGOs in developing the idea of extractive reserves. The new minister of the environment was Marina Silva, a close confidant of Mendes in the labor movement in Acre who had then represented the state in the federal senate. The governor of Acre had likewise been a confidant of Mendes. The expectation was that the new government would shift from defensive measures assuaging foreign and domestic concerns about deforestation to an active defense of the environment and forest dwellers.85

But support for environmentalism was qualified. Like other “pink tide” leaders coming to power in Bolivia, Ecuador, and elsewhere at the time, Lula combined a concern for Indigenous peoples and their home ranges with the conviction that natural resources are the patrimony of the nation, to be exploited, to the extent economically feasible, for the benefit of all.86 To this view was added the qualification, especially pronounced in Lula’s case, that the proceeds of natural resource exploitation were to be primarily used for redistribution—to reduce poverty and compensate vulnerable groups for harms suffered, including harms connected to the extraction of resources—and only secondarily to alter the structure of production itself by investing in new industries or less harmful ways of organizing established ones. The costs to the defense of the environment along with the growth prospects of the economy as a whole would only become clear as the commodity boom that financed redistribution ran out and the political winds shifted abruptly.

A 2003 Brazilian Space Agency report of a spike in deforestation, back to the mid-1990s peak, prompted the government to create the Action Plan to Prevent and Control Deforestation in the Amazon. Initially the plan was for the period 2004–9. Through successive extensions, it remains, though much battered by recent governments, the framework for policy today. In an exceptional move for Brazil, the program reflected a whole-of-government approach: the plan was to be overseen by a council of thirteen ministers from key areas and administered within the Casa Civil, or Executive Office of the President (until 2013). This made possible the rapid and effective coordination of measures ranging from criminal prosecutions of corrupt environmental officials to credit restrictions imposed on producers in supply chains or located in municipalities implicated in deforestation. The focus at the beginning was on the "arc of fire" menacing the Amazon along its southeastern border and the disruptions caused by BR-163.

One of the principal goals was the extension of the various types of environmentally protected areas in the Amazon. At the urging of a grassroots network of small farmers' and settlers' organizations along the Trans-Amazonian Highway, together with national and international NGOs—the same constellation of actors as in Acre and, like them, fusing big and small conservation—the government approved the creation of a 5.6-million-hectare mosaic of reserves, including Indigenous lands, in the Terra do Meio (land in the middle), between the Xingu and Iriri Rivers in Pará.87 By the end of Lula's second government, in 2010, some 45 percent of the Amazon was under legal protection, half in Indigenous territories and half in conservation units created under the National System for Protected Areas, established in 2000.88 The internal governance of many of these areas remains in disarray, leaving them vulnerable to changing winds, but by international standards this has proved a remarkable effort to protect the environment.

The second major goal was immediate: the maximal enforcement of environmental requirements. In experimentalist governance, severe penalties such as market exclusion are typically imposed as a last resort—when an actor repeatedly proves unable or unwilling to meet a standard, despite support. Under the Action Plan to Prevent and Control Deforestation in the Amazon, the sequence was reversed: the penalty default of exclusion was imposed almost from the start, touching off a frantic search by producers, by government at various levels, and by NGOs for feasible ways of meeting the regulatory requirements; the NGOs' own parallel efforts to control deforestation through campaigning had led them to develop promising forms of compliance. The resulting innovations were then fitted together into the second set of contextualizing reforms, focused on the regulation of commercial activity in the Amazon.

The technological key to rigorous enforcement was improved satellite surveillance of deforestation, permitting near-real-time monitoring. The authorities were now able to intervene while illegal activity was in progress, and equipment and contraband could readily be connected to crime. The seizure of goods was significantly more effective as a penalty and deterrent than the imposition of fines, which were notoriously hard to collect. The improvement of the technology was accompanied by an improvement in the direction and coordination of interventions. Central to the enforcement efforts was the Ministério Público Federal, an elite corps of prosecutors created by the constitution of 1988 as independent from all branches of government and charged with defending citizens' rights too "diffuse" or "collective" to be vindicated by the normal legal process.89 Acting together with their offices in the states, Ministério Público Federal prosecutors were especially effective in bringing charges against corrupt officials in the environmental administration, who were often shielded by local political interests.
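The core of near-real-time monitoring is change detection: comparing successive observations of forest cover and flagging newly cleared areas for inspection. The toy pass below is a sketch only; real alert systems such as Brazil's DETER work on satellite imagery, not boolean grids, and all data here are invented.

```python
# A toy change-detection pass of the kind near-real-time deforestation alerts
# rely on (a sketch only; real systems such as Brazil's DETER process satellite
# imagery, not boolean grids). 1 = forest, 0 = cleared.

previous = [[1, 1, 1],
            [1, 1, 1],
            [1, 0, 1]]
current  = [[1, 1, 0],
            [1, 0, 0],
            [1, 0, 1]]

alerts = [(row, col)
          for row in range(len(previous))
          for col in range(len(previous[0]))
          if previous[row][col] == 1 and current[row][col] == 0]

print(alerts)  # [(0, 2), (1, 1), (1, 2)] -> newly cleared cells to inspect
```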

The combination of real-time information and enhanced coordination produced explosive—and to all appearances, unplanned—results. The Federal Police swooped in to catch deforesters red-handed, confiscating vehicles, farm equipment, and thousands of cubic meters of illegal logs or herds of cattle. Ranchers and farmers burned police vehicles and administrative outposts in turn, while the federal authorities prolonged raids into “field” interventions, temporarily occupying hot spots. In 2008, the government focused its efforts on a list of thirty-six (soon to be fifty) “priority” target municipalities, chosen on the basis of cumulative deforestation and high rates of burning in the preceding three years, and together accounting for more than 40 percent of the forest loss in the Amazon at that time. All rural landholders within the boundaries of the target municipalities stood to lose access to subsidized agricultural credit and large customers as well.

It was in managing these sanctions, and especially in determining the conditions that priority municipalities would have to meet to be removed from the list, that the government’s efforts at enforcement fused with the approach to compliance that had emerged in negotiations among NGOs, large farmers, agribusiness, and state governments in campaigns for environmentally responsible agriculture in the same years.

INVENTING COMPLIANCE

The cascade of innovative improvisations began in 2006, when Greenpeace blockaded the harbor of Cargill’s soy-processing facility in Santarém. In anticipation of this, The Nature Conservancy had entered into discussions with Cargill about the possibility of building a system for monitoring suppliers’ environmental compliance, property by property.90 This proposal became the template for a soy moratorium, which endures to the present, under which suppliers that ceased deforesting by 2006 were permitted to continue to sell grain to large domestic and international traders, provided they put their property under regular environmental scrutiny and did not clear more land.

The soy moratorium drew the attention of Lucas do Rio Verde, a rich municipality in the Cerrado (the savanna of Mato Grosso) politically dominated by representatives of large agribusiness firms and soy farmers. To avoid guilt by association with deforestation, the mayor proposed the creation of a land use map of the entire municipality and offered assistance to individual property holders in registering under it. As bad behavior by one landholder could damage the reputation of all, compliance with environmental law became, implicitly, a collective responsibility. Landholders were also encouraged to take advantage of a state law allowing them to acknowledge deficits in land use, such as a failure to fully maintain the preserved areas required under the Forest Law, and to pledge remedies within a grace period. Making such acknowledgments and pledges established eligibility for regulatory approval to engage in environmentally sensitive activities, including legal land clearing. When the federal agency responsible for environmental enforcement sued landholders in Lucas who, relying on the promised forbearance, had reported infractions, Mato Grosso created a comprehensive, georeferenced rural environmental cadastre for the state—the Cadastro Ambiental Rural (CAR)—and made registration a requirement for environmental permitting.

This agreement—permission to continue trading in the market in return for a carefully monitored pledge to stop deforestation and eventually correct environmental deficits—became the key to compliance for the priority list of municipalities. To be removed from the list, a municipality had to reduce deforestation to less than forty square kilometers a year and demonstrate that 80 percent of the rural land within its boundaries was registered under CAR, which was quickly adopted in Pará and then nationally in the 2012 reform of the Forest Law. The aim was to forgo the punishment of past wrongdoing and instead insist on continuing scrutiny that allowed for the quick detection of new wrongdoing while increasing the state's ability to plan as well as oversee environmental measures. The result featured the rudiments of an experimentalist monitoring regime. An additional benefit was to decouple decisions about regulatory permissions and creditworthiness from the vexed clarification of property titles. (The Terra Legal program of 2009 addressed this latter problem, primarily by helping smallholders acquire secure claims through adverse possession of public land. But implementation proved difficult because of the many administrative complications that had thwarted earlier property reforms.)91
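The exit test itself is simple enough to state as a check: the two thresholds come from the text, while the field names and sample figures are hypothetical. A minimal sketch:

```python
# A simplified sketch of the exit test for priority municipalities described
# above (thresholds from the text; field names and sample figures hypothetical).

def can_exit_priority_list(annual_deforestation_km2: float,
                           rural_land_km2: float,
                           car_registered_km2: float) -> bool:
    under_ceiling = annual_deforestation_km2 < 40.0            # < 40 km2 per year
    registered = car_registered_km2 / rural_land_km2 >= 0.80   # >= 80% under CAR
    return under_ceiling and registered

print(can_exit_priority_list(25.0, 10_000.0, 8_500.0))  # True
print(can_exit_priority_list(55.0, 10_000.0, 9_000.0))  # False: deforestation too high
```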

The same principles were rapidly applied to regulate the supply chain for beef in two overlapping agreements in 2009. The first was between the Ministério Público Federal office in Pará and meat-packers and slaughterhouses; the second was between the most important slaughterhouses and Greenpeace. The Ministério Público Federal in Pará sued ranchers for illegal deforesting and their slaughterhouse customers for purchasing contraband cattle; it also threatened to sue supermarkets, the slaughterhouses' large retail customers. Besieged, the meat-packers in Pará entered into Terms of Adjustment of Conduct (termos de ajustamento de conduta) with the Ministério Público Federal, which suspended prosecution so long as the slaughterhouses did not purchase from noncompliant producers and the ranchers registered their holdings under CAR. These agreements put the meat-packers, in particular, under the continuing oversight of the Ministério Público Federal, and they soon spread to two-thirds of the federally inspected slaughterhouses in the Amazon. A few months after the Terms of Adjustment of Conduct were formalized, Greenpeace created an even more demanding "zero-deforestation" regime under which Brazil's then four largest meat-packers—Marfrig, Minerva, JBS, and Bertin—agreed to purchase exclusively from CAR-registered suppliers that stopped all—not just illegal—deforesting, with compliance monitored by a purpose-built tracking system based on Brazilian Space Agency data.92
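The supplier screen implied by the zero-deforestation agreements can be sketched in a few lines: buy only from ranches that are both CAR-registered and show no detected clearing, legal or not, since the cutoff. The data structure and values below are invented for illustration.

```python
# A sketch of the supplier screen implied by the zero-deforestation agreements:
# a meat-packer buys only from CAR-registered ranches with no deforestation
# detected since the cutoff, whether legal or not. All data values are invented.

from dataclasses import dataclass
from typing import List

@dataclass
class Ranch:
    name: str
    car_registered: bool
    deforestation_since_cutoff_ha: float  # from satellite monitoring

def eligible_suppliers(candidates: List[Ranch]) -> List[str]:
    return [r.name for r in candidates
            if r.car_registered and r.deforestation_since_cutoff_ha == 0.0]

candidates = [Ranch("Fazenda A", True, 0.0),
              Ranch("Fazenda B", True, 12.5),   # cleared land: excluded
              Ranch("Fazenda C", False, 0.0)]   # unregistered: excluded
print(eligible_suppliers(candidates))  # ['Fazenda A']
```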

The combined effect of government and NGO pressure on soy and beef supply chains, together with the imposition of penalty defaults on priority municipalities, was a drop of 70 percent in annual deforestation in the Amazon between 2004 and 2014. Even careful observers concluded that such a large decline showed that “it is possible to manage the advance of a vast agricultural frontier.”93 But given the shock to the world economy from the financial crisis starting in 2007, questions emerged about how much of the fall could be attributed to policy measures. Studies taking into account changes in prices, exchange rates, the size of herds, and the area under cultivation, among other factors, did find that the policies, in total, had a significant effect—a result also corroborated by studies of particular policies.

A comparison of listed and unlisted municipalities, for example, found that listing produced a significant decline in deforestation through monitoring and increased enforcement.94 A detailed investigation of the Terms of Adjustment of Conduct agreements in Pará showed that, after the agreements, buyers stopped dealing with ranchers engaged in deforestation, and ranchers with whom they continued to do business registered with CAR well before their neighbors outside the new supply chain.95 The restriction of subsidized credit, imposed only on farmers and ranchers unable to demonstrate the intention to comply with environmental regulations, led to a sizable drop in credits to large ranchers and was associated with a significant reduction in deforestation.96

In light of these results, there was new confidence that the state and civil society could work together to design and implement effective policies for reducing deforestation. In 2008, Norway sponsored the creation of the Amazon Fund to finance projects meeting agreed-on reduction targets, and interest was revived in a UN program, Reducing Emissions from Deforestation and Forest Degradation (REDD+), which pays developing countries for results in reducing or eliminating carbon emissions.97

THE LIMITS OF REFORM

But this confidence did not last. As it turned out, these years marked the apogee of hope in these kinds of no-nonsense measures, not the beginning of a new age of self-assured management of tropical forests. Deforestation rates in the Amazon have ticked upward since 2012, though they still remain well below the peaks of the early 2000s.98 Political changes have played an important role in this pattern. The current Brazilian president, Jair Bolsonaro, an apologist for the military dictatorship and its most unsparing ideas of developing the tropical forest, has all but declared open season on environmentalism and Indigenous peoples. Stark as they are, however, these attacks have only exacerbated and laid bare important flaws present all along in the regulation of commercial land use as well as in the strategy of supporting forest dwellers by directing them to neotraditional activities.

For example, the long-term viability of traditional activities in the extractive reserves depends on both economic and social conditions: inhabitants must be able to make an acceptable living and also find such activities fulfilling. The more meager the returns to neotraditional activity relative to accessible alternatives, the greater the risk that it is discredited as inferior to more lucrative livelihoods, making its economic shortcomings even less tolerable.

Such devaluation is precisely what unfolded over time. By the mid-1990s, only a few years after the new property form was legally formalized, there were already signs of such a spiral of disaffection in the reserves. A study of two distant regions, for example—Uruará, a municipality on the Trans-Amazonian Highway in eastern Pará, and Xapuri, in Acre—found increasing cattle ranching by smallholders in both. Rubber tappers had traditionally kept some dairy cows as part of their subsistence diversification strategy, providing perhaps 4 percent of their annual income. But the rapid growth in the market for beef, better control of foot-and-mouth disease, and improvements in productivity through better pasture management made commercial ranching attractive to them for the first time. Beef prices were higher than those for annual crops and less volatile than the prices of perennials, and once foot-and-mouth disease was brought under control, ranching was less exposed to pests than farming. Cattle also proved attractive because of their liquidity—they are marketable year-round—and as a store of wealth.99 By 2010, an anthropologist doing fieldwork in Acre could see a cowboy culture, replete with the insignias and rituals of flashy belt buckles, rodeos, and barbecues, literally erasing the signs of the rubber tappers' struggles, as billboards displaying Mendes in the state capital were painted over, from one week to the next, with ads for hamburger stands and cell phone carriers.100 The conflicts have recently broken out into the open: deforestation rates have shot up in the Mendes reserve, and a dissident group is campaigning to secede.101

But if this was the outward sign of the cultural and economic failures of the neotraditional policy, the inward and ultimately more consequential marker of disarray was the breakdown of internal governance in the supposedly self-governing areas. Only half of the reserves and conservation units are estimated to have management plans directing development, and only 45 percent have a representative management council charged with making such plans.102 As collective projects and the solidarity underpinning them came under strain, the will and the instruments to adapt and renew earlier commitments were failing too.

The limits of using penalty defaults to incentivize registration under CAR—and with it, a shift to sustainable practices—were also evident almost from the start. The agreements in Santarém and Lucas do Rio Verde establishing CAR involved local economies specialized in soy, as well as elites of large rural landowners and representatives of agribusiness already committed to shifting from extensive, low-productivity methods premised on cheap land to higher-productivity, intensive methods based on technology and skill. Compliance with the new requirements accelerated developments already underway, without an abrupt change of course. The same was true of Paragominas, a prosperous, consolidated farming and ranching municipality in the northeast of Pará, which became the first municipality to exit the priority list. Its success was credited to a local pact among the municipality, producers' groups, and civil society groups that created a forum and monitoring structure to support the registration of rural property.103 The state government, explicitly acknowledging that increased enforcement alone would not secure continuing compliance, embraced the idea of a "green municipality" and made the formation of such self-governance institutions a condition for leaving the priority list. For large, well-capitalized ranchers and farmers in these circumstances, compliance meant a kind of reprieve in which a prior record of deforestation or fraud no longer cast a shadow on current dealings, and public officials, recognizing good faith commitments to sustainability, became allies rather than opponents.104

Where the shift to intensive methods was incipient, though, as was typically the case with smallholders, and where political and social conflicts among groups in the municipality were pronounced, penalty defaults without complementary support measures could not compel compliance. São Félix do Xingu, a municipality at the eastern edge of the Terra do Meio, became a paradigmatic case. In the 1990s, its cattle herd grew vertiginously to become the largest in the country. This expansion drew large ranchers from the center-west and south, and some ten thousand poor smallholders (just under 90 percent of the holdings, accounting for 20 percent of the land), mostly from the northeast. The smallholders—often veterans of failed settlement programs or stints in mining—hoped to earn enough to expand, but were ready to sell their cleared land to larger owners in hard times. Property disputes were rampant. Of the hundred largest suspect land claims in Pará during this period, thirty were in São Félix. Political power changed hands frequently; there was no recognized elite to direct responses to the municipal listing.105

The Nature Conservancy, continuing its series of exploratory interventions, quickly recognized that compliant sustainability in São Félix would require support for smallholders.106 Together with the municipal government, it applied to the federal Low-Carbon Agriculture Program for training and subsidized finance for sustainable methods; a second initiative helped smallholders diversify production and reforest degraded areas by shifting into cacao agroforestry. There were also hopes of finance from the REDD+ system linked to the Paris Agreement. But these and other measures failed, and São Félix stayed on the priority list.

Normal accidents of implementation were partly to blame. The federal government rejected the municipality’s reasonable contention that the forty-square-kilometer ceiling on annual deforestation should be adjusted to reflect São Félix’s extraordinary size (four times that of Paragominas). The NGOs focused disproportionately on the virtues and mechanics of participation, at the expense of discussions of plans and projects, heightening expectations only to quickly disappoint them. But the deeper cause was simply the gaping discrepancy between the smallholders’ needs and the available resources. State extension agents had no funds to go to the field, and there were far too few of them in any case. There were reportedly only three municipal tractors for use by all ten thousand smallholders in São Félix; three trucks mustered by the federal colonization agency were broken down in the garage. The state agricultural secretariat had no projects in São Félix.107 In this de facto disregard for smallholders, São Félix was not an exception. In Paragominas, which was richer, they were neglected in exactly the same ways.108

Left to their own devices, the smallholders struggled to adjust. Distressed owners sold out to better-placed neighbors. Land concentration increased. Charcoal production for domestic and commercial use, along with other rural trades, was effectively prohibited. Real GDP per capita declined in São Félix following the 2008 listing, while it increased in Paragominas, Lucas, and Santarém, where less adjustment was needed and, given their wealth, the burden was easier to bear. Rates of deforestation in São Félix ticked upward to double those in Paragominas (though still only a fraction of earlier peaks) as some chose flight forward, deeper into the forest, over adjustment in place or a retreat to wage labor.109 For this reason, smallholders' role in deforestation in the Amazon generally increased in the following years.110

For a time, the same welfare measures by which Lula and his Workers’ Party (PT) won the allegiance of the poor in Brazil provided a buffer to smallholders, residents in the extractive reserves, and other groups in the Amazon against the harshest consequences of the new policies.111 Between 1996 and 2006, the share of various social welfare payments in the total family income on the Mendes reserve, for example, increased from 25 to 44 percent, while the share of earnings from neotraditional activities declined from 35 to 9 percent.112

Yet as commodity prices slumped and budgets for welfare and other subsidies were cut, the buffer effects subsided and political support declined apace. In a pattern now familiar in the advanced democracies (and returning to a populist tradition well anchored in Argentina, Peru, and Brazil itself), Bolsonaro's scornful rejection of the status quo revealed a discontent that, in retrospect, seems to have been always and unmistakably there. In the Amazon, his election in 2018 amounted to a repudiation of recent policies. Leaders of the producer associations and a trade union operating inside the Mendes extractive reserve estimate that 70 percent of the inhabitants voted for Bolsonaro, who campaigned with the promise to "let the whole of the Chico Mendes Reserve be deforested to build ranches." The leftist coalition that had governed Acre since 1999 and actively supported the sustainable, neotraditional forest economy was voted out.113 The state capital, where Silva got her political start as a council member, voted for Bolsonaro. Similar results were repeated across the Amazon.

REBUILDING ON THE SAME FOUNDATION

What are we to make of this mixed legacy? As we see it, the political collapse of efforts to control deforestation, difficult as it will be to reverse, overstates the failure of environmental reforms in the Amazon just as much as the drop in emissions following the priority listings overstates their successes. The forms of collaboration that allowed for rapid learning a decade ago remain models for further innovation, even if the social movements with which they were originally associated are now greatly weakened.

In an effort to repair the social fabric within the reserve areas, new projects are recruiting new generations. The aim is to reconstitute the organs of self-government—a precondition for further development.114 The grand principle of the reform of commercial land use—amnesty for past behavior in return for a monitored pledge to cease illegal activity and eventually correct environmental deficits—has proven its worth in the regulation of medium and large property.115 It failed where it was most needed and was scarcely tried at all: in supporting smallholders to adjust to the demands of sustainability. Rededication to this principle, aimed squarely at the inclusion of smallholders, could consolidate the gains of the earlier reform. By expanding the gamut of economically and environmentally sustainable possibilities for small property, it could also suggest novel ways—as unanticipated today as the idea of the extractive reserve was three decades ago—to free forest dwellers from the constraints of neotraditionalism while continuing to use the public ownership of land as a safeguard for the protection of an invaluable and highly vulnerable biome.

Many of the background conditions for such a rededication are already in place, starting with smallholders' disposition to give offers of adjustment support a chance. Smallholders at the forest margin know the fragility of the soil all too well and see the economic rationale for sustainability. In São Félix and elsewhere, the rule of thumb is that migrant settlers stay in place for ten or fifteen years before declining productivity forces them to move on. A decade ago in São Félix, moreover, many producers, large and small, were already beginning to adopt sustainable farming practices: improving pasture management, recuperating degraded areas, and stopping deforesting. About 75 percent of those interviewed in a small study said they made changes for fear of otherwise being fined, but about the same percentage said they acted to improve returns in degraded areas where productivity was falling, or more directly to "learn from our mistakes."116

This experience explains why the smallholders in São Félix were at first enthusiastic about the prospect of working with NGOs and extension agents on more ambitious projects, souring on cooperation only when nothing came of it. Practical experience of this kind informs daily decision-making, often in disregard of contrary political convictions or calculations of immediate advantage.117 For example, in Rondônia—a state with a weak enforcement regime where deforestation has long been tolerated—a recent study finds that despite the improbability of sanctions, smallholders prepare plans for land restoration, especially when supported by extension agents.118 The chances are good, in other words, that a substantial group of smallholders would take up a credible offer of adjustment support, and their initial participation could begin the progression from thin to thick consensus at the municipal level that was thwarted the first time around.

Nor do there seem to be significant technical barriers to raising the productivity of smallholder ranching while making it more sustainable. It should be possible, using available methods, to raise the stocking rate from one head of cattle per hectare to three and to decrease the time needed to fatten cattle for market by using better pasture grasses and management.119 Smallholder ranching can also be combined with cacao growing, as planting cacao helps restore degraded soils, and the income from the crop shelters the rancher against fluctuations in the price of beef. Pilot programs demonstrating the economic viability and environmental sustainability of this strategy are well established.120 There are underexploited possibilities for innovation in the use of traditional products such as latex.121 Brazil's share of the world markets for agroforestry, aquaculture, and fishery products exported from the Amazon is surprisingly small, even when compared to competitors at the same level of economic development.122 For example, Brazil, which once dominated the export market for the eponymous nuts, now lags behind Bolivia, which has kept abreast of changing international food safety standards while producers in the Brazilian Amazon have not.123 There is, in sum, room to grow by adopting demanding but proven practices.
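Back-of-the-envelope arithmetic suggests how large the intensification gains could be. The stocking rates come from the text; the fattening times below are assumptions introduced only to make the calculation concrete.

```python
# Back-of-the-envelope arithmetic for the intensification claim above (a
# sketch; the stocking rates come from the text, the fattening times are
# assumptions for illustration).

baseline_stocking = 1.0       # head per hectare (from the text)
improved_stocking = 3.0       # head per hectare (from the text)
baseline_fattening_yrs = 4.0  # assumed time to reach market weight
improved_fattening_yrs = 3.0  # assumed, with better pasture and management

def head_marketed_per_ha_per_year(stocking: float, fattening_yrs: float) -> float:
    return stocking / fattening_yrs

base = head_marketed_per_ha_per_year(baseline_stocking, baseline_fattening_yrs)
improved = head_marketed_per_ha_per_year(improved_stocking, improved_fattening_yrs)
print(f"{improved / base:.0f}x output per hectare")  # 4x under these assumptions
```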

Nevertheless, should there be technical obstacles to less land-intensive, more sustainable practices, Brazil's agricultural research agency, Embrapa, is well equipped to address them. Founded in the 1970s as a publicly owned corporation, Embrapa is widely admired for the innovations that made the Cerrado arable, for greatly increasing the yield of native pasture grasses by crossing them with African varieties, and for innovations in sugarcane growing and processing crucial to the success of Brazil's biofuels program.124 Adjusting the existing solutions to the needs of smallholders, or finding new ones—including, for instance, crop complements to ranching beyond cacao—seems well within its reach.

The chief obstacle to smallholder adjustment is the absence of extension services, as the experience of São Félix again makes clear. Brazil created a federal extension service when Embrapa was formed, but it fell victim to the austerity measures of the 1990s and to the skepticism, general among donor organizations such as the World Bank at the time, that the state could effectively provide support services to private firms. Efforts to reconstitute extension under Lula led nowhere, and today such services, where they exist, are typically provided by chronically underfunded state agencies. From 2000 to 2016, the divorce between policies for family farms, on the one hand, and those for large landowners and agribusiness, on the other, was openly embodied in the coexistence of the Ministry of Agriculture, Livestock, and Food Supply, serving the latter, and the Ministry of Agrarian Development, serving the former. The dissolution of the Ministry of Agrarian Development in 2016 and its reconstitution as a unit of the Ministry of Social Development underscored what had been largely the case from the beginning: support for small, family farming is often treated more as a matter of social welfare than of productive development.125

Correcting this deficit and building an extension service—more precisely, one that can jointly develop adjustment strategies with smallholders themselves—is largely a matter of political will closely tied to the national prospects for development. Brazil has accumulated significant experience in recent decades with programs providing support services to firms—successfully in sectors such as aeronautics, unsuccessfully in sectors such as capital goods for the nascent offshore oil industry, and with mixed results in helping to revitalize clusters of small and medium-sized firms producing shoes, furniture, and garments. The smallholder sector in agriculture is vast and highly differentiated; 36 percent of Brazil's 5.17 million farms have less than five hectares, and small and medium-sized farms of up to a hundred hectares produce more than half the food consumed domestically, supplying a key assurance of food security.126 The sector's significance to overall growth prospects is increased by the fact that Brazil is among the many middle-income countries in Latin America, Africa, and Asia in which industry's share of the national income has stagnated or gone into decline. At the moment, there are few good alternative employment opportunities for the low-productivity rural population. To accept stagnation in this sector is thus to mortgage growth. Under these circumstances, the failure to find the political will to develop extension services for smallholders is not so much a failure to reconcile development with sustainability as a failure of development itself.

CONVERGENCE WITH INDONESIA: THE JURISDICTIONAL APPROACH

As we noted at the outset, Brazil is not alone in facing this challenge. Indonesia, to take only the most prominent example, has come by a distinct but similar path to the same crossroads.

In Indonesia, the chief threat to the tropical forest is the cultivation of palm oil, the cheapest and fastest-growing source of vegetable oil, widely used in cooking and as an ingredient in processed foods, cosmetics, and biofuels. As in Brazil, smallholders are family owners mixing commercial and subsistence activity on plots that are small relative to those of the large neighbors that hem them in. In Indonesia, too, smallholders frequently encroach on state forests, and as of 2014, they accounted for about 40 percent of national palm oil production. Land conflicts are common and increasing, especially between large commercial estates or plantations and the most politically marginal groups with the weakest land claims. As in Brazil, smallholder yields are low compared to best-practice benchmarks (40 percent lower); yields of commercial operations can be twice as high as smallholders'. The difference is again due to the smallholders' reliance on inferior planting materials and management techniques. In Indonesia also, recent regulatory changes, including a mandatory requirement that all farms and firms in the palm oil supply chain comply with a national code of environmental conduct, impose burdens on small producers that they are unlikely to master without substantial assistance—even as those regulations create, as in Brazil, a legal safe harbor for large landowners.127

Unsurprisingly, then, proposals for continued reform in Indonesia converge with those in Brazil in seeing enhanced, place-based governance as the crucial mechanism for linking current land-use and supply-chain regulation to the construction of a support system for smallholders. The starting point of this “jurisdictional” or “territorial” approach is the recognition that local government is too often ignored in supply-chain measures and land reform. Given the heterogeneity of circumstances, place-based regulation should be at the center of a network coordinating the efforts of local regulators and service providers, farmer organizations, producer associations, and NGOs. As in the LAWPRO reforms in Ireland, the aim is to make environmental sustainability routine and routinely effective by institutionalizing joint problem-solving among the implicated ground-level actors, public and private. Because the green pacts formed in response to priority listing took the municipality as the operating unit and demonstrated both the potential and the pitfalls of inclusive participation, they are seen as a promising model for further development.128 An extensive network of NGOs is conducting pilot projects under the rubric of the jurisdictional approach in Indonesia, Brazil, and a number of other tropical forest countries.

This convergence can be taken as a first, limited corroboration of the plausibility of the approach. Barring a merciless campaign of expelling smallholders from the countryside, at incalculable environmental, economic, and social cost, the alternatives for the tropical forest generally are the same as those for Brazil: inclusive, sustainable growth, or a grinding stalemate at the expense of both growth and the environment.129 The lesson of recent experience—both the surprising successes and the limits—is that urgent efforts to avoid the worst could well produce an outcome better than we might have dared to hope.

Innovation in Place

Taken together, these three case studies of contextualization are reason for optimism. They provide evidence that the institutional conditions needed for both technological innovation and political accountability are falling into place in key sectors in countries leading global warming policy. They also show experimentalist institutions coming to grips with three broad areas of the economy—agriculture, electricity, and forestry—that must make the most profound changes in technology and behavior if deep cuts in emissions are to be achieved.

While the details differ across the cases, three common themes emerge.

First, no matter the sector, political pressures and technological changes generate profound technical, economic, and social uncertainty that must be addressed locally. In agriculture and forestry, the uncertainty arises from new political pressures for a radical reduction in environmental footprints. In electric power generation, it arises from political pressures combined with unexpectedly rapid change in the technologies for generating and managing renewable power, which creates completely novel configurations of power flow on electric grids. General responses to this uncertainty—regulations fixing nitrogen limits or acceptable forms of battery interconnection, forest codes, or standardized rules for the use of public land—simply do not work out of the box, or at all, when applied to the particularities of local circumstances. Contextualization is needed.

Second, uncertainty creates strong pressures to experiment with new technologies, business practices, and regulatory frameworks, especially at the local level. Pollution control, and deep decarbonization more generally, requires close and continuing attention to what happens on the ground. As we have stressed, what works in one place must often be adapted or reinvented to work in another. Given the dependence of modern production systems—whether agricultural or industrial—on the rapid detection and correction of errors, efforts to improve economic performance likewise demand unswerving attention to context. To make clean production economically feasible, deep decarbonization will have to follow these models in placing new emphasis—by public and private actors alike—on local decision-making. In Brazil, municipalities and states learned from each other as innovative ways of complying with environmental penalty defaults proved their worth; in Ireland, local experience is pooled in catchment-area programs designed for that purpose; and in California, state regulators disseminate the results of local projects. Governance, no less than technology, is being adapted or reinvented across local contexts as the response to climate change advances.

Third, and in the long run most significant, the new centrality of place and the governance changes that accompany it put pressure on the organization of democracy itself. Sometimes—but rarely, we expect—the shift to the local will be narrow and self-limiting, as in the case of renewables in California. Much more typically, we think, growing local concern with sustainability will lead to changes in governance that spark new forms of democratic engagement. In both Ireland and the Amazon, collaborative review led to new forms of participation—the green municipality pacts in the Amazon, and the LAWPRO process for determining how and in what order to tackle cleanup projects—that promise key actors a central role in decision-making. In both cases, moreover, actors are asked not only to express their preferences but also to collaborate in the production of complex public goods—extension services—customized to their needs and contributing to their own capacity for further development. The challenge for decarbonization in both places, of course, is to make these reforms succeed. Progress at the frontier of decarbonization depends on innovations that change the way citizens make themselves heard in government—and the nature of the goods they can expect from it.

This last conclusion is necessarily tentative. But it would be as misleading to overlook the new forms of participation overflowing the banks of the familiar model of local democracy as it would be to claim that the flood had already cut and settled into a new course. One consequence of the new localism is already clear: the growing importance of local decision-making—including action at the level of sectors as well as places—diminishes the importance of action at the level of the entire globe. The story of the Amazon shows that even the biggest problems have to be solved close to the ground. In the next chapter, we ask what role this vision leaves for the global climate regime we have inherited.