
Chapter 16

The Revolution in Military Affairs

[T]he revolution in military affairs may bring a kind of tactical clarity to the battlefield, but at the price of strategic obscurity.

—Eliot Cohen

This “operational” approach to war was never tested in the circumstances for which it was designed. At the end of the 1980s, Soviet communism imploded and the Warsaw Pact soon evaporated, taking with it the possibility of another great power war in the middle of Europe. The American military soon came to be preoccupied with a quite different set of problems. Because circumstances had changed so much, there might have been good reason to challenge the operational approach, but instead it became even more entrenched, now spoken of as a revolution in military affairs.

There was no need to worry about an extremely large and capable enemy. The efforts the Americans had put into new technologies had created a quality gap with all conceivable opponents, while the greater stress on operational doctrine made it possible to take advantage of superior intelligence and communications to work around opponents. Almost immediately, there was a demonstration of the new capabilities. Iraq occupied its neighbor Kuwait in August 1990; early the next year, a coalition led by the United States liberated Kuwait. Up to this point the impact of improvements in sensors, smart weapons, and systems integration remained an untested hypothesis. Skeptics (including Luttwak) warned of how in a war with Iraq the most conceptually brilliant systems could be undermined by their own complexity and traditional forms of military incompetence.1 Yet in Operation Desert Storm the equipment worked well: cruise missiles fired from a distance of some one thousand kilometers navigated their way through the streets of Baghdad, entered their target by the front door, and then exploded.

This very one-sided war displayed the potential of modern military systems in a most flattering light. The Iraqis had boasted of the size of their army, but much of its bulk was made up of poorly armed and trained conscripts facing professional, well-equipped forces with vastly superior firepower. It was as if they had kindly arranged their army to show off their opponent’s forces to best advantage. A battle plan unfolded that followed the essential principles of Western military practice against a totally outclassed and outgunned enemy who had conceded command of the air. A tentative frontal assault saw the Iraqis crumble; General Norman Schwarzkopf nonetheless went ahead with a complex enveloping maneuver to catch them as they retreated, but did not quite cut them off quickly enough. Even so, the Americans announced a ceasefire, deliberately eschewing a war of annihilation. This reflected a determination to keep the war limited and not allow success in reaching the declared goal—liberation of Kuwait—to lead to overextension by attempting to occupy all of Iraq. This made good diplomatic and military sense, yet the consequence illustrated the arguments favoring decisive victories. Saddam Hussein was able to survive, and the outcome of the war was declared at best incomplete.2

The idea that this campaign might set a pattern for the future, to the point of representing a revolution in military affairs, can be traced back to the Pentagon’s Office of Net Assessment (ONA), led by Andrew Marshall, a redoubtable veteran of RAND. He was aware that during the Soviet Union’s last years there had been talk of a “military-technical revolution” that might bring conventional forces up to new levels of effectiveness. Marshall became convinced that the new systems were not mere improvements but could change the character of war. After the 1991 Gulf War, he asked one of his analysts, Army Lieutenant Colonel Andrew F. Krepinevich, who had been working on what had become the non-issue of the military balance between NATO and the Warsaw Pact, to examine the combined impact of precision weapons and the new information and communication technologies.3

By the summer of 1993, Marshall was considering two plausible forms of change in warfare. One possibility was that the long-range precision strike would become “the dominant operational approach.” The other was the emergence of “what might be called information warfare.”4 At this point he began to encourage the use of the term “revolution in military affairs” (RMA) instead of “military-technical revolution” to stress the importance of operational and organizational changes as well as technological ones.5 Krepinevich described the RMA in 1994 as what occurs

when the application of new technologies into a significant number of military systems combines with innovative operational concepts and organizational adaptation in a way that fundamentally alters the character and conduct of conflict . . . by producing a dramatic increase—often an order of magnitude or greater—in the combat potential and military effectiveness of armed forces.6

Although the origins of the RMA lay in doctrine, the driver appeared technological, a consequence of the interaction of systems that collected, processed, and communicated information with those that applied military force. A so-called system of systems would make this interaction smooth and continuous.7 This concept was particularly appropriate in a maritime context. At sea, as in the air, it was possible to contemplate a battlespace empty of all but combatants. Even going back to the Second World War, air and sea warfare offered patterns susceptible to systematic analysis, which meant that the impact of technical innovations could be discerned.

By contrast, land warfare had always been more complex and fluid, subject to a greater range of influences. The promise of the RMA was to transform land warfare. The ability to strike with precision over great distances meant that time and space would matter far less as constraints. Enemy units would be engaged from without. Armies could stay agile and maneuverable, as they would not have to move with their own firepower, except for that required for self-defense. Instead, they could call in what was required from outside. Reliance on non-organic firepower would reduce dependence upon large, cumbersome, self-contained divisions, and the associated potential for high casualties.8 While enemy commanders were still attempting to mobilize their resources and develop their plans, they would be rudely interrupted by lethal blows inflicted by forces no longer bound by those constraints. The move away from the crude elimination of enemy forces could be completed by following the Boyd line of acting more quickly and moving more deftly, thus putting enemy commanders in a position where resistance would be futile. Enthusiasts hovered on the edge of pronouncing the “fog of war” lifted and the problem of friction answered.9 At the very least, warfare could move away from high-intensity combat to something more contained and discriminate, geared to disabling an enemy’s military establishment with the minimum force necessary. No more resources should be expended, assets ruined, or blood shed than absolutely necessary to achieve specified political goals.

All of this created the prospect of relatively civilized warfare, unsullied by either the destructiveness of nuclear war or the murky, subversive character of Vietnam-type engagements. It would be professional war conducted by professional armies, a vision, in Bacevich’s pointed words, “of the Persian Gulf War replayed over and over again.”10 The pure milk of the doctrine is found in a 1996 publication of the National Defense University which introduced the notion of “shock and awe.” The basic message was that all efforts should be focused on overwhelming the enemy physically and mentally as quickly as possible, before there was a chance to react. “Shock and awe” would mean that the enemy’s perceptions and grasp of events would be overloaded, leaving him paralyzed. The ultimate example of this effect was the nuclear strikes on Hiroshima and Nagasaki, which the authors refused to rule out as a theoretical possibility, though they were more intrigued by the possibility of disinformation, misinformation, and deception.11

The influence of such ideas was evident in the 1997 paper “Joint Vision 2010.” It defined information superiority largely in war-fighting terms as “the capability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary’s ability to do the same.”12 By means of “excellent sensors, fast and powerful networks, display technology, and sophisticated modeling and simulation capabilities,” information superiority could be achieved. The force would have “a dramatically better awareness or understanding of the battlespace rather than simply more raw data.” This could make up for deficiencies in numbers, technology, or position, and it could also speed up command processes. Forces could be organized “from the bottom up—or to self-synchronize—to meet the commander’s intent,” leading in turn to “the rapid foreclosure of enemy courses of action and the shock of closely coupled events.” There would be no time for the enemy to follow Boyd’s now-famous OODA loop. Arthur Cebrowski and John Garstka argued that a form of “network-centric warfare” could make battles more efficient in the same way that the application of information technology by businesses was making economies more efficient.13 In discussing the move from platform-centric to network-centric warfare, the Pentagon largely followed this formulation (Garstka was one of the authors) and recognized that, following the physical and information domains, there was a cognitive domain. Here was found the mind of the warfighter and the warfighter’s supporting populace.

Many battles and wars are won or lost in the cognitive domain. The intangibles of leadership, morale, unit cohesion, level of training and experience, situational awareness, and public opinion are elements of this domain. This is the domain where commander’s intent, doctrine, tactics, techniques, and procedures reside.14

This form of warfare suited the United States because it played to U.S. strengths: it could be capital rather than labor intensive; it reflected a preference for outsmarting opponents; it avoided excessive casualties both received and inflicted; and it conveyed an aura of almost effortless superiority. Those ideas were deeply comforting, and not entirely wrong. Information and communication technologies were bound to make a difference in military practice, although the RMA agenda understated the extent to which American predominance was dependent on not only the sophistication of its technology but also the sheer amount of firepower—particularly air-delivered—at its disposal. Furthermore, while the United States’ evident military superiority in a particular type of war was likely to encourage others to fight in different ways, that military capacity would also constrain opponents’ ambitions. As a regular conventional war against the United States appeared to be an increasingly foolish proposition, especially after its convincing performance in the 1991 Gulf War, one form of potential challenge to American predominance was removed, just as the prospect of mutual assured destruction had earlier removed nuclear war as a serious policy option.

Nonetheless, the presentation of the RMA was shaped by political preferences about the sort of war the Americans would like to fight. It offered a neat fit between a desire to reduce the risks of high casualties or Vietnam-style campaigns and a Western ethical tradition that stressed discrimination and proportionality in warfare. It assumed professional conventional forces, as high-quality weaponry reduced the relative importance of numbers and put a premium on extremely competent troops. Intolerance of casualties and collateral damage meant targeting military assets rather than innocent civilians. It also precluded resort to weapons of mass destruction. The military would be kept separate from the civil, combatants from noncombatants, fire from society, and organized violence from everyday life. Opponents would be defeated by means of confusion and disorientation rather than slaughter because they could never get out of their OODA loop. If this trend could be pushed far enough, it was possible at some point to envisage a war without tears, conducted over long distances with great precision and as few people as possible—preferably none at all—at risk. The objective was to reduce the role in war-fighting of anything recognizably approaching “battle.” The ideal would be one-sided and highly focused engagements geared to causing cognitive confusion. Far from representing a real revolution, the RMA harked back to the earlier, idealized prototype of a decisive military victory settling the fate of nations—indeed of whole civilizations—except that now the accomplishment could be virtually painless for the greatest military power the world had ever seen.

There was an unreal quality to this view of future warfare. It was for political entities that were not fearful, desperate, vengeful, or angry; that could maintain a sense of proportion over the interests at stake and the humanity of the opponent. It was a view that betrayed a detached attitude to the wellsprings of conflict and violence, the outlook of a concerned observer rather than a committed participant. It ignored the physicality of war and war’s tendencies to violence and destruction. It would hardly be a revolution in military affairs if those who embraced it only took on conflicts which promised certain and easy victories. The 1991 Gulf War vindicated this vision, but that was helped by Saddam Hussein’s ignorance of the real military balance. In this respect, the vindication carried its own refutation. Future opponents were bound to take more care when inviting battle with the United States given the proven vulnerability of second-rate conventional forces to attacks by first-rate powers. After 1991 it was unclear who would fight such a war. The American military literature referred to “peer competitors” with comparable military endowments to those of the United States, but it was unclear exactly who these might be. In addition, for a war to be fought along these lines, the belligerents must not only have comparable military capabilities but also inhabit the same moral and political universe. The model was geared to American strengths and for that very reason was unlikely to be followed by opponents who would seek to exploit the presumed American weaknesses of impatience and casualty intolerance. Enemies would be inclined to cause hurt in an effort to encourage a sense of disproportion in the population and unhinge multilateral coalitions.

Precision warfare made it possible to limit but also to maximize damage. Just as high accuracy made it possible to avoid nuclear power plants, hospitals, and apartment blocks, it also made it possible to score direct hits. Even in the American model there were always dual-use facilities that served both military and civilian purposes—for example, energy and transportation. Targeting them as part of a military purpose still led to the disruption of civilian life. In other respects the new technologies encouraged a progressive overlap between the civilian and military spheres. High-quality surveillance, intelligence, communications, and navigation became widely available as consumer gadgets, which could be exploited by crude, small organizations with limited budgets. Lastly, nuclear weapons and long-range missiles (whose arrival had also been described at the time as a “revolution in military affairs”) had expanded the means of destruction and extended the range of its potential application. Attempts to mitigate their effects—for example, through improving anti-missile defenses—had been unimpressive. The capability to destroy hundreds of thousands of human beings in a nuclear flash had not disappeared.

Asymmetric Wars

When a country was in desperate straits and facing defeat in conventional war, attacking the enemy’s society might appear to be the only option left. That is why the history of twentieth-century war had been so discouraging to those who believed that military power could be contained in its effects. There was a series of measures that the weak could adopt against the strong: concentrating on imposing pain rather than winning battles, gaining time rather than moving to closure, targeting the enemy’s domestic political base as much as his forward military capabilities, and relying on the stronger power’s unwillingness to accept extreme pain and its weaker stake in the resolution of the conflict. In short, whereas stronger military powers had a natural preference for decisive battlefield victories, the weaker were more ready to draw the civilian sphere into the conflict while avoiding open battle.

The optimal strategy for those unable to match America’s conventional military capabilities (almost everyone) would be to attempt to turn the conflict into what came to be described as an “asymmetric war.” This concept had been around since the 1970s, as a reflection of the Vietnam experience.15 Its resurrection began in the mid-1990s, when it came to refer to any engagement between dissimilar forces. All conflicts were between forces that varied in some respects, in geography or alliance as well as in force structure and doctrine. Part of strategy would always be identifying aspects of those differences that generated special opportunities and vulnerabilities. Even when the starting points were relatively symmetrical, the aim would be to identify and exploit a critical asymmetry as the vital advantage to secure a victory. The only reason symmetry had worked in the nuclear sphere as mutual assured destruction was that it had resulted in a degree of stability. In the conventional sphere, symmetrical forces were potentially a recipe for mutual exhaustion.

As with so many of these concepts, inconsistent and expansive definitions of asymmetry began to drain it of meaning. The 1999 Joint Strategy Review defined asymmetric approaches as those that attempted “to circumvent or undermine US strengths while exploiting US weaknesses using methods that differ significantly from the United States’ expected method of operations.” These could be applied “at all levels of warfare—strategic, operational, and tactical—and across the spectrum of military operations.” Put this way, the approach became synonymous with any sound strategy for fighting the United States and lost any specificity.16 The real interest in asymmetrical warfare was in situations where the two sides would be seeking to fight completely different sorts of war, particularly when Americans persevered with regular warfare while opponents either escalated to weapons of mass destruction or adopted forms of irregular war.

The greatest dangers were associated with an enemy that had weapons of mass destruction, but the most likely scenario was being drawn into irregular war. Since Vietnam, the U.S. military had taken the view that rather than make better preparations for irregular war it was best to stay clear of potential quagmires. This tendency had been reinforced by the most celebrated account of Vietnam to emerge out of the army. Harry Summers, an instructor at the Army War College, invoked Clausewitz to explain how the American focus on counterinsurgency had distracted the military from the essentially conventional nature of the war. Summers made his point by working backwards from the final victory in 1975 of the North Vietnamese army over the South. This possibility was always inherent in the North’s strategy, but that did not mean that the prior insurgency in the South had somehow been irrelevant. For one critic, who had been closely involved in the counterinsurgency during the 1960s, the problem was that the U.S. army had paid insufficient attention to the demands of guerrilla warfare, not that it had neglected the enemy’s “main force.”17

The persistent resistance to Vietnam-type engagements was reflected in a distinction between war defined in terms of “large-scale combat operations,” to which U.S. forces were geared, and “operations other than war,” which had a much lower priority and included shows of force, peace enforcement and peacekeeping operations, counterterrorism, and counterinsurgency.18 Wariness of irregular wars meant reluctance to develop doctrine and training to accommodate them. It was assumed that forces optimized for large-scale conventional war would be able to accomplish other, supposedly less demanding tasks if absolutely necessary. The relatively small-scale contingencies that became common in the 1990s were in effect dismissed as secondary and residual, an inappropriate use of armed forces, apt to tie them down and catch them in vicious crossfire while conducting marginal political business that did not even touch on the nation’s most vital interests.19

On September 11, 2001, the United States suffered a unique and unexpected attack, which took the notion of asymmetry to the extreme. A low-budget plan hatched by a small band of Islamist radicals camped in one of the poorest parts of the world was directed against the icons of American economic, military, and political strength. Two planes crashed into the World Trade Center in New York, one into the Pentagon in Washington, and another would have hit the White House or Capitol Hill had it not been for passenger action which forced the plane to crash. It did not take long to identify the group responsible—al-Qaeda, an extreme Islamist group based in Afghanistan and protected by its ideological soul-mates in the Taliban government.

The government’s response was to declare a “war on terror” and launch a military campaign designed to overthrow the Taliban and break up al-Qaeda. Although the provocation was on al-Qaeda’s terms, the response was on America’s. The Taliban was defeated in a quasi-regular war because the Americans were able to come in on the side of the Afghan opposition (the Northern Alliance), who provided the infantry while the United States supplied communications, airpower, and the occasional bribe to help encourage the factions on the enemy side to defect. On this basis, President George W. Bush concluded that the campaign had shown that “an innovative doctrine and high-tech weaponry can shape and then dominate an unconventional conflict.” It was a triumph of information-age warfare, with commanders “gaining a real-time picture of the entire battlefield” and then being able to “get targeting information from sensor to shooter almost instantly.” There were romantic images of U.S. Special Forces riding on horseback calling in air strikes. Bush claimed that the conflict had “taught us more about the future of our military than a decade of blue ribbon panels and think-tank symposiums,”20 suggesting that this approach had a wider application beyond the special conditions of Afghanistan in late 2001. The next stage reflected this perception. Instead of devising a plan to deal with radical Islamist movements, the United States embarked on a campaign to topple the regime of Saddam Hussein in Iraq, because Saddam was suspected of possessing weapons of mass destruction and was thus a potential source for any terrorist group that wished to inflict even more terrible damage upon the United States. Again the United States was able to demonstrate convincing superiority in conventional military capabilities as the Iraqi regime was overthrown in short order.

The Afghan and Iraqi campaigns were both apparently decisive; hostile regimes were toppled quickly after their forces were overwhelmed. In neither case, however, did this settle the matter. Secretary of Defense Donald Rumsfeld had been seeking to make a point about how a war could be fought and won with far fewer forces than would hitherto have been thought prudent. This point was made, although against an enemy barely able to resist.21 The lack of numbers soon appeared imprudent as U.S. forces struggled to cope with an insurgency. The transition from old to new regime was further complicated by the fact that the political claim that justified the invasion—that Iraq was illicitly developing weapons of mass destruction—was shown to be in error. This encouraged the development of a new rationale based on helping Iraq make the transition to democracy, a task made even harder by the U.S.-led coalition’s lack of troops to manage what soon became a deteriorating security situation. Out of the minority Sunni community, which had provided the key figures in the old elite, came the hardest resistance. The Sunnis gained support from those humiliated by Iraq’s occupation and fearful of the loss of their power. Their numbers swelled with disbanded soldiers and volunteers from the many unemployed young men. The resistance included “former regime elements” and a strong al-Qaeda group led by the Jordanian Abu Musab al-Zarqawi, who was as keen to foment civil war with the majority Shiites as he was to expel the Americans. Although Shiites were the natural beneficiaries of the toppling of the Iraqi regime, radical elements from within this community, led by Muqtada al-Sadr, also turned on the Americans. The struggles faced by American forces after such apparently effortless victories demonstrated that victory in battle did not necessarily result in a smooth political transition. It also demonstrated that whatever strengths the Americans had in regular warfare, they coped poorly with irregular warfare.

With U.S. authority under constant challenge and its troops being caught by ambushes and roadside bombs, there were contrary pressures both to reduce profile and to put on forceful displays of strength. The coalition was soon militarily stretched and lacking in political credibility. A poor security situation hampered economic and social reconstruction, the lack of which in turn aggravated the security problems. Having ignored counterinsurgency for over three decades, American forces struggled. They would move through towns and villages and clear them of insurgents in a show of strength, but because insufficient American troops were left behind, the enemy could soon return. This meant that the local population had no incentive to cooperate with the Americans. Attempts were made to build up local security forces, but these were often infiltrated by the militias. U.S. troops had not been trained to withhold fire, avoid rising to provocations, and find ways to reach out to wary local people. They found it hard to separate insurgents from innocent civilians and soon became suspicious of everyone, which added to the sense of mutual alienation.

More effort was going into intimidating opponents than winning over the undecided. An analysis of operations conducted from 2003 to 2005 suggested that most were “reactive to insurgent activity—seeking to hunt down insurgents.” Few operations were “directed specifically to create a secure environment for the population.”22 The strategy of “cordon and sweep” put the onus on holding territory and killing the enemy. Whatever the military effects of this approach, the political effects were invariably detrimental.

The perplexing situation in which American forces found themselves resulted in a resurgence of thinking about counterinsurgency, led by officers frustrated by the institutional barriers that had been set up to deny the relevance of irregular forms of warfare. Military Review, the house journal of the Combined Arms Center at Fort Leavenworth, barely covered the issue before 2004. Soon it was averaging about five articles on the topic per issue.23 The old classics of guerrilla warfare—from T. E. Lawrence to David Galula—began to be rediscovered. Officers with a knowledge of past counterinsurgency practice (for example, John Nagl) began to advise on its application to Iraq.24 David Kilcullen, an Australian officer on loan to the U.S. military, became one of the first postcolonial counterinsurgency theorists, updating the more timeless lessons by incorporating the efforts by al-Qaeda and like-minded groups to establish a form of global insurgency that ignored national boundaries. Kilcullen explored the extent to which ordinary people turned into “accidental guerrillas” less because of their support of extremist ideologies than because of their resentment at foreign interference in their affairs. To prevent al-Qaeda turning itself into a global insurgency, it had to be disaggregated into separate, manageable pieces. To prevent it prospering within the information environment, the counterinsurgents needed to recognize that this environment was as important as the physical one.25

The leader of the new counterinsurgency effort was General David Petraeus. He noted the problems that had arisen because the United States had become embroiled in a war for which it had not prepared, and he stressed the political dimension of the problem, emphasizing that it was not just a matter of military technique. “Counterinsurgency strategies must also include, above all, efforts to establish a political environment that helps reduce support for the insurgents and undermines the attraction of whatever ideology they may espouse.”26 At the start of 2007, when the United States appeared to be on the brink of abandoning Iraq to civil war, President Bush decided on one last push. Petraeus was put in charge of what became known as the “surge,” although this overstated the importance of numbers as opposed to a new strategy.27 Over the course of the year, there were definite signs of improvement, and this came to be seen as a turning point in the conflict in terms of mitigating the push to civil war if not meeting the early American aspirations of turning Iraq into a liberal democracy.

The improvement was not so much the result of extra troops and the intelligence with which they were deployed, important as these were, as of the extent to which the Iraqis turned away from the logic of civil war, notably with a strong reaction among the Sunnis to the brutality of al-Qaeda. As the number of attacks on Shiite sites declined, there was less excuse for revenge attacks on Sunnis. Using American military strength to reinforce these trends required a more subtle approach to Iraqi politics than simply handing responsibility for security back to the Iraqi government as soon as possible, whether or not it was able to cope. This meant that the Americans were working with the grain of Iraqi politics rather than against it.

War into the Fourth Generation

To what extent did the experience of the 2000s represent a trend or a set of unusual circumstances, unlikely to be repeated? For those who took the former view, there was a theoretical framework that had some credibility because it could easily accommodate international terrorism. It came under the broad heading of “fourth-generation warfare.” Like the RMA, this framework had parentage in OODA loops and maneuver warfare, but it had taken a quite different turn, away from regular war.28 Its origins lay in an article by a group led by William Lind, a follower of Boyd and an energetic reformer.29 According to this scheme, the first three generations had developed in response to each other (line and column, massed firepower, and then blitzkrieg). The new generation began in the moral and cognitive spheres, where even physically strong entities could be victims of shock, disorientation, and loss of confidence and coherence. This principle was then applied to society as a whole. In the fourth generation, attacks would be directed at the sources of social cohesion, including shared norms and values, economic management, and institutional structures. This was a move from the artificial operational level to a form of upside-down grand strategy, bringing in questions of rival ideologies and ways of life, and forms of conflict that might not actually involve much fighting.

With cataclysmic great power clashes apparently things of the past, the idea that new wars were wholly to be found in and around weak states persisted. A growing amount of international business appeared to involve states suffering from internal wars.30 The engagement of Western powers in these conflicts was, however, considered discretionary (they were often described as “wars of choice”) and undertaken on a humanitarian basis to relieve distress. Though they raised issues outside of military operations, such as economic reconstruction and state-building, they had only a loose fit with the fourth-generation theory. If anything, they were distractions to the more tough-minded of the fourth-generation theorists.

Although the RMA shared the same origins, it could only point to a singular form of regular warfare, which was unlikely to be fought because it suited the United States. Fourth-generation warfare, on the other hand, pointed to almost everything else, which is why there were so many versions of the theory. One strand, most associated with Lind, focused on an eating away of American national identity as a result of unconstrained immigration and multiculturalism. He argued this was less a reflection of social trends and more the result of a deliberate project by “cultural Marxists.” Cultural damage appeared as the product of deliberate and hostile moves, by enemies aided and abetted by naive and wrong-thinking elements at home, rather than of broader and more diffuse social trends or economic imperatives. Another, more influential strand, most associated with Marine Colonel Thomas X. Hammes, concentrated on irregular war, especially the forms of terrorism and insurgency which caused the United States such grief during the 2000s.31

There were five core themes in the fourth-generation literature. First, it followed Boyd’s focus on the moral and cognitive domains as those where wars are won or lost. Second, there was a conviction that the Pentagon was mistaken in its focus on high-technology, short wars. Third, tendencies toward globalization and networks were presented as blurring established boundaries between war and peace, civilian and military, order and chaos. War could not be contained in either time or space. It spanned the “spectrum of human activity” and was “politically, socially (rather than technically) networked and protracted in duration.” Fourth, the enemies were not easy to find or pin down. Chuck Spinney, another former associate of Boyd, described the fourth-generation warriors as

presenting few, if any, important targets vulnerable to conventional attack, and their followers are usually much more willing to fight and die for their causes. They seldom wear uniforms and may be difficult to distinguish from the general population. They are also far less hampered by convention and more likely to seek new and innovative means to achieve their objectives.32

Fifth, because these conflicts were played out in the moral and cognitive domains, any military action must be considered as a form of communication.

Lind argued in the original formulation: “Psychological operations may become the dominant operational and strategic weapon in the form of media/information intervention.”33

As a coherent theory it soon evaporated, not only because of the different strands but also because it depended on a historical schema which did not work. War had never been solely based on regular battle, supposedly at the center of the three previous generations. Moreover, even virtuosos of irregular warfare, such as T. E. Lawrence and Mao Zedong, still accepted that only regular forces could seize state power. The fact that there might be a number of groups relying on irregular forms, from terrorism to insurgency, was a function of their weakness rather than a unique insight into the impact of new technologies and socioeconomic structures in the modern world. There was also a tendency to assume that unwelcome developments had one guiding cause.

In a similar vein, Ralph Peters argued that Western forces must prepare to face “warriors,” whom he characterized colorfully as “erratic primitives of shifting allegiances, habituated to violence, with no stake in civil order.” He described their approach to war in terms familiar to students of guerrilla warfare. They only stood and fought when they had an overwhelming advantage. “Instead they snipe, ambush, mislead, and betray, attempting to fool the constrained soldiers confronting them into alienating the local population or allies, while otherwise hunkering down and trying to outlast the organized military forces pitted against them.”34 This overstated the problem. Some might enjoy fighting for its own sake, but the most fearsome warriors were likely to be fighting for a cause or a way of life they held dear. The performance of guerrilla bands, militias, and popular armies was mixed to say the least.

Information Operations

A key element in the discussion of asymmetric warfare focused on what were unhelpfully known as “information operations.” The term was unhelpful because it referred to a series of related but distinctive activities, some concerned with the flow of information and others with its content. Its potential range was indicated by an official U.S. publication, which asserted the goal of achieving and maintaining “information superiority for the U.S. and its allies.” This required an ability “to influence, disrupt, corrupt, or usurp adversarial human and automated decision-making while protecting our own.” The mix of the automated and the human was reflected in references to electronic warfare and computer networks as well as psychological operations and deception.35 All this reflected two distinct strands. The first was the traditional concern with changing the perceptions of others, and the second was the impact of digitized information.

When information was a scarce commodity it could be considered in much the same way as other vital commodities, such as fuel and food. Acquiring and protecting high-quality information made it possible to stay ahead of opponents and competitors. Such information might include intellectual property, sensitive financial data, and the plans and capabilities of government agencies and private corporations. This provided intelligence agencies with their raison d’être. While Clausewitz may have dismissed the importance of intelligence, it came to be of increasing value as improved means were found to gather information that opponents intended to keep secret. At first this depended on spies, and then on the ability to break codes. As telegraphic communications came into use, their interception could provide evidence on enemy locations as well as messages. Breaking German signal codes during the Second World War gave the Allies a valuable advantage in a number of encounters. Then came the ability to take photographs from the air and later from space. It became progressively harder to stop opponents from picking up vital details about military systems and dispositions.

As more information began to be digitized—so that it became simpler to generate, transmit, collect, and store—and communications became instantaneous, the challenges became those of plenty rather than scarcity. There were large amounts of material that could be accessed, both openly and illicitly. Outsiders sought to hack their way through passwords and firewalls to acquire sensitive material, steal identities, or misappropriate funds. Another challenge was maintaining the integrity of information despite attempted disruption or tampering via the insidious forms of digital penetration known as viruses, worms, trojan horses, and logic bombs, which were often launched from distant servers for no obvious purpose—though there was sometimes a clear and malign intent. The bulk of this activity was criminal and fraudulent, but there were examples of large-scale downloading of government and corporate secrets by state-funded hackers, attacks that closed down governmental systems, mysterious viruses that affected weapons development programs, and damaged software that meant military equipment failed to work properly. Might an army of software wizards use insidious electronic means to dislocate the support systems of modern societies, such as transport, banking, and public health?

There was no doubt that attacks could cause inconvenience and irritation, and on occasion make a real difference. In the midst of operations, the military might find air defense systems disabled, missiles sent off-course, local commanders left in the dark, and senior commanders confused as their screens went blank. If they had bought in to the idea that fast-flowing data streams could eliminate the fog of war, they could be in for a rude shock. Even without enemy interference, a fog could be caused by a superfluity of information—too much to filter, evaluate, and digest—rather than the paucity of the past. Certainly the new information environment posed problems to governments in terms of what they could hope to control and their efforts to influence the news agenda. Ordinary people could spread images taken on cell phones, and news, often inaccurate and half-digested, could spread through social networking sites, while governments were still trying to work out what was going on and shape a response.36

Did this amount to the danger identified by John Arquilla and David Ronfeldt in 1993, when they warned that “Cyberwar is Coming!”37 Their claim was that future wars would revolve around knowledge. They distinguished between “cyberwar,” which they limited to military systems (although it became expanded in later use) and “netwar,” which was more at the societal level. The issue was the same as for any new form of warfare: could it be decisive on its own? Or, as Steve Metz put it, could a “politically usable way” be found to damage an “enemy’s national or commercial infrastructure” sufficiently “to attain victory without having to first defeat fielded military forces?”38

The presumption that there might be a decisive cyberwar attack assumed that the offense would dominate and that the effects would be far-reaching, enduring, and uncontainable. The threat gained credibility from the frequency with which companies and even high-profile networks, including the Pentagon, were attacked by hackers. Protecting and managing privileged information against sophisticated foes who probed persistently for the weakest links in networks became a high priority. But effective attacks required considerable intelligence on the precise configuration of the enemy’s digital systems as well as points of entry into their networks. The possible anonymity and surprise of the attack might have its attractions, but any proposal to mount one would raise obvious questions about the likelihood of success against an alert opponent, the real damage that might be done, the speed of recovery, and the possibility of retaliation (not necessarily in kind). An opponent that had been really hurt might well strike back physically rather than digitally. Thomas Rid warned that the issue was becoming dominated by hyperbole. The bulk of “cyber” attacks were nonviolent in their intent and effect, and in general were less violent than measures they might replace. They were the latest versions of the classic activities of sabotage, espionage, and subversion. “Cyber-war,” he concluded, was a “wasted metaphor,” failing to address the real issues raised by the new technologies.39

Arquilla and Ronfeldt described “netwar” as “an emerging mode of conflict (and crime) at societal levels, short of traditional military warfare, in which the protagonists used network forms of organization and related doctrines, strategies, and technologies attuned to the information age.” In contrast to the large, hierarchical stand-alone organizations that conducted police and military operations, and which extremists often mimicked, the protagonists of netwar were “likely to consist of dispersed organizations, small groups, and individuals who communicate, coordinate, and conduct their campaigns in an internetted manner, often without a central command.”40 Terrorists, insurgents, or even nonviolent radical groups would not need to rely on frontal assaults and hierarchical command chains but could “swarm,” advancing in small groups from many different directions using different methods in a network held together by cellphones and the web. In practice, the more visible manifestations took the form of “hacktivism,” a way of making political or cultural points rather than threatening the economy or social cohesion. Even if more determined adversaries were prepared to mount substantial attacks, the result would likely be “mass disruption” rather than “mass destruction,” with inconvenience and disorientation more evident than terror and collapse.41

The use of social networking, such as Facebook and Twitter, during the early days of the Arab Spring of 2011 illustrated how swarming could leave governments uncertain about how to cope with a rapidly developing public opinion. Such tactics followed well-established principles from before the information age. Radical groups, especially during their early stages, were often based on loose networks of individuals. To avoid attracting the attention of the authorities, they found it safer to operate as semi-independent cells, communicating with each other and their shared leadership as little as possible. To be sure, the Internet and the other forms of digitized communication made it easier to keep in touch, but the number of security breaches attributed to calls or electronic messages being traced still left them hesitant to talk too openly or too specifically. Moreover, radical networks required an underlying social cohesion or an attachment to a clear campaign objective to bring diverse individuals together. In order to prosper they needed to move beyond the cellular form. This required a leadership able to mobilize and then direct sufficient force to strike significant blows. It was difficult to move beyond being a nuisance and harassing the enemy to seizing control without an authoritative point of decision. As the Arab revolts of 2011–2013 demonstrated, regimes facing serious opposition did not respond with social networking of their own but with repression and force, and in the end, it was the possibility of armed rebellion and the readiness of the military to defend the regime that were crucial.

The initial focus was on the role of information flows in sustaining standard military operations, facilitating faster decision-making and ensuring more precise physical effects. The irregular warfare in the 2000s soon brought into focus the more traditional forms of information warfare, and the Americans appeared to be losing ground to apparently primitive opponents regarding how these conflicts, their stakes, and their conduct were perceived. Their opponents lacked physical strength but seemed to know how to turn impressionable minds. Superiority in the physical environment was of little value unless it could be translated into an advantage in the information environment. As this was the “chosen battlespace” of its foes, the United States was now required to learn to conceptualize its victories in terms of shaping perceptions over time rather than in terms of decisive engagements that annihilated the enemy.42 The issue was not so much the flow of data but the way that people thought.

The counterinsurgency struggles in Iraq and Afghanistan led to an almost postmodernist embrace of pre-rational and embedded patterns of thought that allowed individuals, and broad social groups, to be caught up in a particular view of the world. Major General Robert Scales sought to explain the contrast between the failure of Islamic armies when fighting conventional battles Western style and their far greater success in unconventional war. He developed the concept of “culture-centric warfare.”43 In facing an enemy that “uses guile, subterfuge, and terror mixed with patience and a willingness to die,” he argued, too much effort had been spent attempting to gain “a few additional meters of precision, knots of speed, or bits of bandwidth” and too little to create a “parallel transformation based on cognition and cultural awareness.” Winning wars required “creating alliances, leveraging nonmilitary advantages, reading intentions, building trust, converting opinions, and managing perceptions—all tasks that demand an exceptional ability to understand people, their culture, and their motivation.” This would be a “dispersed enemy” communicating “by word of mouth and back-alley messengers” and fighting with simple weapons that did “not require networks or sophisticated technological integration to be effective.”

One reflection of the growing recognition of cultural factors was that the Pentagon employed an anthropologist, Montgomery McFate, to consider the interplay between military operations and Iraqi society. Among the mistakes she identified were failures to appreciate the role of tribal loyalties as the established civilian structure of power collapsed, the importance of coffee shop rumors compared with official communications, and the meaning of such small things as hand gestures.44 The growing recognition of the importance of the ability to influence another’s view of the world was evident in the frequent references to “hearts and minds” in warnings about what was lost politically by indiscriminate and harsh military operations. The phrase came to be used whenever there was a need to persuade people through good works and sensitivity that security forces were really on their side, as part of a broader strategy of cutting militants off from their potential sources of support, including recruits, intelligence, sustenance, weapons and ammunition, and sanctuaries. The counterargument went back to Machiavelli—that it was better to be respected than loved, that opponents could be intimidated and demoralized by physical strength but encouraged in their opposition by concessions.

The problem was more an over-facile approach to the hearts-and-minds concept. In other contexts, “heart” and “mind” were pitted against each other—strong emotions versus cool calculations, appeals to values and symbols versus appeals to the intellect. This was reflected in an early use of the phrase by the British general Sir Henry Clinton when facing a similar problem with the upstart Americans in 1776. The British, Clinton argued, needed to “gain the hearts and subdue the minds of America.”45 In practice, in discussions of countering both insurgency and terrorism, those opposed to brute force tended to stress gaining hearts more than subduing minds, as if the provision of goods and services could win the support of a desperate population.

There were three difficulties. First, as noted, local political loyalties would depend on local power structures, and any measures would have to be judged in terms of their effects on these structures.

Second, while there were undoubted benefits to repairing roads and building schools, or securing power and sanitation, these efforts would not get very far if security was so poor that foreign troops and local people were unable to interact closely and develop mutual trust. They were the sort of policies that might help prevent situations deteriorating but were less likely to retrieve them once lost. A more minds-oriented approach might establish that trust by addressing questions about who was likely to win the continuing political and military conflict and the long-term agendas of the various parties. The insurgents could sow doubts about who among the local population could be trusted, about what was real and what was fake, about who was truly on one’s side and who was pretending. As the insurgents and counterinsurgents played mind games to gain local support, they could be as anxious to create impressions of strength as of kindness, to demonstrate a likely victory as well as to hand out largesse. In terms of the cognitive dimensions of strategy, this was as important as any feel-good effect from good works. Both would depend on the actual experiences of the local population and local leaders, and the mental constructs through which those experiences were interpreted.

The third problem was that this strategy required greater subtlety than just an awareness that different people had different cultures. It was hard to argue against an improved sensitivity about how others viewed the world and the need to avoid ethnocentrism. Culture was itself a slippery term, often used as something that envelops individuals and shapes their actions without their being able to do much about it. The term could include almost anything that could not be explained by reference to hard-nosed matters of interest. Attempts to define another’s strategic culture often came up with something remarkably coherent, without contradiction and almost impervious to change. At least among academics, this approach was largely giving way to a practice of referring to received ideas that helped interpret information and navigate events but which were subject to regular modification and development. We shall return to some of these ideas in the last section of this book when we develop the idea of “scripts.”46 The danger of an exaggerated view of culture was that it could lead to the assumption that alien attitudes and uncooperative behavior reflected the persistence of an ancient way of life, untouched by modern influences, asserting itself whatever the conditions.

Against the suggestion that individuals were socialized into hard cultures sharing assumptions, norms, patterns of behavior, and forms of mutual understanding that could be implicit, unspoken, or taken for granted—that were all but impenetrable to an outsider—was the possibility that in a dynamic situation where communities were being subjected to new influences and challenges, cultures were likely to develop and adjust, and become less effective in binding people together. Thus, observed Porter, in the literature on reconstructed Islamists, warrior peoples, and insurgencies fed by cultural difference, it was as if the people encountered did “not act but are acted upon by impersonal historical forces, taking orders from the culture; or that modes of warfare are singular and fixed by ancestral habit.” People were able to learn and accommodate within their cultures new types of weapons and forms of conflict. References to the durability of hatreds and the evocation of cultural symbols could encourage stereotypes of the primordial and the exotic as harmful as those that assumed that all people were seeking to remake themselves in a Western image. Explaining problematic behavior as a consequence of people being set in their ways was not only condescending but also let off the hook those in the intervening forces, whose actions might have prompted a hostile reaction, and underestimated the extent to which opponents in a prolonged conflict would interact and pick up ideas, weapons, and tactics from each other.47

The need to have convincing stories impressed itself on officers trying to work out how to cope with a vicious enemy while staying on the right side of the people they were supposed to be helping. Kilcullen observed that the insurgents’ “pernicious influence” drew on a “single narrative”—simple, unified, easily expressed—that could organize experience and provide a framework for understanding events. He understood that it was best to be able to “tap into an existing narrative that excludes the insurgents,” stories that people naturally appreciated. Otherwise it was necessary to develop an alternative narrative.48

It was not so easy for a complex multinational force to forge a narrative that could satisfy a variety of audiences. A British officer saw the value of one that not only helped explain actions but also bound together “one’s team, across levels of authority and function; the diplomatic head of mission, the army company commander, the aid specialist, the politician working from a domestic capital, for instance.” He recognized that this might lead to variations in the story, but so long as there was an underlying consistency this need not be a problem. But liberal democracies found it hard to generate consistent stories, or to appreciate the needs of the local front line as against those of the distant capital.49

A generally rueful collection of essays put together by the Marine Corps suggested that the United States had proved inept at “quickly adapting the vast, dominant, commercial information infrastructure it enjoys to national security purposes.”50 It was perplexing to have been caught out so badly by al-Qaeda, who seemed to be as brazen in their message as they were outrageous in their attacks. Yet in an apparent war of narratives the United States was on the defensive, preoccupied with challenging another’s message rather than promoting its own. Attempts were made to fashion notionally attractive communications without being sure how they were being received. In addressing their new target audiences, the Western communicators had to cope with rumors and hearsay, popular distrust of any reports from official sources, a reluctance to be told by foreigners what to think, and competition with a multitude of alternative sources. People filtered out what they did not trust or what they found irrelevant, or they picked up odd fragments and variants of the core message, interpreting and synthesizing them according to their own prejudices and frameworks.

Most seriously, there could be no total control over the impressions being created by either the actions of careless troops or the policy statements of careless politicians. There might be a group of professionals working under the label of information operations, but the audiences could take their cues from whatever caught their attention. The United States might have invented mass communications and the modern public relations industry, but these were challenges that went beyond normal marketing techniques. Those with backgrounds in political campaigning or marketing who were asked to advise on getting the message out in Iraq and Afghanistan often opted for short-lived projects that had no lasting effect. Moreover, these individuals knew that they would be judged by how their products went down with domestic audiences; those were therefore the groups to which their efforts tended to be geared. Not only did this miss the point of the exercise but it could also blind policymakers, who often fell into the trap of believing their own propaganda. Jeff Michaels developed the idea of a “discourse trap” whereby the politically comfortable and approved language used to describe campaigns led policymakers to miss significant developments. By refusing to acknowledge that early terror attacks in Iraq could be the responsibility of anybody other than former members of the regime, for example, they missed the alienation of moderate Sunnis and the growth of Shia radicalism.51

Attempts to persuade individuals to see the world in a different light and change their views were difficult enough and required insight into their distinctive backgrounds, characters, and concerns. It was far harder to do this for a whole category of people from an unfamiliar culture with significant internal currents and differences that would be barely perceptible to outsiders. It was important when conducting military operations to understand that their effects went well beyond the kinetic to influencing the way that those caught up in conflicts understood their likely course and what was at stake. This affected the way that allegiances and sympathies might be broken and put together. Understanding this could help avoid egregious errors that might alienate important sections of the population. But because it was hard to measure and pin down effects on beliefs, it was not surprising that commanders trusted the surer results of firepower.52 If the challenge was to reshape political consciousness to produce an alignment of views with powerful foreigners, there were bound to be limits to what could be done by the military. Favorable images, let alone whole belief systems, could not be fired directly into the minds of the target audience as a form of precision weapon.

If there was a consolation, it was that the success of al-Qaeda was also exaggerated. Modern communications media undoubtedly created opportunities for the almost instantaneous transmission of dramatic and eloquent images, offering any modern-day Bakunin extraordinary scope for “propaganda of the deed.”53 The same factors, however, that worked against successful official “information operations” could also work against the militants—random violence, irrelevance to everyday concerns, and messages that grew tedious with repetition.54 As Ben Wilkinson observed in a study of radical Islamist groups, the real problem was not the lack of a simple message but the implausibility of the cause and effect relationships they had to postulate if they were to convince themselves and their supporters of eventual success. This led them astray, caught by “bad analogies, false assumptions, misinterpretations and fallacies,” overstating the role of human agency, with little room for the accidental and the unpredictable. All this made for a bad case of “narrative delusion.”55 Radical strategists might be at special risk of narrative delusion, because of the size of the gap between aspirations and means, but it is one to which all strategists are prone.
