Part III

How the Metaverse Will Revolutionize Everything

Chapter 12

When Will the Metaverse Arrive?

IN PART II, I OUTLINED WHAT’S REQUIRED TO realize the full vision of the Metaverse, as I’ve defined it. This first chapter of Part III takes up the inevitable question that follows—When will the Metaverse arrive?—and predicts what arrival will look like across a range of industries.

Even those pouring tens of billions per year into the “quasi-successor state” to the internet tend to disagree on the timing of the Metaverse’s emergence. Satya Nadella, Microsoft’s CEO, has said that the Metaverse is “already here,” with Microsoft’s founder Bill Gates forecasting that in “the next two or three years, I predict most virtual meetings will move from 2D camera image grids to the metaverse.”1 Facebook’s CEO Mark Zuckerberg has said a “lot of [it] will become mainstream in the next five to 10 years,”2 while Oculus’s former and now consulting CTO John Carmack usually predicts an even later emergence. Epic’s CEO Tim Sweeney and Nvidia’s CEO Jensen Huang tend to avoid a specific timeline, instead saying the Metaverse will emerge over the coming decades. Google’s CEO Sundar Pichai merely says that immersive computing is “the future.” Steven Ma, the Tencent senior vice president who runs most of the company’s gaming business and publicly introduced the company’s “hyper digital reality” vision in May 2021, cautions that while “the metaverse’s day will come[,] that day is just not today. . . . What we see today is indeed a leap from what we had just a few years ago. But it’s also still primitive [and] experimental.”3

To predict the future of the internet and computing, it helps to review their intertwined past. Ask yourself: When did the mobile internet era begin? Some of us might date this history from the very first mobile phones. Others might point to the commercial deployment of 2G, the first digital wireless network. Perhaps it really began with the introduction of the Wireless Application Protocol standard in 1999, which gave us WAP browsers and the ability to access a (rather primitive) version of most websites from nearly any “dumbphone.” Or maybe the mobile internet era started with the BlackBerry 6000, or 7000, or 8000 series? At least one of them was the first mainstream mobile device designed for on-the-go wireless data. Most people, however, would likely say that the answer is linked to the iPhone, which arrived almost a decade after WAP and the first BlackBerry, nearly two decades after 2G, and 34 years after the first mobile phone call. It has since defined many of the mobile internet era’s visual design principles, economics, and business practices.

In truth, however, there’s never a moment when a switch flips. We can identify when a specific technology was created, tested, or deployed, but not when an era precisely began, or ended. Transformation is an iterative process in which many different changes converge.

Consider, as a case study, the process of electrification, which began in the late 19th century and ran midway into the 20th century, and focused on the adoption and usage of electricity, skipping past the centuries-long effort to understand, capture, and transmit it. Electrification was not a single period of steady growth, nor a process through which any one product was adopted. Instead, it consisted of two separate waves of technological, industrial, and process-related transformation.

The first wave began around 1881, when Thomas Edison stood up electric power stations in Manhattan and London. Yet while Edison was quick to commercialize electricity—he had created the first working incandescent light bulb only two years earlier—demand for this resource was low. A quarter century after his first stations, an estimated 5% to 10% of mechanical drive power in the United States came from electricity (two-thirds of which was generated locally, rather than from a grid). But then, rather suddenly, the second wave began. Between 1910 and 1920, electricity’s share of mechanical drive power quintupled to over 50% (nearly two-thirds of which came from independent electric utilities). By 1929, it stood at 78%.4

The difference between the first and second waves was not what portion of American industry used electricity, but the extent to which that portion did—and designed around it.5

When factories first adopted electrical power, it was typically used for lighting and to replace the on-premises source of power (usually steam). Owners did not rethink or replace the legacy infrastructure that would carry this power throughout the factory and put it to work. Instead, they continued to use a lumbering network of cogs and gears that was messy, loud, and dangerous; difficult to upgrade or change; either “all on” or “all off” (and therefore required the same amount of power whether it supported a single operating station or the entire plant); riddled with “single points of failure”; and ill-suited to specialized work.

But eventually, new technologies and understandings gave owners both the reason and ability to redesign factories end-to-end for electricity, from replacing cogs with electric wires, to installing individual stations with bespoke and dedicated electrically powered motors for functions such as sewing, cutting, pressing, and welding.

The benefits were wide-ranging. The same factory now had considerably more space, more light, better air, and less life-threatening equipment. What’s more, individual stations could be powered individually (which increased safety, while reducing costs and downtime), and they could use more specialized equipment, such as electric socket wrenches.

Factory owners could configure production areas around the logic of the production process, rather than hulking equipment, and even reconfigure these areas on a regular basis. These two changes meant that far more industries could deploy assembly lines (which had first emerged in the late 1700s), while those that already had such lines could extend them further and more efficiently. In 1913, Henry Ford created the first moving assembly line, which used electricity and conveyor belts to reduce the production time per car from 12.5 hours to 93 minutes, while also using less power. According to the historian David Nye, Ford’s famous Highland Park plant was “built on the assumption that electrical light and power should be available everywhere.”6

Once a few factories began this transformation, the entire market was forced to catch up, thereby spurring more investment and innovation in electricity-based infrastructure, equipment, and processes. Within a year of its first moving assembly line, Ford was producing more cars than the rest of the industry combined. By its 10 millionth car, it had built more than half of all cars on the road.

The “second wave” of industrial electricity adoption didn’t depend on a single visionary making an evolutionary leap from Thomas Edison’s core work. Nor was it driven merely by an increasing number of industrial power stations. Instead, it reflected a critical mass of interconnected innovations, spanning power management, manufacturing hardware, production theory, and more. Some of these innovations fit in the palm of a plant manager’s hand, others needed a room, a few required a city, and all depended on people and processes. In aggregate, these innovations enabled what’s known as the “Roaring Twenties,” which saw the greatest average annual increases in labor and capital productivity in a hundred years, and propelled the Second Industrial Revolution.

An iPhone 12 in 2008?

Electrification can help us better understand the rise of mobile. The iPhone feels like a starting point for the mobile era because it united or distilled all of the things we now think of as “the mobile internet”—touch screens, app stores, high-speed data, instant messaging—into a single product that we could touch, hold in the palm of our hands, and use throughout each and every day. But the mobile internet was created—and driven—by so much more.

It wasn’t until the second iPhone, released in 2008, that the platform really began to take off, with sales increasing nearly 300% on a generational basis—a record that holds some 11 generations later. The second iPhone was the first to include 3G, which made the mobile web usable, and the App Store, which made wireless networks and smartphones useful.

Neither 3G nor the App Store was an Apple-only innovation. The iPhone accessed 3G networks via chips made by Infineon, which connected via standards led by groups such as the United Nations’ International Telecommunication Union and the wireless industry’s GSM Association. These standards were then deployed by carriers such as AT&T on towers built by companies such as Crown Castle and American Tower.

The iPhone had “an app for that” because millions of developers built them. These apps, in turn, were built on a wide variety of standards—from KDE to Java, HTML, and Unity—that were established and/or maintained by outside parties (some of whom competed with Apple in key areas). The App Store’s payments worked because of digital payments systems and rails established by the major banks. The iPhone also depended on countless other technologies, from a Samsung CPU (licensed in turn from ARM), to an accelerometer from STMicroelectronics, Gorilla Glass from Corning, and other components from companies including Broadcom, Wolfson, and National Semiconductor. All of these creations and contributions, collectively, enabled the iPhone. They also shaped its improvement path.

We can see this in the iPhone 12, which was released in 2020 and was the company’s first 5G device. Irrespective of Steve Jobs’s brilliance, there was no amount of money that Apple could have spent to release the iPhone 12 in 2008. Even if Apple could have devised a 5G network chip back then, there were no 5G networks for it to use, nor 5G wireless standards through which to communicate to these networks, and no apps that took advantage of its low latency or bandwidth. Were Apple able to make its own ARM-like GPU back in 2008 (more than a decade before ARM itself), game developers (who generate 70% of App Store revenues) would have lacked the game-engine technologies required to take advantage of its superpowered capabilities.

Getting to the iPhone 12 required ecosystem-wide innovation and investments, most of which sat outside Apple’s purview, even though Apple’s lucrative iOS platform was the core driver of these advancements. The business case for Verizon’s 4G networks and American Tower Corporation’s wireless tower buildouts depended on the consumer and business demand for faster and better wireless for apps such as Spotify, Netflix, and Snapchat. Without them, 4G’s “killer app” would have been . . . slightly faster email. Better GPUs, meanwhile, were utilized by better games, and better cameras were made relevant by photo-sharing services such as Instagram. Better hardware powered greater engagement, which drove greater growth and profits for these companies, thereby driving better products, apps, and services.

In Chapter 9, I touched on the ways in which changing consumer habits, rather than just evolving technological capability, enable improvements in both hardware and software. A decade after the iPhone launched, Apple felt confident that it could remove the physical home button and instead ask device owners to return to the home screen and manage multitasking through touch-based swipes from the bottom of the screen. This new design opened up additional space inside the iPhone for more sophisticated sensors and computing components, and helped Apple (and its developers) introduce more complex, software-based interaction models. Many video apps began to introduce gestures (for example, two fingers dragging up or down the screen) to increase or decrease volume, rather than requiring users to pause or littering the screen with unneeded buttons.

A Critical Mass of Working Pieces

With electrification and mobile in mind, we can confidently say only that the Metaverse will not suddenly arrive. There will be no clear “before Metaverse” and “after Metaverse”—only the ability to look back at a point in history when life was different. Some executives argue that we have already passed this threshold with the Metaverse. Their argument feels premature. Fewer than one in fourteen people today routinely engage with virtual worlds—and these worlds are almost exclusively games, have little (if any) meaningful interconnection, and hold only marginal influence over society at large.

But something is happening. There is a reason why even the executives who think the Metaverse remains far off in the future, such as Zuckerberg, Sweeney, and Huang, believe now is the time to publicly commit to making it (a virtual) reality. As Sweeney has said, Epic Games has “had metaverse aspirations for a very, very long time. It started with text chat in realtime [sic] 3D with 300-polygon strangers. But only in recent years have a critical mass of working pieces started coming together rapidly.”

These pieces include the proliferation of affordable mobile computers with high-resolution touch displays that are only a few inches away from two-thirds of everyone on earth over the age of 12. What’s more, these devices are equipped with CPUs and GPUs capable of powering and rendering complex real-time environments with dozens of concurrent users, each one steering their own avatar and capable of a wide range of actions. This functionality is furthered by 4G mobile chipsets and wireless networks that enable users to access these environments from wherever they are. The advent of programmable blockchains, meanwhile, has offered both the hope and the mechanisms of harnessing the combined might and resources of every person and computer on earth to build not just the Metaverse, but a decentralized and healthy one.

Another piece is “cross-platform gaming,” which has enabled users to play with one another even if they use different operating systems (referred to as “cross-play”), buy virtual goods and currencies through any platform and then use them on another (cross-purchase), and carry their save data and in-game history across platforms (cross-progression). These sorts of experiences have been technically possible for nearly two decades but were only enabled by the major gaming platforms (most notably, PlayStation) in 2018.

Cross-platform was essential in three ways. First, the very notion of a virtual persistent simulation that exists in the cloud is at odds with device-specific limitations. If the operating system you’re using alters what you can see or do in “the Metaverse” and perhaps blocks you from visiting it altogether, there can be no “Metaverse” nor parallel plane of existence—instead, only software running on your device that lets you peer into one of several virtual realities. Second, the ability to use any device and interact with any other user led to a surge in engagement—just imagine how much less you might use Facebook if you had a different account with different friends and different photos on your PC versus on your iPhone, and if you could only message those who were using the same device as you. If the digital era has been defined by network effects and Metcalfe’s Law, then the enablement of cross-platform play instantly made these virtual worlds more valuable by joining together their forked networks. Third, this increased engagement had a disproportionate impact on those building virtual worlds. Almost all of the costs to build a game, avatar, or item on Roblox, for example, are up-front and fixed. As a result, any increase in player spending dramatically increased an independent developer’s profits, and thus their ability to reinvest in better or more games, avatars, and items.
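The network-effect arithmetic behind this second point is easy to illustrate. Under Metcalfe’s Law, a network’s value grows roughly with the square of its user count, so joining two forked networks is worth more than the sum of the parts. A toy calculation (the quadratic value function and the player counts are illustrative assumptions, not real figures):

```python
def metcalfe_value(users: int) -> int:
    # Metcalfe's Law: network value scales roughly with the square of users
    return users ** 2

# Two forked player networks, e.g. one per platform (hypothetical numbers)
platform_a_players = 60_000_000
platform_b_players = 40_000_000

# Value while the networks are walled off from each other
separate = metcalfe_value(platform_a_players) + metcalfe_value(platform_b_players)

# Value once cross-play joins them into a single network
merged = metcalfe_value(platform_a_players + platform_b_players)

print(merged > separate)   # True: the joined network is worth more
print(merged / separate)   # ~1.92x in this toy example
```

The exact exponent is debated, but any superlinear value function yields the same qualitative conclusion: enabling cross-play made each virtual world more valuable than its platform-specific forks combined.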

We can also observe cultural changes. From its launch in 2017 through to the end of 2021, Fortnite generated an estimated $20 billion in revenue, the majority of which was from sales of digital avatars, backpacks, and dances (also known as “emotes”). Fortnite made Epic Games one of the largest sellers of fashion in the world, outgrossing giants such as Dolce & Gabbana, Prada, and Balenciaga by multiples, while also revealing that even “shooter” games were no longer just “games.” The rise of NFTs throughout 2021, meanwhile, started to normalize the idea that purely virtual objects could be worth millions of dollars or more.

Relatedly, we should consider the ongoing destigmatization of time spent in virtual worlds, as well as the ways in which the COVID-19 pandemic accelerated this process. For decades, “gamers” have been making “fake” avatars and spending their free time in digital worlds while pursuing non-game-like objectives such as designing a room in Second Life, rather than killing a terrorist in Counter-Strike. A huge portion of society viewed such efforts as weird or wasteful or antisocial (if not worse). Some saw virtual worlds as the modern version of an adult man building a train set alone in his basement. Virtual weddings and funerals, which have been regular occurrences since the 1990s, were thought of as utterly absurd by most people—more of a punchline than something rather poignant.

It’s difficult to imagine what could have more rapidly changed our perceptions of virtual worlds than time spent at home during the various COVID-19 lockdowns of 2020 and 2021. Millions of skeptics have now participated in (and enjoyed) virtual worlds and activities such as Animal Crossing, Fortnite, and Roblox as they sought out things to do, attended events once planned for the real world, or tried to spend time with their kids indoors. Not only have these experiences helped to destigmatize virtual life for society at large, they may even lead to another (older) generation participating in the Metaverse.*

The compounding impact of two years inside was profound. At the simplest level, the developers of virtual worlds benefited from more revenues, which in turn led to more investment and better products, thereby attracting more users and usage, thus more revenues, and so on. But as virtual worlds were destigmatized and it became clear that everyone was a gamer, rather than just 13- to 34-year-old single men, the world’s largest brands began to flock to the space and in doing so, further legitimize and diversify it. By the end of 2021, automotive giants (Ford), physical fitness brands (Nike), nonprofits (Reporters Sans Frontières), musicians (Justin Bieber), sports stars (Neymar Jr.), auction houses (Christie’s), fashion houses (Louis Vuitton), and franchises (Marvel) had all made the Metaverse a key part of their business—if not the center of their growth strategy.

The Next Drivers of Growth

What are the next “critical pieces” that might lead “Metaverse revenues” or “Metaverse adoption” to surge? One answer might be regulatory action against companies such as Apple and Google that forces them to unbundle their operating systems, software stores, payment solutions, and related services, and in doing so, compete individually in each area. Another popular answer is that we are waiting on an AR or VR headset that, like the iPhone, opens up the device category to hundreds of millions of consumers and many thousands of developers. Still more answers include blockchain-based decentralized computing, low-latency cloud computing, and the establishment of a common and widely adopted standard for 3D objects. Time will eventually reveal the truth, but for the foreseeable future, we can bet on three major drivers.

First, each of the underlying technologies required for the Metaverse is improving on an annual basis. Internet service is becoming more widely available, faster, and lower-latency. Computing power, too, is more widely deployed, more capable, and less costly. Game engines and integrated virtual world platforms are becoming easier to use, cheaper to build on, and more capable. The long process of standardization and interoperability is under way, driven in part by the success of integrated virtual world platforms and the crypto movement, but also by economic incentives. Payments, too, are slowly opening up through a mixture of regulatory action, lawsuits, and blockchains. Remember, Sweeney’s “critical mass of working pieces” is not static, but constantly “coming together.”

The second driver is the ongoing march of generational change. At the start of this book, I discussed the relevance of the “iPad-native” generation to the rise of Roblox. This group grew up expecting the world to be interactive—to be affected by their touch and their choices—and now that they have spending power of their own, we can see how different their behaviors and preferences are from those of older generations. This is not new, of course. Depending on your own generational identity, you might have grown up sending postcards, spending hours each day after school talking on the phone, using instant messaging apps, or posting photographs on an online social network. The trajectory is clear. We know Generation Y games more than Gen X, Gen Z more than Y, and Gen Alpha more than Z. More than 75% of American children game on a single platform, Roblox. In other words, nearly everyone born today is a gamer—which means 140 million new gamers are born globally each year.

The third driver is a result of how the first and second come together. Ultimately, the Metaverse will be ushered in through experiences. Smartphones, GPUs, and 4G didn’t magically produce dynamic, real-time rendered virtual worlds—they needed developers and their imaginations. Note, too, that as the generation of “iPad-natives” ages, more people within it will shift from being consumers of or amateur hobbyists in virtual worlds to professional developers and business leaders in their own right.

* I see a number of similarities here to online groceries. Millions of consumers have known about online grocery service for years but refused to try it, even if they regularly bought clothes or toilet paper online. These holdouts simply believed that if someone else picked their groceries, they’d arrive spoiled, damaged, or in some indescribable way, just be “wrong.” And there was no amount of marketing or endorsements to overcome this hesitancy. But the COVID-19 pandemic prompted many people to use grocery delivery for the first time, leading to the realization that online groceries are fine and the process is not just easy but nice. Some will go back to buying in person, but not all, nor all of the time.