Chapter 3

A Definition (Finally)

WITH THE IMPORTANT PRELIMINARIES BEHIND US, we can begin to talk concretely about what the Metaverse is. While there are competing definitions and a great deal of confusion, I believe it is possible to offer a clear, comprehensive, and useful definition of the term, even at this early point in the history of the Metaverse.

Here, then, is what I mean when I write and speak about the Metaverse: “A massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

This chapter unpacks each element of this definition and in doing so explains not just the Metaverse, but how the Metaverse differs from today’s internet, what will be needed to realize it, and when it might be achieved.

Virtual Worlds

If there’s any aspect of the Metaverse on which everyone—from believers to skeptics and even those barely familiar with the term—can agree, it’s that it is based on virtual worlds. For decades, the primary reason to build a virtual world was for a video game, such as The Legend of Zelda or Call of Duty, or as part of a feature film, such as those of Disney’s Pixar or Warner Bros.’ The Matrix. This is why the Metaverse is often misdescribed as a game or entertainment experience.

Virtual worlds refer to any computer-generated simulated environment. These environments can be in immersive 3D, 3D, 2.5D (also known as isometric 3D), 2D, layered atop the “real world” via augmented reality, or purely text-based, as in the game-like MUDs of the late 1970s and the non-game-like MUSHes that followed. These worlds can have no individual user—as in the case of a Pixar film, or when virtually simulating an ecosphere for a biology class. In other cases, they might be limited to a single user, as when playing The Legend of Zelda, or be shared with many others, as in Call of Duty. These users might affect and be affected by this virtual world through any number of devices, such as a keyboard, motion sensor, or even a camera that tracks their motion.

Stylistically, virtual worlds can reproduce the “real world” exactly (these are often called a “digital twin”) or represent a fictionalized version of it (such as Super Mario Odyssey’s New Donk City, or the quarter-scale Manhattan of PlayStation’s 2018 game Marvel’s Spider-Man), or represent an altogether fictional reality in which the impossible is commonplace. The purpose of a virtual world can be “game-like,” which is to say there is an objective such as winning, killing, scoring, defeating, or solving, or the purpose can be “non-game-like” with objectives such as educational or vocational training, commerce, socializing, meditation, fitness, and more.

Perhaps surprisingly, most of the growth and popularity in virtual worlds over the past decade has been in those which either lack or downplay game-like objectives. Consider the best-selling game made exclusively for the Nintendo Switch platform. You might guess that I’m referring to 2017’s The Legend of Zelda: Breath of the Wild or Super Mario Odyssey, both of which are frequently thought of as among the greatest games ever made and part of the most popular video game franchises in history. But neither title wears the crown. Instead, the victor is Animal Crossing: New Horizons, which comes from a celebrated and popular franchise, has been available for purchase less than a third as long as the other two Nintendo titles, yet outsold them by nearly 40%. While Animal Crossing: New Horizons is nominally a game, its actual gameplay has often been likened to a virtual form of gardening. There are no explicit goals, least of all something to win. Instead, users gather and craft items on a tropical island, foster a community of anthropomorphic animals, and trade decorative wares and creations with other players.

In recent years, the biggest uptick in virtual world creation has been via worlds which have no “gameplay” whatsoever. For example, a digital twin of the Hong Kong International Airport was created using the popular game engine Unity—the purpose of the twin was to simulate the flow of passengers, the implications of maintenance issues or runway backups, and other events that would impact airport design choices and operational decision-making. In other cases, entire cities have been re-created and then connected to real-time data feeds for vehicular traffic, weather, and other civic services, such as police, fire, and ambulance response. The goal of such a digital twin is to enable city planners to better understand the cities they manage and make more informed decisions about zoning, construction approvals, and more. For example, how would a new commercial mall affect travel times for emergency medical or police services? How might a specific building design adversely affect wind conditions, urban temperatures, or downtown light? Virtual worlds can prove an essential aid.

Virtual worlds can have a single creator or many different creators; these creators can be professional or amateur, for-profit or not-for-profit. However, their popularity has surged as the cost, difficulty, and time required to create them has plummeted, in turn leading to increased numbers of virtual worlds and greater diversity among and within them. Adopt Me!, a Roblox-based experience, was developed by only two independent and otherwise inexperienced people in the summer of 2017. Four years later, the game had nearly 2 million players at a single time (The Legend of Zelda: Breath of the Wild has sold roughly 25 million copies in its lifetime), and by the end of 2021, it had been played more than 30 billion times.

Some virtual worlds are fully persistent, which means everything that happens inside them is permanent. In other cases, the experience is reset for each player. More often, a virtual world operates somewhere in the middle. Consider the famous 2D sidescrolling game Super Mario Bros., released in 1985 for the Nintendo Entertainment System. The first level lasts no longer than 400 seconds. If the player dies before then, they might have an extra life which enables them to retry it, but the level’s virtual world will have been fully reset as though the player had never been there before—that is, all enemies that were killed are returned to life, and all items restored. However, Super Mario Bros. also allows some items to persist. A player who dies in level 3–4 retains the coins collected during prior levels, as well as their progress in the game—until they run out of all their lives, after which all data is reset.
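This mix of resetting and persisting state can be sketched as a toy model. The class and its fields below are my own illustration of the pattern, not anything from an actual Nintendo codebase:

```python
class MarioSession:
    """Toy model of Super Mario Bros.-style partial persistence."""

    def __init__(self):
        self.lives = 3                 # persists across deaths
        self.coins = 0                 # persists across deaths
        self.world = "1-1"             # progress persists across deaths
        self.enemies_defeated = set()  # level state: wiped on every death

    def collect_coin(self):
        self.coins += 1

    def defeat_enemy(self, enemy_id):
        self.enemies_defeated.add(enemy_id)

    def die(self):
        self.lives -= 1
        self.enemies_defeated.clear()  # the level fully resets...
        if self.lives == 0:
            self.__init__()            # ...and on game over, so does everything
```

After a death, the level's enemies are "returned to life" (the set is cleared) while coins and world progress survive; only running out of lives wipes the whole session, mirroring the behavior described above.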

Some virtual worlds are limited to a specific device or platform. Examples here include Legend of Zelda: Breath of the Wild, Super Mario Odyssey, and Animal Crossing: New Horizons, which are available exclusively on Nintendo’s Switch. Others operate on several platforms, such as Nintendo’s mobile games, which run on most Android and iOS devices, but not the Nintendo Switch or any other consoles. Some titles are considered fully cross-platform. In 2019 and 2020, Fortnite was available on all of the major gaming consoles (e.g., Nintendo’s Switch, Microsoft’s Xbox One, Sony’s PlayStation 4), PC devices (i.e., those running Windows or Mac OS), as well as the top mobile platforms (iOS and Android).* This meant that a single player could access the title, their account, and their owned goods (for instance, a virtual backpack or outfit) from nearly any device. In other cases, titles are nominally available on multiple platforms, but the experiences are disconnected. Call of Duty Mobile and the PC/console-only Call of Duty Warzone share select account information and are both battle royale games with similar maps and mechanics, but are otherwise different games and players in one virtual world cannot play against players in the other.

As with the real world, the governance models of virtual worlds vary greatly. Most are centrally controlled by the person or group that developed and operates the world, which means they have unilateral control over its economy, policies, and users. In other instances, users self-govern through various forms of democracy. Some blockchain-based games aspire to operate as close to autonomously as possible after launch.

3D

Although virtual worlds come in many dimensions, “3D” is a critical specification for the Metaverse. Without 3D, we might as well be describing the current internet. Message boards, chat services, website builders, image platforms, and interconnected networks of content have been around and popular for decades, after all.

3D is necessary not just because it signals something new. Metaverse theorists argue that 3D environments are required in order to make possible the transition of human culture and labor from the physical world to the digital one. For example, Mark Zuckerberg has claimed that 3D is an inherently more intuitive interaction model for humans than 2D websites, apps, and video calls—especially in social use cases. Certainly, humans did not evolve over hundreds of thousands of years to use a flat touchscreen.

We must also consider the nature of online communities and experiences over the last several decades. In the 1980s and early 1990s, the internet was mostly text-based. An online user represented their identity via username or email address, and a written profile, and expressed themselves via chat rooms and message boards. In the late 1990s and early 2000s, PCs became capable of storing larger files, while internet speeds made it practical to upload and download them. Accordingly, most internet users began to represent themselves online through display/profile pictures, as well as personal websites that included a handful of low-resolution images and sometimes even audio clips. Eventually, this led to the emergence of the first mainstream social networks, such as MySpace and Facebook. By the late 2000s and early 2010s, altogether new forms of online socializing began to emerge. Gone were the days of infrequently updated personal blogs or Facebook pages comprising a single cover photo and a string of old, text-only status updates. Instead, users expressed themselves through a near-constant stream of high-resolution photos and even videos—many of which were taken on the go and for no purpose other than to share what they were doing, eating, or thinking at a given moment. Again, this was led by altogether new social media networks such as YouTube, Instagram, Snapchat, and TikTok.

This history provides a few lessons. First, humans seek out digital models that most closely represent the world as they experience it—richly detailed, mixing audio and video, and with a sense of being “live” rather than static or outdated. Second, as our online experiences become more “real,” we place more of our real lives online, live more of our lives online, and human culture overall becomes more affected by the online world. Third, the leading indicator for this change is typically new social apps, which, more often than not, are first embraced by younger generations. Collectively, these lessons seem to support the notion that the next great step for the internet is 3D.

If this is indeed the case, we can imagine how a “3D internet” might finally disrupt industries that have largely resisted digital disruption. For decades, futurists have predicted that education, most notably post-secondary education and vocational training, would be partly displaced by remote, online schooling. Instead, the cost of traditional, in-person education has continued to rise (and at multiples of the average rate of inflation), while applications to colleges and universities continue to surge—even though the experience remains mostly unchanged. None of the most prestigious schools in the world have even tried to launch remote education programs that aspire to match the quality or imprimatur of their in-person equivalent, in part because employers seem unlikely to recognize them as such. And for millions of parents worldwide, the COVID-19 pandemic was a lesson on the inadequacy of children learning alone via 2D touchscreen. Many imagine that the improvements to 3D virtual worlds and simulations, as well as VR and AR headsets, will fundamentally reshape our pedagogical practices. Students from around the world will be able to strap into a virtual classroom, sit alongside their peers while making eye contact with their teacher, then shrink down to blood cells which travel through a human circulatory system, after which these previously 15-micrometer-tall students re-enlarge and dissect a virtual cat.

It is important to emphasize that while the Metaverse should be understood as a 3D experience, this does not mean that everything inside the Metaverse will be in 3D. Many people will play 2D games inside the Metaverse, or use the Metaverse to access software and applications that they then experience using mobile-era devices and interfaces. In addition, the advent of the 3D Metaverse does not mean that the entirety of the internet and computing at large will transition to 3D; the mobile internet era started more than a decade and a half ago, and yet many still use non-mobile devices and networks. Moreover, data transmitted between two mobile devices is still primarily transmitted through wired (i.e., underground) internet infrastructure. And despite the proliferation of the internet over the past 40 years, there are still offline networks and networks using proprietary protocols. However, it is 3D that enables so many new experiences to be built on the internet—and that creates the extraordinary technical challenges described next.

I should also note that no part of the Metaverse requires an immersive virtual reality (VR) headset. These may eventually be the most popular way to experience the Metaverse, but immersive virtual reality is just one way to access it. Arguing that immersive VR is a requirement for the Metaverse is similar to arguing that the mobile internet can only be accessed via apps, thereby excluding mobile browsers. In truth, we don’t even need a screen to access mobile data networks and mobile content, as is often the case with vehicular tracking devices, select headphones, and countless machine-to-machine and internet of things (IoT) devices and sensors. (The Metaverse won’t require screens either, by the way—more on this in Chapter 9.)

Real-Time Rendered

Rendering is the process of generating a 2D or 3D object or environment using a computer program. The goal of this program is to “solve” an equation made up of many different inputs, data, and rules that determine what should be rendered (that is, visualized) and when, using various computing resources, such as a graphics processing unit (GPU) and central processing unit (CPU). As is the case with any math problem, an increase in the resources available to solve it (in this case, time, the number of CPUs/GPUs, and processing power) means that more complex equations can be tackled, and more detail provided in the solution.

Take the 2013 film Monsters University. Even when using an industrial-grade computing processor, it would have taken an average of 29 hours for each of the film’s 120,000-plus frames to be rendered. In total, that would have meant more than two years just to render the entire movie once, assuming not a single render was ever replaced or scene changed. With this challenge in mind, Pixar built a data center of 2,000 conjoined industrial-grade computers with a combined 24,000 cores that, when fully assigned, could render a frame in roughly seven seconds.1 Most companies, of course, can’t afford such a powerful supercomputer and therefore spend more time waiting. Many architectural and design firms, for example, need to wait overnight to render a highly detailed model.

Prioritizing visual fidelity is sensible if you’re creating a Hollywood blockbuster that will be shown on an IMAX screen, or when you’re selling a multi-million-dollar building renovation. However, experiences set in virtual worlds require real-time rendering. Without real-time rendering, the size and visuals of virtual worlds would be severely constrained, as would the number of participating users and the options available to each user. Why? Because experiencing an immersive environment through pre-rendered images requires every possible sequence to have been pre-made—just as a choose-your-own-adventure novel can offer only a handful of choices, rather than infinite ones. In other words, the cost of greater visual fidelity is less functionality and agency.

Compare, for example, navigating the Roman Colosseum in a video game versus doing the same on Google Street View. Both provide 360-degree views and multiple dimensions of movement (look up or down, move left or right, backward or forward), but the former severely limits one’s choices—and if you decide to look closely at a given stone, all you can do is zoom into an image not designed for such scrutiny. It will be blurry, and the view angle is fixed.

Although real-time rendering enables a virtual world to be “alive” and respond to input from a user (or a group of users, for that matter), it means that a minimum of 30, and ideally 120, frames must be rendered each second. This constraint necessarily affects which hardware is used, how much of it, and for how many cycles, and thereby the complexity of what’s rendered. As you might expect, immersive 3D requires far more intensive computing power than 2D. And just as the average architectural firm cannot contend with the supercomputers built by a Disney subsidiary, the average user can’t afford the GPUs or CPUs used by a corporation.
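The arithmetic behind this constraint is simple but unforgiving: the renderer’s entire budget for simulating and drawing a frame is the reciprocal of the target frame rate. A quick sketch:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to produce one frame at a given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):5.2f} ms per frame")

# For contrast: a pre-rendered film frame taking ~7 seconds on a render
# farm is hundreds of times over the real-time budget.
print(f"~7 s/frame is {7000 / frame_budget_ms(120):.0f}x over a 120 fps budget")
```

At 30 fps a frame has roughly 33 ms to be produced; at 120 fps, barely 8 ms, which is why real-time worlds must trade visual fidelity for responsiveness.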

Interoperable Network

Central to most visions of the Metaverse is the user’s ability to take her virtual “content,” such as an avatar or a backpack, from one virtual world to another, where it might also be changed, sold, or remixed with other goods. For example, if I buy an outfit in Minecraft I might then wear it in Roblox, or perhaps a hat I purchased in Minecraft would be paired with a sweater I won in Roblox while attending a virtual sporting match developed and operated by FIFA. And if attendees of the match received an exclusive item at this event, they could take it with them from that environment into others, and even sell it on third-party platforms as though it were an original 1969 Woodstock T-shirt.

In addition, the Metaverse should make it so that wherever a user goes or whatever they choose to do, their achievements, history, and even finances are recognized across multitudes of virtual worlds, as well as the real one. The closest analogues are the international passport system, local-market credit scores, and national identification systems (such as Social Security numbers).

To realize this vision, virtual worlds must first be “interoperable,” a term that refers to the ability for computer systems or software to exchange and make use of information sent from one another.

The most significant example of interoperability is the internet, which enables countless independent, heterogeneous, and autonomous networks to safely, reliably, and comprehensibly exchange information globally. All of this is made possible by the adoption of the Internet Protocol Suite (TCP/IP), a set of communications protocols that tell disparate networks how data should be packetized, addressed, transmitted, routed, and received. This suite is managed by the Internet Engineering Task Force (IETF), a nonprofit open standards group established in 1986 under the US federal government (it has since become a fully independent and global body).

The establishment of TCP/IP did not alone produce the globally interoperable internet as we know it today. We say “the internet,” rather than “an internet,” and have no practical alternative to it, because nearly every computer network globally, from small-to-medium businesses and broadband providers to device manufacturers and software companies, voluntarily embraced the Internet Protocol Suite.

In addition, new working bodies were established to ensure that, no matter how large and decentralized the internet and World Wide Web might become, it would continue to interoperate. These bodies managed the assignment and expansion of top-level hierarchical web domains (.com, .org, .edu), as well as IP addresses, which distinctly identify individual devices on the internet, the Uniform Resource Locator (or URL), which specifies the location of a given resource on a computer network, and HTML.

Also important was the establishment of common standards for files on the internet (e.g., JPEG for digital images and MP3 for digital audio), common systems for presenting information on the internet that were built on linkages between different websites, webpages, and web content (such as HTML), and browser engines that could render this information (such as Apple’s WebKit). In most cases, several competing standards were established, but technical solutions emerged to convert from one to another (for example, a JPEG to a PNG). Due to the openness of the early web, most of these alternatives were open sourced and sought the broadest possible compatibility. Today, a photo taken on an iPhone can easily be uploaded to Facebook, then downloaded from Facebook to Google Drive, then posted to an Amazon review.

The internet demonstrates the scope of the systems, technical standards, and conventions required to establish, maintain, and scale interoperability across heterogeneous applications, networks, devices, operating systems, languages, domains, countries, and more. Yet far more still will be needed to realize visions of an interoperable network of virtual worlds.

Almost all the most popular virtual worlds today use their own different rendering engines (many publishers operate several across their titles), save their objects, textures, and player data into entirely different file formats and with only the information that they expect to need, and have no systems through which to even try to share data with other virtual worlds. As a result, existing virtual worlds have no clear way to find and recognize one another, nor do they have a common language in which they can communicate, let alone coherently, securely, and comprehensively.

This isolation and fragmentation stems from the fact that today’s virtual worlds, and their builders, never designed their systems or experiences to be interoperable. Instead, they were intended to be closed experiences with controlled economies—and optimized accordingly.

There is no obvious or fast way to establish standards and solutions. Consider, for example, the idea of an “interoperable avatar.” It’s relatively easy for developers to agree on the definition of an image and how to present it, and as a static 2D unit of content made up of individually colored pixels, the process of converting one image filetype (say, PNG) to another (JPEG) is straightforward. However, 3D avatars are a more complex question. Is an avatar a complete 3D person with an outfit, or is it made up of a body avatar plus an outfit? If the latter, how many articles of clothing are they wearing and what defines a shirt versus a jacket that goes over shirts? Which parts of an avatar can be recolored? Which parts must be recolored together (is a sleeve separated from a shirt)? Is an avatar’s head a complete object, or is it a description of dozens of sub-elements like individual eyes (with their own retinas), eyelashes, noses, freckles, and so on? In addition, users expect an anthropomorphic jellyfish avatar and a box-like android to move in different ways. The same applies to objects. If a tattoo is placed on the avatar’s neck, it should stay fixed to their skin irrespective of any movement they make. A tie hung around that neck, however, should move with (and also interact with) the avatar as it moves. And it should move differently than a seashell necklace, which should also move differently than a feather necklace. Just sharing the dimensions and visual detail of an avatar is not sufficient. Developers need to understand, and agree on, how they work.
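To make the problem concrete, here is a deliberately simplified sketch of the kind of information a shared avatar schema would have to encode. Every name in it is a hypothetical illustration, not an existing standard; the point is that two items anchored to the same body part still need entirely different behavioral semantics:

```python
from dataclasses import dataclass, field
from enum import Enum

class AttachmentBehavior(Enum):
    """How an attached item responds to the avatar's movement."""
    FIXED_TO_SKIN = "fixed"   # e.g., a tattoo: never moves relative to skin
    CLOTH_SIM = "cloth"       # e.g., a tie: simulated, collides with the body
    RIGID_DANGLE = "rigid"    # e.g., a seashell necklace: swings as one piece

@dataclass
class Attachment:
    name: str
    anchor: str                  # named body part, e.g. "neck"
    behavior: AttachmentBehavior
    recolorable: bool = False    # schemas must also agree on recoloring rules

@dataclass
class Avatar:
    body_mesh: str               # reference to the avatar's geometry
    skeleton: str                # the rig that drives its animation
    attachments: list[Attachment] = field(default_factory=list)

# Two items on the same anchor, with very different expected behavior:
tattoo = Attachment("neck_tattoo", anchor="neck",
                    behavior=AttachmentBehavior.FIXED_TO_SKIN)
tie = Attachment("silk_tie", anchor="neck",
                 behavior=AttachmentBehavior.CLOTH_SIM, recolorable=True)
```

Even this toy schema leaves open the hard questions raised above, such as how a jellyfish skeleton maps onto an android’s, which is precisely why agreeing on it is so difficult.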

Even if new standards are agreed upon and improved, developers will need code that can properly interpret, modify, and approve third-party virtual goods. If Call of Duty wants to import an avatar from Fortnite, it will likely want to restyle the avatar to fit Call of Duty’s gritty realism. To this end, it might want to reject those that cannot make sense in its virtual world, such as Fortnite’s famous Peely skin, a giant anthropomorphic banana (which probably can’t fit inside Call of Duty’s cars or doorframes).

Other problems need to be resolved, too. If a user purchases a virtual good in one virtual world, but then uses it in many others, where is their ownership record managed and how is this record updated? How does another virtual world request this good on behalf of its supposed owner and then validate that the user owns it? How is monetization managed? Not only are unchangeable images and audio files simpler than 3D goods, but we can send copies of them between computers and networks and, critically, do not need to control how they’re used thereafter or who has the right to use them.
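One way to picture the problem is a shared entitlement registry that many virtual worlds query. The sketch below is a toy, centralized version built on assumptions of my own (real proposals range from platform-run databases to blockchains); it only illustrates the questions any shared record must answer:

```python
class EntitlementRegistry:
    """Toy, centralized ownership ledger consulted by many virtual worlds."""

    def __init__(self):
        self._owners = {}  # item_id -> user_id

    def grant(self, item_id, user_id):
        """Record an item as owned, e.g. when first purchased or awarded."""
        self._owners[item_id] = user_id

    def verify(self, item_id, user_id) -> bool:
        """A virtual world asks: does this user really own this item?"""
        return self._owners.get(item_id) == user_id

    def transfer(self, item_id, seller, buyer) -> bool:
        """A resale succeeds only if the seller's claim checks out."""
        if not self.verify(item_id, seller):
            return False
        self._owners[item_id] = buyer
        return True
```

Even this minimal sketch exposes the governance questions: who operates the registry, who may call `grant`, and what stops a world from ignoring `verify` altogether.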

And the above just concerns virtual objects. There are additional and largely unique challenges involved in interoperable identities, digital communications, and especially payments.

What’s more, we need the standards that are selected to be highly efficient. Consider, as an example, the GIF format. Though it’s popular, it’s awful technically. GIF images are typically very heavy (that is, their file size is relatively large), even though the source video has been compressed to the point that many individual frames are discarded and the remaining frames lose much of their visual detail. The MP4 format, conversely, is typically five to ten times lighter and provides far greater video clarity and detail. The comparatively widespread use of GIF has therefore led to extra bandwidth use, more time waiting for files to load, and worse experiences overall. This may not seem like a terrible outcome, but as I’ll discuss later in this book, the computational, network, and hardware demands of the Metaverse will be unprecedented. And 3D virtual objects are far heavier, and likely more important, than an image file. Which formats are selected will thus have a profound impact on what is possible, on which devices, and when.
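A back-of-envelope comparison shows why the format choice matters. All of the parameters below are illustrative assumptions rather than measurements, but they land in the five-to-ten-times range described above:

```python
# Back-of-envelope comparison of a short clip stored as a GIF versus an MP4.
# Every parameter here is an illustrative assumption, not a measurement.

width, height = 480, 270      # pixels
duration_s = 3.0

# GIF: palette-indexed frames (1 byte per pixel before compression) at a
# reduced frame rate, with LZW assumed to roughly halve the data.
gif_fps = 15
gif_lzw_ratio = 0.5
gif_bytes = width * height * gif_fps * duration_s * gif_lzw_ratio

# MP4 (H.264): exploits redundancy between frames, so it is described by a
# bitrate; ~1 Mbps is a plausible figure for a clip of this size.
mp4_bitrate_bps = 1_000_000
mp4_bytes = mp4_bitrate_bps * duration_s / 8

print(f"GIF  ~ {gif_bytes / 1e6:.1f} MB")
print(f"MP4  ~ {mp4_bytes / 1e6:.2f} MB")
print(f"GIF is roughly {gif_bytes / mp4_bytes:.0f}x heavier")
```

Under these assumptions the GIF weighs several megabytes against a fraction of a megabyte for the MP4, despite the GIF dropping frames and color depth, because the GIF format cannot exploit the redundancy between consecutive frames.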

The process of standardization is complicated, messy, and long. It is really a business and human problem masquerading as a technological one. Standards, unlike the laws of physics, are established through consensus, not discovery. Forming consensus often requires concessions that leave no party happy, which can then result in “forks” as different factions break off. Still, the process is never over. New standards are constantly emerging, with old ones updated and sometimes deprecated (we are slowly moving away from GIF). That the 3D standardization process is beginning decades after virtual worlds first emerged, and with trillions of dollars at stake, will make this even harder.

Pointing to these challenges, some argue that it is unlikely that “the Metaverse” will ever happen. Instead, there will be many competing networks of virtual worlds. But this is not an unfamiliar position. From the 1970s through to the early 1990s, there was also constant debate as to whether a common internetworking standard would be established (this period is known as the “Protocol Wars”). Most expected the world and its networks would be fragmented across a handful of proprietary networking stacks that spoke only to select outside networks and only for specific purposes.

In hindsight, the value of a single integrated internet is obvious. Without it, 20% of the world economy would not be “digital” today (nor much of the remainder digitally powered). And while not every company has benefited from openness and interoperability, most businesses and users have. Accordingly, the driving force behind interoperability is unlikely to be a given visionary voice or newly introduced technology, but instead will be economics. And the means of leveraging economics to the greatest degree will be common standards that will enhance the Metaverse economy by attracting more users and more developers, which will lead to better experiences, which in turn will be cheaper to make and more profitable to operate, thereby driving greater investment. It isn’t necessary for all parties to embrace common standards, so long as economic gravity is allowed to do its work. Those who do will grow and those who don’t will face constraints.

This is why it is so important to understand how the interoperability standards of the Metaverse will be established. The leaders here will have extraordinary soft power over this next-generation internet as it takes shape. In many ways, they will decide the rules of physics, and when, how, and why they will be updated.

Massively Scaled

For “the internet” to be “the internet”, we generally accept that it has to have a seemingly infinite number of websites and pages. It can’t, for example, just be a handful of portals owned by a few developers. The Metaverse is similar. It must have a massively scaled number of virtual worlds if it is to be “the Metaverse.” Otherwise, it is more like a digital theme park—a destination with a handful of carefully curated attractions and experiences that can never be as diverse as, or contend with, the outside (real) world.

Unpacking the etymology of the term “Metaverse” is helpful here. Stephenson’s neologism comes from the Greek prefix “meta” and the stem “verse,” a backformation of the word “universe.” In English, “meta” roughly translates to “beyond” or “which transcends” the word that follows. For example, metadata is data that describes data, while metaphysics refers to a branch of philosophy “of being, identity and change, space and time, causality, necessity and possibility,” rather than the study of “matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force.”2 In combination, “meta” and “verse” describe a unifying layer that sits above and across all individual, computer-generated “universes,” as well as the real world; it contains countless virtual worlds just as the universe contains, by some estimates, 70 quintillion planets.

Furthermore, within the Metaverse, there might be “metagalaxies,” a collection of virtual worlds that all operate under a single authority and that are clearly connected by a visual layer. Under this definition, Roblox would be a metagalaxy, while Adopt Me! would be a virtual world. Why? Because Roblox is a network of millions of different virtual worlds, one of which is Adopt Me!, but Roblox does not contain all virtual worlds (which would make it the Metaverse). Notably, individual virtual worlds might themselves have specific sub-regions, just as networks on the internet have their own sub-networks, and the earth has continents, often comprising many nations, which can be further divided into states and provinces, each containing cities, counties, and so on.

One way to think about a Metagalaxy is to think of Facebook’s role in the internet. Facebook is obviously not the internet, but it is a collection of tightly integrated Facebook pages and profiles. In a simplified sense, it’s today’s version of a 2D Metagalaxy. Analogy also allows us to consider the likely extent of Metaverse interoperability. In today’s universe, not all goods can travel everywhere. We could bring a guitar to Venus, but it would be immediately crushed; we could technically bring an Ohio farm to the moon, but it would be impractical. On earth, most human-made objects can be brought to most human-made places, however, we have various social, economic, cultural, and safety limitations which can get in the way of such efforts.

Growth in the number of virtual worlds should drive increased usage of virtual worlds. Some leaders within the virtual worlds space, such as Tim Sweeney, believe that eventually every company will need to operate its own virtual worlds, both as standalone planets and as part of leading virtual world platforms such as Fortnite and Minecraft. This will happen, as Sweeney has put it, “just as every company a few decades ago created a webpage, and then at some point every company created a Facebook page.”

Persistence

Earlier, I discussed the idea of persistence in a virtual world. Almost no current games demonstrate full persistence. Instead, they run for a finite period before resetting part or all of their virtual worlds. Consider the hit games Fortnite and Free Fire. Throughout a match, players build or destroy various structures, set fire to forests, or kill wildlife, but after roughly 20 to 25 minutes, the map effectively “ends” and is discarded by Epic Games and Garena—never to be re-experienced by a player, even if they retain items won or unlocked during that match. In fact, even within a given match, the virtual world discards data, such as a bullet mark on an indestructible rock, which might “unload” after 30 seconds in order to reduce render complexity.

Not all virtual worlds reset like a Fortnite match. World of Warcraft, for example, runs continuously. However, it’s still wrong to say its virtual world fully persists. If a player enters a specific part of the World of Warcraft map, defeats their enemies, leaves, and returns, they’ll more often than not find that those enemies have respawned. An in-game tradesman who sold a player a rare item only a day earlier might offer them a second as though it would be their first. Only when a large update is made by the developer, in this case Activision Blizzard, might the virtual world change. The players cannot themselves determine whether the consequences of a given choice or event endure indefinitely. The only things that persist are the player’s memory and their record of having defeated an enemy or bought an item.

The challenge of persistence in virtual worlds can be a bit difficult to grasp because we don’t encounter this problem in the real world. If you cut down a physical tree, it is gone irrespective of whether you personally remember cutting it down, and no matter how many other trees and activities Mother Earth is tracking. With a virtual tree, your device and the server which manages it must actively decide whether to retain this information, render it, and share it with others. And if these computers choose to do so, there are additional questions of detail—is the tree just “gone,” or is it now felled on the ground? Should players see which side it was chopped from, or just that it was generically cut? And does it “biodegrade”? If so, how—generically, or in response to its local environment? The more information that persists, the greater the computational needs and the less memory and power are available for other activities.

The best example of the computation-persistence interplay comes via the game EVE Online. Though not nearly as famous as other “proto-Metaverses” from the early 2000s, such as Second Life, nor newer ones such as Roblox, EVE Online is a marvel. With the exception of the occasional downtime for troubleshooting and updates, EVE Online has operated continuously and persistently since launching in 2003. And unlike games like Fortnite, which fragments its tens of millions of players into 20- to 30-minute matches of 12 to 150 players, EVE Online places its hundreds of thousands of monthly users into a single, shared virtual world that spans nearly 8,000 star systems and nearly 70,000 planets.

Behind EVE Online’s extraordinary virtual world is an innovative systems architecture—but also (and mostly) brilliant creative design.

The virtual world of EVE Online is essentially just empty three-dimensional space with wallpaper backgrounds that look like a galaxy. Users cannot truly visit a planet, with activities such as mining more akin to setting up a wireless router than constructing a virtual rig. As such, the game’s persistence is mostly about managing a relatively modest set of entitlements (a player’s ships and resources, for example) and related locational data. This means less computational work for CCP Games’ servers, and for its users, whose devices need not render a changed world, just a few objects in it. Recall that complexity is the enemy of real-time rendering.

Furthermore, very little happens in EVE Online on a daily, quarterly, or even yearly basis. This is because the goal of EVE Online, to the extent one exists, is for various factions of players to conquer planets, systems, and galaxies. This is achieved primarily through the establishment of corporations, formation of alliances, and the strategic positioning of fleets. To this end, much of EVE Online actually takes place in the “real world,” via third-party messaging applications and email, and not even on CCP’s servers. Users have spent years planning attacks, going undercover with enemy guilds in order to later betray them, and creating enormous personal networks that trade resources and construct new ships. While large-scale battles do happen, they’re remarkably rare—and involve the destruction of assets in the virtual world (for example, ships) rather than the virtual world itself. The former is far easier for a processor to manage than the latter—just as throwing a garden plant in the garbage is easier than understanding how it’ll affect the garden’s ecosystem.

What makes EVE Online such an extraordinary example is how profoundly complex it is—both technically and sociologically—yet at the same time how limited compared to most visions of the Metaverse. In Stephenson’s Snow Crash, the Metaverse is an enormous, planet-sized, and richly detailed virtual world with a nearly infinite number of unique businesses, places to visit, activities to do, things to buy, and people to meet. Nearly everything and anything done by any user, at any time, can persist forever. This applies not just to the virtual world, but to the individual items in it. Our avatars and virtual sneakers would wear with use and forever reflect their damage. And per the principles of interoperability, these modifications would persist wherever we go.

The amount of data that must be read, written, synchronized (more on this below), and rendered to create and sustain this experience is not just unprecedented—it is far beyond anything possible today. However, the literal version of Stephenson’s Metaverse may not even be desirable. He imagined individuals waking up in the Metaverse inside their virtual homes, then walking or taking a train to a virtual bar. While skeuomorphism often has utility, “The Street” as a single unifying layer for everything in the virtual world likely does not. Most participants in the Metaverse would rather teleport from destination to destination.

Fortunately, it is far easier to manage the persistence of a user’s data (i.e., what they own and have done) across various worlds and over time than the persistence of every user’s most minute contributions to a planet-sized world. This model also more closely reflects the internet as it exists today—and probably our preferred interaction models, too. On the web, we often navigate directly to a webpage, such as a specific document in Google Docs or a video on YouTube. We don’t start at some sort of “internet homepage,” then click through to Google.com, then navigate to the appropriate product page, and so on.

Furthermore, the internet persists irrespective of any one site, platform, or top-level domain such as “.com.” Should one site, or even many sites, cease to exist, content might be lost but the internet, as a whole, would persist. Much of a user’s data, such as cookies or an IP address, not to mention the content they’ve created, can exist without a given website, browser, device, platform, or service. If a virtual world goes offline, resets, or shuts down, however, it is almost as though it never existed for the player. Even if it continues to operate, the moment a player stops playing within a world, the virtual goods they own, their history and achievements, and even parts of their social graph are likely lost. This is less of a problem when virtual worlds are “games,” but for human society to shift in a meaningful way into virtual spaces (i.e., for education, work, healthcare), what we do in these spaces must reliably endure, just as our grade school reports and baseball trophies do. To philosophers including John Locke, identity is better understood as continuity of memory. If so, then we can never have a virtual identity as long as everything we do and have done is forgotten.

Increasing persistence within individual virtual worlds will nevertheless be essential to the growth of the Metaverse. As I’ll discuss throughout the rest of this book, many of the design ideas that have become popular over the past five years are not new, but rather newly possible. As such, we may currently struggle to figure out why World of Warcraft might need to forever remember a user’s exact footprints in fresh snow, but odds are some designer will eventually figure out the answer and not long after, it will become a core feature of many games. Until then, the virtual worlds most in need of persistence are likely those based around virtual real estate, or tied to physical spaces. For example, we expect that “digital twins” should be frequently updated to reflect changes to their real-world counterpart, and that virtual-only real estate platforms would not “forget” about new art or décor added to a given room.

Synchronous

We don’t want virtual worlds in the Metaverse to merely persist or respond to us in real time. We also want them to be shared experiences.

For this to work, every participant in a virtual world must have an internet connection capable of transmitting large volumes of data in a given time (“high bandwidth”), as well as a low latency (“fast”) and continuous (sustained and uninterrupted) connection to a virtual world’s server (both to and from).

This might not seem like an outlandish requirement. After all, tens of millions of homes are probably streaming high-definition video at this moment, while much of the global economy ran via live and synchronous video conference software throughout the COVID-19 pandemic. And broadband providers continue to boast about—and deliver—improvements in bandwidth and latency, with internet outages becoming less common each day.

However, synchronous online experiences are perhaps the greatest constraint facing the Metaverse today—and the one that is hardest to solve. Simply put, the internet was not designed for synchronous shared experiences. It was designed, instead, to allow for the sharing of static copies of messages and files from one party to another (namely research labs and universities that accessed them one at a time). Though this sounds impossibly limiting, it works pretty well for almost all online experiences today—specifically because almost none require continuous connectivity to feel live, or, well, continuous!

When a user believes they’re surfing a live webpage, such as their constantly updating Facebook Newsfeed, or the Live Election feed from the New York Times, they’re really just receiving frequently updated pages. What’s actually happening is the following. To start, that user’s device is making a request to Facebook or the Times’ server, either via a browser or app. The server then processes the request, and sends back the appropriate content. This content includes code that requests updates from the server on a given interval (say, every 5 or 60 seconds). Furthermore, every one of these transmissions (from the user’s device or that of the relevant server) might travel across a different set of networks to reach its recipient. While this feels like a live, continuous, and two-way connection, it’s actually just batches of one-way, varyingly routed, and non-live data packets. The same model applies to what we call “instant messaging” applications. Users, and the servers between them, are really just pushing fixed data to one another, while frequently pinging for information requests (sending a message or sending a read receipt).
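This request/refresh cycle can be sketched in a few lines. The sketch below is a hypothetical illustration, not Facebook’s or the Times’ actual client code; `poll_feed` and `fetch_snapshot` are invented names:

```python
def poll_feed(fetch_snapshot, rounds=3):
    """Issue `rounds` independent request/response exchanges.

    Each call to fetch_snapshot() stands in for one full round trip to the
    server; no connection is held open between calls. A real client would
    also sleep for a fixed interval between rounds."""
    return [fetch_snapshot() for _ in range(rounds)]

# A stand-in server whose state advances between requests.
state = {"posts": 0}

def fetch_snapshot():
    state["posts"] += 1  # the feed changed since the last poll
    return f"feed with {state['posts']} post(s)"

print(poll_feed(fetch_snapshot))
# Each element is a separate, complete page, not a live stream.
```

Every element of the returned list is a distinct, fully formed snapshot; the illusion of “liveness” comes entirely from the client asking again and again.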

Netflix, too, operates on a noncontinuous basis, even though the term “streaming” and the target experience of uninterrupted playback suggest otherwise. In truth, the company’s servers are sending users distinct batches of data, many of which travel through different network paths from the server to that user. Netflix often even pushes content to the user before it’s needed—such as an extra 30 seconds of video. Should a temporary delivery error occur (say, a specific pathway is congested, or the user briefly loses their Wi-Fi connection), the video will continue to play. The result of Netflix’s approach is delivery that feels continuous, but only because it isn’t delivered as such.

Netflix has other tricks, too. For example, the company receives video files anywhere from months to hours before they’re made available to audiences. This gives the company a window during which it can perform extensive, machine learning–powered analysis that enables it to shrink (or “compress”) file sizes by analyzing frame data to determine what information can be discarded. Specifically, the company’s algorithms will “watch” a scene with blue skies and decide that, if a viewer’s internet bandwidth suddenly drops, 500 different shades of blue can be simplified to 200, or 50, or 25. The streamer’s analytics even do this on a contextual basis—recognizing that scenes of dialogue can tolerate more compression than faster-paced action. In addition, Netflix will pre-load content at local nodes. When you ask for the newest episode of Stranger Things, it’s actually only a few blocks away and therefore arrives right away.
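The idea of collapsing 500 shades of blue into 50 (or fewer) can be illustrated with a toy quantizer. This is emphatically not Netflix’s actual codec, just a minimal sketch of the underlying trade-off between fidelity and data volume, with invented names:

```python
def quantize(shades, levels=8):
    """Collapse 8-bit color values (0-255) into `levels` evenly spaced
    buckets, each represented by its midpoint. Fewer distinct values
    means less data to transmit, at the cost of visual fidelity.

    Assumes 256 % levels == 0 for simplicity."""
    step = 256 // levels
    return [(v // step) * step + step // 2 for v in shades]

sky = list(range(96, 160))  # 64 distinct shades of "blue"
print(len(set(sky)), "shades ->", len(set(quantize(sky, levels=8))), "shades")
```

Sixty-four distinct input shades collapse into just two representative values; a real encoder makes this decision per scene, per bitrate, and with far more sophistication.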

The approaches used above only work because Netflix is a nonsynchronous experience; you can’t “pre-do” anything for content that is being produced live. This is why live video streams, such as those of CNN or Twitch, are substantially less reliable than on-demand streams from Netflix or HBO Max. But even live streamers have their own tricks. For example, transmission is typically delayed by two to thirty seconds, which means there’s still the opportunity to pre-send content in case of temporary congestion. Commercial breaks can also be used by both the content provider’s server and the user to reset the connection in case the prior one proved unreliable. Most live video requires only a one-way continuous connection—for instance, from CNN’s server to the user. Sometimes there is a two-way connection, as in the case of a Twitch chat, but only a sparse amount of data is being shared (the chat itself) and it’s not of critical importance, as it does not directly affect what’s happening in the video (remember, it probably happened two to thirty seconds earlier).

Overall, very few online experiences other than real-time rendered, multiuser virtual worlds require high bandwidth, low latency, and continuous connectivity. Most experiences just need one or, at most, two of these elements. High-frequency stock traders (and especially high-frequency trading algorithms) want the lowest possible delivery times, as this can be the difference between buying or selling a security at a profit or a loss. However, the orders themselves are basic and lightweight, and don’t require a continuous server connection.

The major exception is videoconferencing software such as Zoom, Google Meet, or Microsoft Teams, which involves many people receiving and sending high-resolution video files, all at once, and participating in a shared experience. However, these experiences are only possible through software solutions that don’t really work for real-time rendered virtual worlds with many participants.

Think back to your last Zoom call. Every now and then, a few packets likely arrived too late or perhaps not at all, meaning you never heard a word or two—or perhaps a few of your words were never heard by others on the call. Despite this, odds are you or your listeners still understood what was being said and the call could proceed. Maybe you temporarily lost, but then quickly regained, connectivity. Zoom can send you the packets you missed, then speed up playback and edit out pauses in order to “catch you up” to being “live.” It’s possible you lost your connection altogether, either due to a problem with your local network or a problem encountered somewhere between your local network and a remote Zoom server. If this happened, you probably rejoined without anyone knowing you left—and if they did notice, it’s unlikely your absence was disruptive. This is because videoconferences are shared experiences that focus on a single speaker at a time, rather than ones led by many users working together.

What if you were the speaker? The good news here is that the call could continue well enough without you, with either another participant piping up or everyone waiting for you to rejoin. If at any time network congestion meant that you or others simply could not hear or see what was happening, Zoom would stop uploading or downloading video from various members of the call in order to prioritize what mattered most: audio. Alternatively, the call might have been disrupted by varying latency—that is, different members of the call receiving “live” video and audio a quarter, half, or even a full second ahead of or behind one another—resulting in struggles to take turns speaking and constant interruptions. Eventually, your call probably figured out how to manage this. Everyone just needs a little patience.

Virtual worlds have higher performance requirements and are more affected by even the slightest of hiccups than any of these activities. Far more complex data sets are being transmitted, and they’re needed on a far timelier basis and from all users.

Unlike a video call, which effectively has one creator and several spectators, a virtual world typically comprises many active participants. Accordingly, the loss of any one individual (no matter how temporary) affects the entire collective experience. And even if a user isn’t lost altogether, but instead falls slightly out of sync with the rest of the session, they lose their ability to affect the virtual world.

Imagine playing a first-person shooter game. If Player A lags 75 milliseconds behind Player B, they might shoot in a location where they believe Player B to be, but that Player B and the game’s server know Player B has already left. This discrepancy means the virtual world’s server must decide whose experiences are “true” (that is, which should be rendered and persist across all participants) and whose experiences must be rejected. In most cases the experience of the participant who lagged will be rejected so that the other participants can proceed. The Metaverse can’t really function as a parallel plane for human existence if many of those within it experience conflicting (and then invalidated) versions of it.
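A minimal sketch of this server-side adjudication follows, assuming a toy world in which the server keeps an authoritative position history. All names are invented, and real games use far more sophisticated lag-compensation techniques; the point is only that some authority must decide whose view is “true”:

```python
def adjudicate(target_positions, now_ms, lag_ms, shot_at):
    """The server is authoritative. A shot aimed using the world state
    from `lag_ms` ago is compared against the target's true position now;
    shots based on a stale view are rejected."""
    seen = target_positions[now_ms - lag_ms]  # what the laggy shooter saw
    true = target_positions[now_ms]           # what the server knows
    if shot_at == true:
        return "hit"
    if shot_at == seen:
        return "rejected: aimed at a stale position"
    return "miss"

# The target moved between t=0 and t=75 ms; Player A, lagging 75 ms,
# fires at where they still believe Player B to be.
positions = {0: (10, 10), 75: (14, 10)}
print(adjudicate(positions, now_ms=75, lag_ms=75, shot_at=(10, 10)))
```

From Player A’s perspective the shot was perfectly aimed; from the server’s perspective it targeted empty space, and so A’s version of events is discarded.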

The computational constraints around the number of users per simulation (which I’ll discuss in the next section) often mean that if a user disconnects from a given session, they can never rejoin it, either. This disrupts not just that user’s experience, but also that of their friends, who must exit the virtual world if they want to resume play together, or otherwise continue without them.

In other words, latency and lags might frustrate individual Netflix and Zoom users, but in a virtual world, these problems place the individual at risk of virtual death and the collective in a state of constant frustration. As of this writing, only three-quarters of American households can consistently participate in most real-time rendered virtual worlds. Fewer than one-quarter of households in the Middle East can.

This extended description of the challenge of synchronicity is critical to understanding how the Metaverse will evolve and grow over the coming decades. Although many consider the Metaverse to be reliant upon innovations in devices, such as VR headsets, game engines (such as Unreal), or platforms like Roblox, networking capabilities will define—and constrain—much of what’s possible, when, and for whom.

As we’ll review in later chapters, there are no simple, inexpensive, or quick solutions. We will need new cabling infrastructure, wireless standards, hardware equipment, and potentially even overhauls to foundational elements of the Internet Protocol Suite, such as the Border Gateway Protocol.

Most people have never heard of BGP, but this protocol is everywhere around us, serving as a sort of traffic guard of the digital era by managing how and where data is transmitted across various networks. The challenge with BGP is that it was designed for the internet’s original use case of sharing static, asynchronous files. It does not know, let alone understand, what data it’s transmitting (be it an email, a live presentation, or a set of inputs intended to dodge virtual gunfire in a real-time rendered virtual simulation), nor its direction (inbound or outbound), the impact of encountering network congestion, and so on. Instead, BGP follows a fairly standardized one-size-fits-all methodology for routing traffic, which essentially weighs the shortest path, the fastest path, and the cheapest path (with a general preference for the last variable). Thus, even if a connection is sustained, it could be an unnecessarily long (latent) one—and could be severed in order to prioritize network traffic that didn’t need to be delivered in real time.
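BGP’s real decision process involves many attributes negotiated between networks, but the one-size-fits-all ranking described above can be caricatured in a few lines. This is a toy model with invented names, not an implementation of RFC 4271:

```python
def choose_path(paths):
    """Rank candidate routes by cost first, then latency, then hop count,
    regardless of what kind of traffic is being carried. paths is a list
    of dicts with 'cost', 'latency_ms', and 'hops' keys."""
    return min(paths, key=lambda p: (p["cost"], p["latency_ms"], p["hops"]))

candidates = [
    {"name": "short-direct", "cost": 3, "latency_ms": 20, "hops": 2},
    {"name": "cheap-detour", "cost": 1, "latency_ms": 90, "hops": 6},
]
print(choose_path(candidates)["name"])  # the cheap, high-latency detour wins
```

A cost-first policy happily routes a gamer’s dodge input down the 90-millisecond detour, because nothing in the policy knows, or cares, that this traffic is latency-sensitive.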

BGP is managed by the Internet Engineering Task Force and can be revised. However, the viability of any changes depends on opt-in from thousands of different internet service providers, private networks, router manufacturers, content delivery networks, and more. Even a substantial update is likely to be insufficient for a globally scaled Metaverse—at least in the near future.

Unlimited Users and Individual Presence

Although Stephenson did not provide an exact date, various references in Snow Crash suggest the novel takes place in the mid-to-late 2010s. Stephenson’s Metaverse, which was roughly two-and-a-half times the size of earth, was “occupied by twice the population of New York City”3 at any given time. In total, 120 million of the roughly eight billion people who lived in Stephenson’s fictional “real world” had access to computers powerful enough to handle the Metaverse’s protocol and could join whenever they liked. In our real world, we are nowhere close to achieving the same.

How far are we? Even nonpersistent virtual worlds that are less than ten square kilometers in surface area, severely constrained in functionality, operated by the most successful video game companies in history, and running on ever more powerful computing devices still struggle to sustain more than 50 to 150 users in a shared simulation. What’s more, 150 concurrent users (CCUs) is a significant achievement, and only possible because of how these titles are creatively designed. In Fortnite: Battle Royale, up to 100 players can participate in a richly animated virtual world, and each player controls a detailed avatar that can use more than a dozen different items, perform dozens of dances and maneuvers, and build complex structures tens of stories tall. However, Fortnite’s roughly 5 km2 map means that only one dozen to two dozen players will run across one another at once—and by the time players are forced into a smaller portion of the map, most players have been eliminated and turned into data on a scoreboard.

The same technical limitations shape Fortnite’s social experiences, such as its famous 2020 concert with Travis Scott. In that case, “players” converged on a much smaller portion of the map, meaning the average device had to render and compute far more information. Accordingly, the title’s standard cap of 100 players per instance was halved, while many items and actions, such as building, were disabled, thereby further reducing the workload. While Epic Games can rightly say that more than 12.5 million people attended this live concert, these attendees were split across 250,000 separate copies (meaning, they watched 250,000 versions of Scott) of the event that didn’t even start at the same time.

Another good example of the challenges of concurrent users is World of Warcraft, a “massively multiplayer online game.” To play, users must first pick a “realm”—a discrete server which manages a complete copy of the roughly 1,500-square-kilometer virtual world, and from within which they cannot see or interact with players in any other realm. In this sense, it may be more accurate to call the game “Worlds” of Warcraft. Users can move between realms, thereby philosophically uniting these many worlds into a single, “massively multiplayer” online game. However, each realm is capped to several hundred participants, and if there are too many users in a specific area, the game creates several distinct and temporary copies of this area, while splitting groups of users among them.
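World of Warcraft’s actual sharding logic is proprietary, but the basic idea of splitting an overcrowded area into capped, temporary copies can be sketched in a few lines. The names and the cap value below are invented for illustration:

```python
def shard(players, cap=100):
    """Split a crowd into as many temporary copies ('instances') of an
    area as needed so that no single copy exceeds `cap` players.
    Players in different copies cannot see or interact with one another."""
    return [players[i:i + cap] for i in range(0, len(players), cap)]

crowd = [f"player{i}" for i in range(230)]
instances = shard(crowd, cap=100)
print(len(instances), "instances:", [len(x) for x in instances])
```

Note what is sacrificed: 230 players believe they are in the same town square, but the server has quietly placed them in three parallel versions of it.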

EVE Online stands apart from games like World of Warcraft and Fortnite because all users are part of one singular and persistent realm. But again, this is possible only due to its specific design. For example, the nature of space-based combat means that action is limited in variety, fairly simple (think laser beams versus jumping or dancing players), and rare. Ordering a ship to mine resources from a planet, or send a succession of blasts from and to a fixed position, is far less complex than a pair of individually animated avatars dancing, jumping, and shooting one another. EVE Online is less about what the game processes and renders and more about what humans plan and decide outside of it. And because the game is set in the vastness of space, most users are far away from one another—enabling CCP Games’ servers to effectively treat them as being in separate virtual worlds until necessary. In addition, through the creative use of “travel time,” users cannot instantaneously converge on the same location—and there is a strategic cost/risk to leaving a given location.

Even so, EVE Online inevitably encounters concurrency problems. At one point in the 2000s, a group of players realized that a specific star system, Yulai, sat near many high-traffic planets inside a major star cluster, making it an enticing spot to establish a new trading hub.4 They were right. Not long after setting up shop, many buyers began to flock to the area, which attracted additional sellers, then more buyers, and so on. Ultimately, the number of transactions that were occurring inside this hub made the CCP Games’ servers start to buckle, leading the publisher to alter the EVE Online universe so that the destination would be less convenient to visit.

The lessons from “the Yulai Problem” doubtlessly helped CCP Games design, expand, and overhaul its maps in the years that followed. However, it doesn’t help the publisher avoid another outcome: the sudden outbreak of battles so strategically important that thousands of users suddenly converge to save their faction or defeat another.

In January 2021, the largest battle in EVE’s history occurred. It was more than twice the size of the prior record and the culmination of a nearly seven-month escalation between the Imperium Faction and a coalition of enemies called PAPI. Or at least, it should have been. The only real losers were CCP Games’ servers, which could not keep up with 12,000 players appearing in a single system, and any of those players who were hoping for a decisive victory. Roughly half of the players were unable to ever enter the system, while many of those who did were placed in a sort of purgatory—if they logged into the game, they’d likely be destroyed before having a chance to enter any coherent commands, while leaving meant that their server spot might be taken up by an enemy that would destroy their allies. There was an eventual winner—Imperium—but this was mostly by default, as the defender naturally wins in a battle that never really takes place.

Concurrency is one of the foundational problems for the Metaverse, and for a fundamental reason: each additional user leads to superlinear—in a naive model, roughly quadratic—increases in how much data must be processed, rendered, and synchronized per unit of time. It isn’t difficult to render an incredibly lush virtual world that no one can touch because it’s effectively the same as watching a video of a meticulously designed and predictable Rube Goldberg machine.§ And if players—or in this case, viewers—can’t affect this simulation, they don’t need to be continuously connected to or synchronized with it in real time, either.
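A back-of-the-envelope calculation shows why concurrency costs grow so quickly. Under a naive model in which the server relays every user’s state to every other user each tick, the number of updates grows with the square of the user count. This is a deliberate simplification (real games cull, batch, and compress aggressively), with invented names:

```python
def updates_per_tick(n_users):
    """Each of n users must be told about the state of the other n - 1,
    so a naive broadcast server sends n * (n - 1) updates every tick."""
    return n_users * (n_users - 1)

for n in (10, 100, 1000):
    print(f"{n:>5} users -> {updates_per_tick(n):>8} updates/tick")
```

Going from 10 users to 1,000 (a 100-fold increase) multiplies the synchronization workload by more than 10,000 under this model, which is why simply adding servers does not straightforwardly add concurrent users.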

The Metaverse will only become “the Metaverse” if it can support a large number of users experiencing the same event, at the same time, and in the same place, without making substantial concessions in user functionality, world interactivity, persistence, rendering quality, and so on. Just imagine how different—and limited—society would be today if only 50 to 150 people could attend any given sporting match, concert, political rally, museum, school, or mall.

However, we are far from being able to replicate the density and flexibility of the “real world.” And it is likely to remain impossible for some time. During Facebook’s 2021 Metaverse keynote, John Carmack, the former and now consulting CTO of Oculus VR (which Facebook bought in 2014 to kickstart its Metaverse transformation) mused that, “If someone had asked me in the year 2000, ‘could you build the metaverse if you had one hundred times the processing power you have on your system today . . .’ I would have said yes.” Yet 21 years later, and with the backing of one of the world’s most valuable and Metaverse-focused companies, he believed the Metaverse remained at least five to ten years away and there would be “serious optimization” tradeoffs in realizing this vision—even though there were now billions of computers that were a hundred times more powerful than the hundreds of millions of PCs operating at the turn of the century.5

What’s Missing from This Definition

So now we understand my definition of the Metaverse: “A massively scaled and interoperable network of real-time rendered 3D virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.” Many readers might be surprised that this definition and its sub-descriptions are all missing the terms “decentralization,” “Web3,” and “blockchain.” There is good reason for this surprise. In recent years, these three words have become both ubiquitous and entangled—with each other, and with the term “Metaverse.”

Web3 refers to a somewhat vaguely defined future version of the internet built around independent developers and users, rather than lumbering aggregator platforms such as Google, Apple, Microsoft, Amazon, and Facebook. It is a more decentralized version of today’s internet that many believe is best enabled by (or at least most likely through) blockchains. This is where the first point of conflation begins.

Both the Metaverse and Web3 are “successor states” to the internet as we know it today, but their definitions are quite different. Web3 does not directly require any 3D, real-time rendered, or synchronous experiences, while the Metaverse does not require decentralization, distributed databases, blockchains, or a relative shift of online power or value from platforms to users. To mix the two together is a bit like conflating the rise of democratic republics with industrialization or electrification—one is about societal formation and governance, the other is about technology and its proliferation.

The Metaverse and Web3 may nevertheless arise in tandem. Large technological transitions often lead to societal change because they typically provide a greater voice to individual consumers and enable new companies (and thus individual leaders) to emerge—many of which tap into widespread dissatisfaction with the present to pioneer a different future. It’s also true that many companies focused on the Metaverse opportunity today—especially insurgent tech/media start-ups—are building their companies around blockchain technology. As such, the success of these companies would likely lead to a rise in blockchain technology, too.

Regardless, the principles of Web3 are likely critical to establishing a thriving Metaverse. Competition is healthy for most economies, and many observers believe that the current mobile generation of the internet and computing is too concentrated among a handful of players. In addition, the Metaverse will not be built directly by the underlying platforms that enable it—just as the US federal government did not build the United States, nor did the European Parliament build the European Union. Instead, it will be constructed by independent users, developers, and small-to-medium businesses, just like the physical world. Anyone who wants the Metaverse to exist—and even those who don’t—should want the Metaverse to be driven by (and primarily benefit) these groups rather than by megacorporations.

There are also other Web3 considerations, such as that of trust, which are key to the health and prospects of the Metaverse. Under centralized database and server models, Web3 advocates argue, so-called virtual or digital entitlements are a façade. The virtual hat, plot of land, or movie that a user buys cannot truly be theirs because they cannot ever control it, remove it from the server owned by the company which “sold” it, or ensure that the supposed seller won’t delete it, take it back, or alter it. With roughly $100 billion spent on such items in 2021, centralized servers obviously do not prevent considerable user spending; however, it stands to reason this spending is constrained by the need to rely on trillion-dollar platforms which will forever prioritize their interests over those of the individual user. Would you, for example, invest in a vehicle which a dealer might reclaim at any point, renovate a house which the government might expropriate without cause or redress, or buy an artwork that the painter might take back once it had appreciated? The answer is sometimes, but definitely not to the same degree. This dynamic is particularly problematic for developers, who must build virtual stores, businesses, and brands despite the inability to guarantee they’ll be allowed to operate in the future (and might instead find that the only way to operate is to pay their virtual landlord twice the rent). Legal systems may eventually be updated to provide users and developers with greater authority over their wares, data, and investments, but decentralization, some claim, makes the reliance on court orders unnecessary and their very existence inefficient.

Yet another question is whether centralized server models can ever support a nearly infinite, persistent, world-scale Metaverse. Some believe that the only way to provide the computing resources needed for the Metaverse is through a decentralized network of individually owned—and compensated—servers and devices. But I’m getting ahead of myself.

* After Epic Games sued Apple in August 2020, Apple removed Fortnite from its App Store, thereby making it impossible for users to play the game on iOS devices.

† “Skeuomorphism” refers to a technique used in graphical design in which interfaces are designed to mimic their real-world counterparts. For example, the iPhone’s first “Notes” app involved typing on yellow paper with red lines, just like the common notepad.

‡ This is often referred to as a “persistent” connection, but in the interest of differentiating it from the persistence of a virtual world, I’ll use the term “continuous” here.

§ These are intricate, chain reaction–styled machines that perform relatively simple tasks through a complex sequence of events. For example, a ball might be placed in a cup by first tipping over a domino, which in turn hits many other dominos, ultimately turning on a fan which blows the ball down a rail, before the ball flies into the air, falls down a series of platforms, and finally lands in its destined cup.