Chapter 13
WHAT, THEN, MIGHT DEVELOPERS SOON PRODUCE? Throughout this book, I’ve avoided describing the “Metaverse in 2030” or offering any claims about what society will look like, overall, after the Metaverse arrives. The challenge with such broad prognostications lies in the feedback loops between now and then. An unforeseen technology will be created in 2023 or 2024 that in turn inspires new creations, leads to new user behaviors, or manifests a new use case, leading to other innovations, changes, and applications, and so on. However, there are a few areas that will likely be transformed by the Metaverse in ways that are, in the short term at least, reasonably predictable. Millions if not billions of users and dollars will be drawn to the new experiences that result. With all the necessary caveats in mind, it is worth looking at what these transformations might look like.
Education
The best example of impending transformation might be education. The sector is of critical importance to both society and the economy, and educational resources are scarce and starkly unequal in their distribution. It is also the leading example of what’s known as “Baumol’s Cost Disease,” which refers to “the rise of salaries in jobs that have experienced no or low increase of labor productivity, in response to rising salaries in other jobs that have experienced higher labor productivity growth.”1
This is not a critique of teachers. Rather, it reflects the fact that most jobs have become far more “productive,” in economic terms, as a result of the many new digital technologies and developments over the past several decades. For example, an accountant has become far more efficient as a result of computerized databases and software such as Microsoft Office. An accountant today can do more “work” per unit of time, or manage more clients in the same amount of time, than an accountant could in the 1950s. The same is true for janitorial and security services, which now take advantage of more powerful motorized cleaning tools, or can monitor a facility using a network of digital cameras, sensors, and communications devices. Healthcare remains a labor-driven sector, but advances in diagnostics and therapeutic and life support technologies have helped to offset many of the costs associated with an aging population.
Teaching has seen a smaller increase in productivity than almost any other category. A teacher in 2022 cannot, by most measures, teach more students than they could decades ago without adversely affecting the quality of their education. In addition, we have not found ways to teach for less time, either (that is, to teach faster). However, teaching salaries must compete with the salaries offered to someone who might otherwise become an accountant (or software engineer, or game designer), and must rise with the rising cost of living as a result of a growing economy. And beyond teacher time, education remains incredibly resource-intensive in physical terms, from the size of the school to the quality of its facilities and supplies. In fact, costs associated with these resources have partly increased due to new, more expensive technologies (for example, high-definition cameras and projectors, iPads, and so on).
The relative lack of productivity growth in education is demonstrated by its outsized cost increases. The US Bureau of Labor Statistics estimates that the cost of the average good increased by over 260% between January 1980 and January 2020, whereas the cost of college tuition and fees grew 1,200%.2 The second-closest category, medical care and services, is up 600%.
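To put those figures in perspective, a back-of-envelope conversion into compound annual growth rates is useful (a quick sketch; the percentages are the approximate 40-year increases cited above, and the arithmetic is standard CAGR, not a BLS methodology):

```python
# Convert a total percentage increase over 40 years into a compound
# annual growth rate (CAGR). Figures are the approximate increases
# cited above; illustrative arithmetic only.

def cagr(total_increase_pct: float, years: int = 40) -> float:
    multiple = 1 + total_increase_pct / 100   # e.g., +260% -> 3.6x
    return (multiple ** (1 / years) - 1) * 100

for label, pct in [("Average good", 260), ("Medical care", 600), ("College tuition", 1200)]:
    print(f"{label}: +{pct}% total -> ~{cagr(pct):.1f}% per year")
```

Run, this shows tuition compounding at roughly 6.6% per year against roughly 3.3% for the average good: a seemingly small annual gap that, over four decades, produces the chasm above.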
While education has long lagged the West’s overall productivity growth, technologists have long expected it to eventually beat most industry benchmarks. The assumption was that high schools, colleges, and especially trade schools would be fundamentally reconfigured, and in many cases displaced, by remote learning. Many, if not most, students would learn remotely, not in the classroom but through on-demand video, livestreamed classes, and AI-powered multiple-choice assessments. But among COVID’s top lessons was that “Zoomschool” is terrible. There are many challenges when it comes to learning through a screen, but for the most part, it seems we lose more than we might gain (or save financially).
The most obvious loss with remote learning is that of “presence.” When they are inside the classroom, students are in an educational environment; they have an agency and immersion that is totally unlike anything offered by a camera through which they peer into an untouchable school set. Why presence matters is rather beside the point—but pedagogical research does show the clear benefits of sending students on field trips rather than limiting them to videos, of asking them to come to school rather than listen to recordings at home, and of encouraging them to learn “hands-on” whenever possible. The loss of presence entails the loss of everything from eye contact with (and scrutiny from) a teacher, co-learning alongside friends, and tactility, to the ability to build a hydraulic robot with syringes, use a Bunsen burner, and dissect a frog, fetal pig, or feral cat.
It is difficult to imagine at-home or at-distance education ever fully substituting for in-person education. But we are slowly closing the gap through new and predominantly Metaverse-focused technologies, such as volumetric display, VR and AR headsets, haptics, and eye-tracking cameras.
Not only are real-time rendered 3D technologies helping educators to take the classroom (and classmates) anywhere, but the rich virtual simulations that are on the horizon can greatly augment the learning process. At first, VR in the classroom was envisioned as little more than the ability to “visit” ancient Rome (incidentally, “visiting” Rome was long considered the “killer app” for VR headsets, but it turned out to be rather dull). Instead, students will “build Rome in a semester” and learn how aqueducts work by constructing them. Many students today and in past decades learned about gravity by watching their teacher drop a feather and a hammer, and then seeing a tape of Apollo 15 commander David Scott do the same on the moon (spoiler: they fall at the same rate). Such demonstrations need not go away, but they can be supplemented by the creation of elaborate, and virtual-only, Rube Goldberg machines, which students can then test under Earth-like gravity, on Mars, and even under the sulfuric rains of the Venusian upper atmosphere. Rather than create a volcanic eruption using vinegar and baking soda, students will immerse themselves in a volcano and then agitate its magma pools before they’re both ejected into the sky.
Everything once imagined in The Magic School Bus, in other words, will become virtually possible—and at a greater scale, too. Unlike a physical classroom experience, these lessons will be available on demand, from anywhere around the world, and fully accessible to (and more easily customized for) students with physical or social disabilities. Some classes will include presentations from professional instructors whose live performances were motion captured and audio recorded. And as these experiences have no marginal costs—that is, they do not require extra time from a teacher, nor do they deplete supplies no matter how many times they’re run—they can be priced at a fraction of the costs associated with learning that occurs in the classroom. Every student will be able to perform a dissection, no matter how wealthy their parents are or how well funded their local school board is. Indeed, these students will not even need to attend a school (and if they like, they’ll be able to travel through the creature’s various organ systems, rather than just cut them open).
Crucially, it will still be possible for these virtual classes to be supplemented by a dedicated, live teacher. Imagine the “real” Jane Goodall reproduced in a virtual environment and guiding students through Tanzania’s Gombe Stream National Park, with these students’ “homeroom” teacher joining in and further personalizing the experience. The costs involved with such an experience will be a fraction of that involved in a real field trip—certainly one to Tanzania—and may even offer more than such a trip could.
None of this is to suggest that education involving VR and virtual worlds will be easy. Pedagogy is an art, and learning is hard to measure. But it’s not difficult to imagine how virtual experiences might enhance learning while also expanding access and reducing costs. There will be less of a gap between in-person and at-distance education, competitive marketplaces for pre-made lessons and live tutors, and exponentially greater reach for great teachers and their work.
Careful readers will note that such experiences do not by themselves make, nor require, the Metaverse. It’s possible for compelling real-time rendered 3D worlds focused on education to exist without the Metaverse. However, interoperation between these experiences and all others, as well as the real world, is of obvious value. If users can bring their avatars to these worlds, they’re likely to use them more often. If their educational account history can be written “in school,” and then read and expanded upon elsewhere, learners will be more likely to keep learning and their experiences will be more richly personalized.
Lifestyle Businesses
Education is just one of many socially focused experiences that will be transformed by the Metaverse. Today, millions of people exercise each day using digital services such as Peloton, which offers live and on demand video-based cycling classes with gamified leaderboards and high-score tracking, and Mirror, a Lululemon subsidiary that boasts a wider range of fitness routines delivered by a partly transparent instructor projected through a reflective mirror. Peloton has since expanded into real-time rendered virtual games, such as Lanebreak, where a cyclist controls a wheel rolling across a fantastical track to earn points and dodge obstacles. This is a sign of things to come; perhaps sometime soon, our morning routine will involve our Roblox avatar cycling across the snowy Star Wars planet of Hoth through a Peloton application on our Facebook VR headset, all while chatting with our friends.
Mindfulness, meditation, physiotherapy, and psychotherapy are likely to be similarly altered, by a mix of electromyographic sensors, volumetric holographic displays, immersive headsets, and projection and tracking cameras that collectively provide support, stimulation, and simulation never before possible.
Dating is another fascinating category when considering the impact of the Metaverse. Prior to the launch of Tinder, some believed that online dating had been “solved”—all one had to do was fill out dozens to hundreds of multiple-choice quizzes that would be crunched into a mystery compatibility score through which two would-be lovebirds would be matched. But this belief and the companies built on it were disrupted by a photo-based model in which users “swipe right” or “swipe left” to see if there’s a shared interest in chatting, with the average user spending between three and seven seconds making such a choice.3 In recent years, dating applications have added new features for matched couples, such as casual games and quizzes, voice notes, and the ability to share their favorite playlists on Spotify and Apple Music. In the future, dating applications will likely offer couples a variety of immersive virtual worlds that help a would-be pairing get to know one another. These might span simulated reality (“dinner in Paris”) or the fantastical (“dinner in Paris . . . on the Moon”), include live performances from motion captured avatars* (imagine mariachis, or attending a digital twin of London’s Royal Ballet from Atlanta), and potentially lead to reinventions of classic game-show formats such as The Dating Game. It’s also likely that these apps will integrate with third-party virtual worlds (this is the Metaverse, after all), enabling, for example, a matched couple to easily jump into a virtual Peloton or Headspace-based experience.
Entertainment
It’s increasingly common to hear that the future of “linear media” such as films and TV shows is VR and AR. Rather than watch Game of Thrones or the Golden State Warriors play the Cleveland Cavaliers from our couch, sitting in front of a 30 × 60–inch flatscreen, we will put on a VR headset and watch shows on simulated IMAX-sized screens, or sit courtside—with our friends sitting beside us. Alternatively, we might watch via augmented reality glasses that make it seem like we still have a living room TV. The films and TV shows, of course, will be filmed for 360° immersion. When Travis Bickle says “You talkin’ to me?,” you can be virtually standing in front of, or even behind, him.
These predictions remind me of how many once envisioned newspapers like the New York Times would be altered by the internet.4 In the 1990s, some believed that “in the future” the Times would send a PDF of each day’s edition to every subscriber’s printer, which would then dutifully print it before its owner woke up—thereby obviating the need for costly printing presses and elaborate home delivery systems. The more daring theorists imagined this PDF might even exclude sections the individual reader did not want, thereby saving both paper and ink. Decades later, the Times does offer this option, but almost no one uses it. Instead, subscribers access a constantly changing and never-printed online copy of the paper that has no clear divisions between sections and essentially cannot be read “front to back.” Most news readers don’t even start with a newspaper at all. Instead, they consume their news via aggregators such as Apple News and social media newsfeeds, which intermingle countless stories from disparate publishers alongside photos of their friends and family.
The future of entertainment will probably involve similar remixing. “Film” and “TV” will not go away—just as oral storytelling, serials, novels, and radio shows still exist long after they were first created—but we can expect rich interconnection between film and interactive experiences (broadly considered “games”). Facilitating this transformation is the increasing use of real-time rendering engines, such as Unreal and Unity, in filmmaking.
Historically, movies such as Harry Potter or Star Wars have used non-real-time rendering software. There was no need to produce a frame in milliseconds during the production process and so it made sense to spend more time (anywhere from one additional millisecond to several days) making the image look more realistic or detailed. In addition, the goal of the computer graphics department was to virtually produce an already-known image (that is, one based on a storyboard). As such, moviemakers didn’t need to “build Manhattan” or even a single street in the West Village in order to support a set piece in The Avengers, least of all a street that could simulate the “real New York” and anything that might happen to it when aliens invade and Infinity Stones are involved.
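The gap between these two modes of rendering is easy to quantify. A real-time engine must finish every frame before the next screen refresh, while an offline film renderer can take hours per frame; the following sketch (illustrative numbers, not figures from any specific production) makes the difference concrete:

```python
# Frame-time budgets: a real-time engine must finish each frame before
# the next refresh, while offline film renderers can spend hours per
# frame. All numbers are illustrative.

def frame_budget_ms(fps: int) -> float:
    return 1000 / fps

for fps in (24, 30, 60, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")

offline_hours_per_frame = 2  # assumed; complex VFX frames often take hours
ratio = offline_hours_per_frame * 3600 * 1000 / frame_budget_ms(60)
print(f"At {offline_hours_per_frame}h per frame, an offline render spends "
      f"~{ratio:,.0f}x longer on each image than a 60 fps real-time engine")
```

The roughly 400,000-fold difference in time per frame is why offline renders could look better, and why closing that visual gap in real time is such a feat.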
But over the past five years, Hollywood has progressively integrated real-time rendering engines, most typically Unity and Unreal, into its filming processes. For 2019’s The Lion King, a purely CGI-based film but one that was designed to look like “live action,” the director Jon Favreau immersed himself in each scene through a Unity-based re-creation, often while wearing a VR headset. This allowed him to understand a purely virtual set as though it were a typical “real world” film shoot—a process that he claims aided everything from where to place and angle a shot, to how the camera would track its fictional leads, to the lighting and coloring of the environment. The final rendering was still produced in Maya, non-real-time animation software published by Autodesk.
Building upon his work on The Lion King, Favreau helped pioneer “virtual production” stages, in which an enormous circular room is constructed using walls and ceilings made of high-density LEDs (the rooms themselves are called “volumes”). The LEDs are then lit up with Unreal-based real-time renders. This innovation provided a number of benefits. The simplest was that it allowed everyone inside the volume to experience what Favreau did in VR, but without wearing a headset. It also meant that “real people” could be seen inside the environment too—rather than everyone just watching pre-planned animations of Timon & Pumbaa. In addition, the cast could be affected by the volume’s LEDs; the light shining down from a virtual sun would recolor an actor directly and provide them with an accurate shadow—it wouldn’t need to be applied or corrected in “post-production.” A set could have the perfect sunset year-round—and years later that exact same setting could be reproduced in seconds.
One of the leaders in virtual production is Industrial Light & Magic, the visual effects company founded by Star Wars creator George Lucas and now owned by Disney. ILM estimates that when a film or series is designed for LED volumes, it’s possible to film 30% to 50% faster than when shooting through a mixture of “real world” and “green screen” sets, and that postproduction costs are lower, too. ILM points to the hit Star Wars TV series The Mandalorian, which was created by Favreau and cost roughly one quarter as much per minute as the typical Star Wars film (it was also better received by both critics and viewers). Nearly all of the show’s first season—which spanned an unnamed ice world, the desert planet Nevarro, the forested Sorgan, deep space, and dozens of subsets in each—was shot on a single virtual stage in Manhattan Beach, California.
What does virtual production have to do with the Metaverse beyond the use of similar engines and virtual worlds? The connections start with “virtual backlots.” If you visit Disney’s physical studio backlot, you’ll find stages and lockers full of old Captain America costumes, miniature models of the Death Star, and the literal living rooms of Modern Family, New Girl, and How I Met Your Mother. Now, Disney’s servers are being filled with virtual versions of every 3D object, texture, outfit, environment, building, facial scan, and anything else it has made. This doesn’t just make it easier to film a sequel—it makes it easier to make all derivative works. If Peloton wants to sell a course set on the Death Star or the Avengers’ Campus, it can repurpose (in other words, license) much of what Disney has made. If Tinder wants to offer virtual dates on Mustafar, the same applies. Instead of playing blackjack via the video-based iCasino, why not play on Canto Bight? Rather than launch a Star Wars integration in Fortnite, Disney will just populate their own mini-worlds on Fortnite Creative using what they’ve already built.
These won’t just be opportunities to personally experience the filmed world of Star Wars, either. They will become a core part of the storytelling experience. In between weekly episodes of The Mandalorian or Batman, fans will be able to join their heroes in canonical (or noncanonical) events and side missions. At 9 p.m. on a Wednesday night, for example, Marvel might tweet that the Avengers “need our help,” with Tony Stark, as live-performed by Robert Downey Jr. (or perhaps someone who bears little resemblance to him but steers an avatar that does), leading the way. Alternatively, fans will have the opportunity to live out what they watched in a movie or show. The end of Avengers: Age of Ultron in 2015 involved the titular heroes fighting a legion of evil robots on a chunk of land floating above the earth. In 2030, players will have the chance to do the same.
Similar opportunities will open up to sports fans. We may use VR to sit virtually courtside, but it’s more likely the games we watch will be nearly instantaneously captured and reproduced as a “video game.” If you own NBA 2K27, you’ll be able to jump into a specific moment from a game that finished only minutes earlier and then see if you could have won the game—or at least made the shot that a star player didn’t. Sports fandom is currently siloed across watching a game, playing a sports video game, participating in fantasy sports, making online wagers, and buying NFTs, but we’ll likely find that these experiences meld together and, in doing so, create new ones.
Betting and gambling will be transformed as well. There are already tens of millions of people placing online wagers, using Zoom-based casinos, or enjoying game-based casinos such as Be Lucky: Los Santos in Grand Theft Auto. In the future, many of us will go to Metaverse casinos where we’re served by live, motion capture–powered dealers while enjoying live, motion capture–powered musical performances. Or recall Zed Run from Chapter 11. Each week, hundreds of thousands of dollars are bet on its virtual horse races, with many of its horses worth millions. The economy of Zed Run is upheld through its blockchain-based programming, which provides bettors with the trust that the races are not rigged, and horse owners with the faith that the “genes” of their virtual horses will be programmatically passed on when they’re bred.
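The trust argument here rests on determinism: if breeding outcomes are derived entirely from public inputs, anyone can recompute and audit them. The sketch below illustrates the general idea in miniature (purely hypothetical; it is not Zed Run’s actual genome logic, and every name in it is invented for illustration):

```python
import hashlib

# Illustrative sketch of deterministic "gene" inheritance. The parents'
# genomes plus a public seed (e.g., a block hash) are hashed, so anyone
# can recompute the offspring's traits and verify nothing was rigged.

def breed(sire_genome: str, dam_genome: str, public_seed: str) -> dict:
    digest = hashlib.sha256(
        f"{sire_genome}:{dam_genome}:{public_seed}".encode()
    ).digest()
    # Traits are read from fixed bytes of the digest; the mapping is
    # arbitrary here, but crucially it is deterministic and auditable.
    return {
        "speed":   digest[0] % 100,
        "stamina": digest[1] % 100,
        "temper":  digest[2] % 100,
    }

foal = breed("sire-genome-abc", "dam-genome-xyz", "0xdeadbeef")
print(foal)  # identical output for anyone who re-runs the computation
```

A real implementation would live in a smart contract and draw its seed from chain state, but the principle is the same: the operator cannot quietly bias an outcome without the discrepancy being detectable.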
Others are reimagining entertainment at a more abstract level. From December 2020 to March 2021, Genvid Technologies hosted a “Massively Interactive Live Event” (MILE) on Facebook Watch, called Rival Peak. The title was a sort of virtual mashup of American Idol, Big Brother, and Lost. Thirteen AI contestants were trapped in a remote part of the Pacific Northwest, and the audience could watch them interact, fight to survive, and uncover various mysteries through dozens of cameras running 24 hours a day for all 13 weeks. While the audience could not directly control a given character, they could still affect the simulation in real time—solving puzzles to aid a given hero or create an obstacle for a villain, weighing in on the choices of the AI characters, and voting on who would be booted off the island. Though visually and creatively primitive, Rival Peak is an indication of what the future of live interactive entertainment could look like—that is, not supporting linear stories, but collectively producing an interactive one. In 2022, Genvid launched The Walking Dead: The Last M.I.L.E. with the comic book franchise’s creator, Robert Kirkman, and his company, Skybound Entertainment. The experience allows viewers, for the first time, to decide who lives and who dies in The Walking Dead, while also steering competing factions of humans toward, or away from, conflict. Audience members can also design their own avatars, who will then be released into the world and folded into the story. What might come next? Well, most of us don’t want a real “Hunger Games,” but it might be fun to watch a high-fidelity, real-time rendered version played by our favorite actors, sports stars, and even politicians, each of whom participates via avatar.
Sex and Sex Work
Changes to the sex work industry are likely to be even more profound than those experienced by Hollywood, and in the process, further blur the line between pornography and prostitution. In 2022, one can hire a sex worker for a private online show and even take control of their smart sex toys (or provide them with control over yours). What might this look and feel like with an ever-growing number of internet-connected haptic devices, improvements in real-time rendering, immersive AR and VR headsets, and high-concurrency GPUs? Some of the results are relatively easy to imagine (“Sex, but in VR!”), others less so. Recall from Chapter 9 how armbands from CTRL-labs could use electromyography to reproduce precise finger movements—or to map the muscle movements used to move a finger to an entirely different motion, such as controlling the legs of a spider. With that in mind, what is sex experienced through an ultrasonic force field? Or when five, 100, or 10,000 “concurrent users” combine to construct some form of real-time rendered, mixed reality orgy, rather than a concert or battle royale?
Of course, such experiences raise the potential for considerable abuse (more on this soon), but also questions of platform power. None of the major mobile or console computing platforms allow sex- or pornography-based applications. PornHub.com, which typically ranks among the 70–80 most used websites in the world; Chaturbate, which ranks in the top 50; and OnlyFans, which ranks in the top 500 but whose revenue exceeds that of Match Group (owner of Tinder, Match.com, Hinge, PlentyofFish, OkCupid, and more), are not permitted in the iOS or Android app stores. The justification for prohibition varies. Steve Jobs once told a user that Apple believes it has “a moral responsibility to keep porn off the iPhone,” though some speculate these policies are intended to avoid liability and the optics of taking a commission from sex work. The result doubtlessly harms individual sex workers—as I’ve mentioned often throughout this book, applications strongly outperform browser-based experiences in terms of usage and monetization—though pornography, as a category, still thrives. Videos and photos work well enough from a mobile web browser, and by and large, consumers are not deterred by the need to use them.
But as we’ve seen, richly rendered VR and AR experiences are essentially impossible via mobile web browsers. Accordingly, the policies of Apple, Amazon, Google, PlayStation, and others effectively block the entire category’s advancement. Some might see this as a good thing; others could argue it deprives sex workers of higher incomes and greater safety.
Fashion and Advertising
For the past 60 years, virtual worlds have been largely ignored by advertisers and fashion houses. Today, less than 5% of video gaming revenue comes from advertising. In contrast, most major media categories, such as TV, audio (inclusive of music, talk radio, podcasts, and so on), and news generate 50% or more of their revenues from advertisers, rather than audiences. And although hundreds of millions of people entertain themselves in virtual worlds each year, 2021 was the first time that brands such as Adidas, Moncler, Balenciaga, Gucci, and Prada saw these spaces as deserving of any real attention. This will need to change.
Advertising in virtual spaces is difficult for a few reasons. First, the gaming industry was “offline” for its first several decades, and each title took years to produce. As a result, there was no way to update a game’s in-game advertising, meaning any placed ads could quickly become out of date. This is also why books typically lack ads, save for those promoting the author’s other works, even though newspapers and magazines historically relied upon them. Ford won’t pay much for an ad that, for most readers, is touting the “specs” of an old car (Ford would probably consider such impressions harmful). Technical limitations of this kind no longer exist for video games, as they can now be updated over the internet, but the cultural consequences endure. With the exception of casual mobile games like Candy Crush, the gaming community is largely unfamiliar with and highly resistant to in-game advertising. Even though few consumers of television, print magazines and newspapers, and radio enjoy the ads that often litter these mediums, ads have always been part of the experience.
The bigger issue might be determining what an ad is or should be in a real-time rendered 3D virtual world—and how to price it and sell it. For much of the 20th century, most ads were individually negotiated and placed. That is, someone at a company like Procter & Gamble would work with someone at CBS so that an Ivory Soap ad would air as the first commercial in the second ad block in the 9 p.m. airing of I Love Lucy and at a specific price. Most digital advertising today is done programmatically. For example, advertisers will say who they want to target, with what ads (a banner image, a sponsored social media post, a sponsored search result, and so on), up until a certain amount of money has been spent at a given cost-per-click or a set amount of time has transpired.
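In other words, a modern programmatic buy is less a negotiation than a machine-readable spec that an exchange executes automatically. A simplified sketch of what such a spec contains (field names and values are hypothetical, not drawn from any particular ad platform):

```python
from dataclasses import dataclass, field

# A simplified, hypothetical programmatic campaign spec: the advertiser
# declares its targeting, creative, and stopping conditions, and the
# exchange bids on matching impressions until a limit is reached.

@dataclass
class Campaign:
    advertiser: str
    creative: str                      # e.g., a banner image or sponsored post
    audience: dict = field(default_factory=dict)
    max_cost_per_click: float = 0.50   # never bid above this CPC
    total_budget: float = 10_000.00    # stop once spend reaches this
    end_date: str = "2022-12-31"       # or once this date passes

campaign = Campaign(
    advertiser="ExampleCo",
    creative="banner_v3.png",
    audience={"age": "18-34", "interests": ["fitness", "gaming"]},
)
print(campaign)
```

Everything in this model presumes a standardized, measurable ad unit, which is precisely what 3D virtual worlds lack.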
Finding the core “ad unit” for 3D-rendered virtual worlds is a challenge. Many games have in-game billboards, including the PlayStation 4 game Marvel’s Spider-Man, which is set in Manhattan, and the cross-platform hit Fortnite. However, their implementations are quite different. The size of these posters might vary by multiples, meaning a different image would likely be needed for one versus another (whereas Google AdWords ads work regardless of screen size). In addition, players might pass by these posters at varying speeds, from varying distances, and in various situations (a leisurely walk versus an intense firefight). All of this makes it hard to value either game’s billboards, let alone buy them programmatically. There are many other potential ad units inside a virtual world—commercials played by in-game car radios, virtual soft drinks branded like real-world ones—but these are even harder to design for and measure. Then there are the technical complexities of inserting personalized ads into synchronous experiences, determining when an ad should be shared with your friends or not (it makes sense for the whole squad to see a banner for the next Avengers movie, but not necessarily for a medicinal cream), and so on.
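One can imagine what a future measurement layer might look like: rather than selling “a billboard,” a virtual world could score each exposure by how visible the ad actually was. A hypothetical sketch (these heuristics and weightings are invented for illustration; no such cross-game standard exists today):

```python
import math

# Hypothetical "impression quality" score for an in-world billboard,
# combining how long a player saw it, how close they were, and how
# directly they faced it. Weightings are invented for illustration.

def impression_quality(dwell_seconds: float, distance_m: float,
                       view_angle_deg: float) -> float:
    dwell = min(dwell_seconds / 5.0, 1.0)          # saturates at 5 seconds
    proximity = 1.0 / (1.0 + distance_m / 20.0)    # decays with distance
    facing = max(math.cos(math.radians(view_angle_deg)), 0.0)
    return round(dwell * proximity * facing, 3)

# A leisurely walk past a poster versus a glimpse mid-firefight:
print(impression_quality(dwell_seconds=4.0, distance_m=10, view_angle_deg=15))  # ~0.515
print(impression_quality(dwell_seconds=0.5, distance_m=40, view_angle_deg=60))  # ~0.017
```

Until virtual worlds agree on something like this, and on how to audit it, every game’s billboard is effectively a bespoke buy.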
Augmented reality advertising is conceptually easier, as the canvas for said ads is the real world rather than myriad virtual ones, but the execution is perhaps even harder. If users are inundated with unprompted or obtrusive ads overlaid atop the real world, they’ll change headsets. The risk of these ads causing an accident is also high.
In the United States, advertising expenditures have comprised 0.9% to 1.1% of GDP for more than a century (with temporary exceptions during the world wars). If the Metaverse is to be a major economic force, ad buyers will have to find a way to be relevant in it, and the ad tech industry will eventually figure out how to offer and adequately measure programmatic ads placed across myriad virtual spaces and objects in the Metaverse.
Still, some argue that the Metaverse will require a more fundamental rethinking of how to advertise a given product.
In 2019, Nike built an immersive Fortnite Creative Mode world under the Air Jordan brand, entitled “Downtown Drop.” In it, players raced through the streets of a fantastical city while wearing rocket-powered shoes, performing tricks and collecting coins to beat other players. While players could purchase and unlock exclusive Air Jordan avatars and items during and through this “limited time mode,” the goal of “Downtown Drop” was to express the ethos of Nike’s Air Jordan—for players to know what the brand felt like, no matter the medium. In September 2021, Tim Sweeney told the Washington Post that a “carmaker who wants to make a presence in the metaverse isn’t going to run ads. They’re going to drop their car into the [virtual] world in real time and you’ll be able to drive it around. And they’re going to work with lots of content creators with different experiences to ensure their car is playable here and there, and that it’s receiving the attention it deserves.”5
Needless to say, dropping a new, drivable car model into a virtual world is much trickier than placing market copy into targeted search results, telling a compelling 30-second or two-minute story in a commercial, or producing a “native advertisement” with a YouTuber. It requires building experiences and virtual products that users actively choose to engage with and use in lieu of the entertainment they originally sought out. And almost no ad agencies or marketing departments today have even the basic skillsets required to build such experiences. Still, the likely profits from successful advertising in the Metaverse, the necessity of differentiation, and the lessons of the consumer internet era seem likely to inspire significant experimentation in the years to come.
Upstart brands such as Casper, Quip, Ro, Warby Parker, Allbirds, and Dollar Shave Club didn’t just take advantage of direct-to-consumer e-commerce models—they also won market share from long-standing incumbents through novel marketing techniques such as search engine optimization, A/B testing, and referral codes, as well as by developing unique social media identities. But in 2022, these strategies are not novel—they’re commodity, table stakes, dull. They enable no brand, new or old, to find new audiences or stand out. Virtual worlds, however, remain largely unconquered territory.
For the same reasons, today’s fashion brands will also need to “enter the Metaverse.” As more of human culture shifts into virtual worlds, individuals will seek out new ways to express their identities and show off. This is demonstrated clearly by Fortnite, which has spent several years generating more revenue than any other game in history, and which primarily monetizes through the sale of cosmetic items (as I mentioned earlier, these revenues exceed those of many top fashion labels, too). NFTs demonstrate this as well. The most successful NFT collections are not virtual goods or trading cards but identity- and community-oriented “profile pictures” such as CryptoPunks and Bored Apes.
If today’s labels do not meet this need, new labels will emerge to replace them. In addition, the Metaverse will place pressure on the physical sales of many companies, such as Louis Vuitton and Balenciaga. If more work and leisure occur in virtual spaces, then we’ll need fewer purses and will probably spend less on those we do buy. To this end, these labels will likely use their physical sales to facilitate and bolster the value of their digital ones. For example, a consumer who buys a physical Brooklyn Nets jersey or Prada bag might also get the rights to a virtual or NFT simulacrum, or a discount when buying one. Or perhaps only those who buy “the right thing” can get a digital copy. In other cases, a digital purchase might lead to a physical one. Our identities, after all, are not purely online or offline, physical or metaphysical. They persist, just like the Metaverse.
Industry
In Chapter 4, I highlighted how and why the Metaverse would start with consumer leisure and then move into industry and enterprise, rather than the reverse, as happened with prior computing and networking waves. The expansion into industry will be slow. The technical requirements for simulation fidelity and flexibility are much higher than in games or film, while success ultimately depends on reeducating employees who have been trained around now-legacy software solutions and business processes. And to start, most “Metaverse investments” will be premised upon hypotheses, rather than best practices—meaning investments will be constrained and the profits often disappointing. But eventually, as with the current internet, much of the Metaverse and its revenues will exist and operate out of sight of the average consumer.
Consider, as an example, the 56-acre, 20-building, multi-billion-dollar redevelopment of Water Street in Tampa, Florida. As part of this project, Strategic Development Partners produced a 17-foot-diameter, 3D-printed, and modular scale model of the city, supported by twelve 5K laser projectors that cast 25 million pixels atop this model, based on city data feeds for weather, traffic, population density, and more. All of this was run by an Unreal-based real-time rendered simulation that could be viewed through a touchscreen or VR headset.
The perks of such a simulation are difficult to describe in writing for the very reason SDP saw value in building a physical model and 3D digital twin in the first place. Still, the simulation enabled the city, prospective tenants, and investors, as well as construction partners, to understand and plan for the project in unique ways. It was possible to see exactly how present-day Tampa would be affected by the construction process, as well as by the completed project. How would a five-year build affect local traffic, and how would the effects differ from those of a six-year build? What would happen if a given building were replaced by a park, or its floors reduced from 15 to 11? How would the views of other buildings and parks in the area be affected by the development, including through refracted light or radiated heat—and at any time or day in the year? How would these buildings shape emergency response times in the area? Might they require a new police, fire, or ambulance station? On which sides of the buildings should a fire escape be built?
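Under the hood, every one of these questions follows the same pattern: vary a parameter, re-run the simulation, compare the metrics against a baseline. A schematic sketch of that pattern (the simulate() stub below merely stands in for the real Unreal-based model; every name and number is hypothetical, not SDP’s actual tooling):

```python
# Schematic what-if loop for a digital twin: vary one parameter,
# re-run the simulation, compare metrics against a baseline. The
# simulate() stub stands in for a real city-scale model.

def simulate(build_years: int, floors: int) -> dict:
    # Stub: a real twin would compute these from live city data feeds
    # (traffic, sun position, emergency response routing, etc.).
    return {
        "peak_traffic_delay_min": 4 + 2 * build_years,
        "shadowed_park_hours": 0.3 * floors,
    }

baseline = simulate(build_years=5, floors=15)
for scenario in (dict(build_years=6, floors=15), dict(build_years=5, floors=11)):
    result = simulate(**scenario)
    deltas = {k: round(result[k] - baseline[k], 1) for k in baseline}
    print(scenario, "->", deltas)
```

The value of the twin is that the same loop answers traffic, shadow, and emergency-response questions alike, long before any concrete is poured.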
Today, these simulations are primarily used to design and understand a building or project. Eventually, they will be used to operate the resulting buildings and the businesses they house. For example, the signage (physical, digital, and virtual) inside a Starbucks will be selected and altered based on real-time tracking of which sorts of customers use the store and when, as well as remaining inventory in that location. The mall where a Starbucks is located will also direct customers to that site, or discourage them from doing so, based on its lines and the proximity of substitutes (or another Starbucks). And the mall will connect into the city’s underlying infrastructure systems, thereby enabling AI-powered traffic light networks to operate with more (that is, better) information, and helping city services, such as fire and police, better respond to emergencies.
Though these examples focus on what’s called “AEC,” or architecture, engineering, and construction, such ideas are easily repurposed to other use cases. Various militaries around the world have been using 3D simulations for years—and as discussed in the hardware chapter, the US Army awarded Microsoft a contract worth more than $20 billion for HoloLens headsets and software. The utility of digital twins to aerospace and defense companies, too, is obvious (if perhaps even more terrifying than the army using VR). More hopeful are medicine and healthcare. Just as students might use 3D simulation to explore the human body, so too will physicians. In 2021, neurosurgeons at Johns Hopkins performed the hospital’s first-ever AR surgery on a live patient. According to Dr. Timothy Witham, who led the surgery and is the director of the hospital’s Spinal Fusion Laboratory, “It’s like having a GPS navigator in front of your eyes in a natural way so you don’t have to look at a separate screen to see your patient’s CT scan.”6
Dr. Witham’s GPS analogy reveals the critical difference between the so-called minimum viable product of commercial AR/VR and that for consumer leisure. To gain adoption, consumer VR/AR headsets must be more compelling or functional than the experiences offered by alternatives, such as a console video game or smartphone messaging app. The immersion offered by mixed-reality devices is a differentiator, but as discussed in Chapter 9, there are still many drawbacks. For example, Fortnite can be played on nearly any device, which means a user can play with anyone they know. Population: One is essentially limited to those who own a VR headset. In addition, Fortnite can also be experienced at a higher resolution, with greater visual fidelity, higher frame rates, more concurrent users, and without the risk of nausea. For many gamers, VR games are not yet good enough to successfully compete with console-, PC-, or smartphone-based titles. But comparing surgery with AR to surgery without it is like comparing driving with GPS to driving without it—the trip will be made regardless of whether the technology exists, while its use depends on whether it has a meaningful impact on the outcome (e.g., a shorter drive time). For surgery, this means a higher success rate, faster recovery time, or lower cost. And while the technical limitations of today’s AR/VR devices doubtlessly limit their contributions to surgery, even a slight impact will justify their cost and use.
* Neal Stephenson described this sort of technology and experience at length in The Diamond Age, which was published in 1995, three years after Snow Crash. He called such products interactive books, or “ractives” for short, with performers known as “ractors,” as in interactive actors.