The concept of the metaverse has catapulted into the public consciousness. But in a sea of emerging technologies, it remains nascent and not yet well understood.
Ipsos defines the metaverse as "a virtual, computer-generated world where people can socialise, work and play."
No matter the final scope and texture of the metaverse, one thing is clear: the ultimate metaverse will need to depict real experiences as accurately as humanly (or rather, technically) possible. The promise and vision of the metaverse is to completely change the way we experience things built with computers and code. Reality and society can't glitch. Our immersive experience can't stop because of a third-party code change or configuration error. The metaverse requires 'infinite nines' of availability, along with globally intelligent redundancy, rerouting and failover, to deliver an always-on experience.
As keen observers, we're particularly interested in what changes the metaverse might bring to the fields of application development and digital experience. What follows are our initial thoughts in this space.
Code characteristics
The world is already in many ways moving to real-time or near real-time processing. In the financial world, straight-through processing means transactions occur as they are initiated rather than in day-end batches. Mid-sized to larger organisations are also implementing large event-streaming platforms and ingesting these streams into huge analytics engines to understand customer needs and system responses to real-world conditions on-the-fly.
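To make the shift concrete, here is a minimal sketch of the straight-through model: each transaction is processed the moment it enters the stream, rather than being queued for a day-end batch. The names and data are illustrative only; a production system would consume from an event-streaming platform such as Kafka rather than a Python list.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float

def settle(txn: Transaction) -> str:
    # In straight-through processing, settlement happens at initiation time,
    # not in an end-of-day batch run.
    return f"settled {txn.amount:.2f} for {txn.account}"

def on_stream(stream):
    # Consume events as they arrive; in practice `stream` would be a
    # subscription to an event-streaming platform, not an in-memory list.
    for txn in stream:
        yield settle(txn)

results = list(on_stream([Transaction("acct-1", 250.0),
                          Transaction("acct-2", 99.95)]))
```

The essential point is architectural: the consumer reacts to each event individually, so downstream analytics can observe system behaviour on-the-fly instead of hours later.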
The metaverse takes this to another level. As one research paper notes, "the metaverse is expected to be real-time persistent with no ability to pause it. It continues to exist and function even after users have left. This trait shifts away the centricity of the user to the virtual world itself."
One can only begin to imagine the resiliency and performance characteristics of the underlying network that will be required to support and deliver the metaverse to the world. Purported to change the future of the internet, the metaverse will be very different from the patchwork of autonomous systems and providers that collectively make up today's internet infrastructure. It will also need to be truly global and redundant, capable of keeping people in the metaverse experience regardless of connection adversity or ambient traffic conditions.
Any software application coded for the metaverse will need to be designed with the underlying network – and specifically any constraints posed by that network – in mind. That is very different from how many applications are currently designed, often with only passing consideration given to how they will perform on networks with varying latency and performance characteristics.
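What does designing for network constraints look like in practice? One basic pattern is giving every remote call an explicit latency budget, with jittered exponential backoff on retry, so a congested path has a chance to recover. The sketch below is illustrative; all names are our own, and real metaverse runtimes would layer far more sophistication on top.

```python
import random
import time

def fetch_with_budget(operation, attempts=3, timeout_s=0.2, base_backoff_s=0.05):
    # Run `operation` under an explicit latency budget, retrying with
    # jittered exponential backoff. Treating the network's constraints as a
    # first-class design input, rather than an afterthought.
    for attempt in range(attempts):
        try:
            return operation(timeout_s)
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            # Back off before retrying so a congested path can recover.
            time.sleep(base_backoff_s * (2 ** attempt) * random.uniform(0.5, 1.0))

# Simulated flaky network call: times out twice, then succeeds.
calls = {"n": 0}
def flaky(timeout_s):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError
    return "payload"

result = fetch_with_budget(flaky)
```

The point is not the specific numbers but the posture: the code assumes the network will misbehave and defines, up front, how the experience should respond when it does.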
Today's web-based applications are already heavily reliant on a large number of dependencies and interdependent systems and services in order to function. A break or vulnerability in that chain can already cause degradation or loss of service. The metaverse, and the applications coded for it, are likely to be made up of even more tightly integrated dependencies. In the event that one part of the experience fails to render, the whole experience will be materially impacted. Part of the metaverse may simply fail to appear in front of us. In an immersive virtual experience in which we are active participants, having a piece of reality fail before us is likely to be quite disorienting.
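One defensive pattern for that failure mode is graceful degradation: compose the experience part by part, and substitute a low-fidelity placeholder for any part whose dependency fails, rather than letting one broken dependency blank the whole scene. The sketch below is purely illustrative; the part names and fallback values are our own invention.

```python
def render_scene(renderers, fallbacks):
    # Compose the scene part by part; if one dependency fails to render,
    # degrade that single part to a placeholder instead of failing the frame.
    scene = {}
    for part, render in renderers.items():
        try:
            scene[part] = render()
        except Exception:
            scene[part] = fallbacks.get(part, "placeholder")
    return scene

def broken_avatars():
    # Simulates a failing dependency, e.g. an asset service outage.
    raise RuntimeError("asset service unavailable")

scene = render_scene(
    {"terrain": lambda: "terrain-mesh", "avatars": broken_avatars},
    {"avatars": "static-avatars"},
)
```

A static stand-in is still disorienting, but far less so than a hole in reality.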
There's already evidence that metaverse developers won't tolerate these kinds of experience glitches either. Many are currently developing on multiple metaverse platforms, effectively hedging their bets as to which might first gain traction or offer a higher level of experience or resiliency. Metaverses that are unreliable, either due to the way they are coded or due to underlying operating systems, protocols or infrastructure constraints, may not get a second chance with their developer or user ecosystems. With so much competition and money in the space, the pressure to get things right the first time is intense.
It isn't just code quality and network resiliency that will determine whether a metaverse succeeds or fails. Delivery of metaverse experiences will also require new types of monitoring.
Developers will likely have access to some open telemetry, courtesy of their metaverse platform of choice. However, they may also wish to instrument different parts of the end-to-end experience independently to verify that each is functioning and responding as intended. A collective intelligence approach will help to ensure that developers have access to the right combination of metrics to judge the health and performance of the metaverse, and of their specific contribution to it.
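As a simple illustration of independent instrumentation, a developer might judge their own component's contribution to the experience from its latency and error samples, alongside whatever telemetry the platform exposes. The thresholds below are illustrative assumptions, not a real service-level objective.

```python
import statistics

def experience_health(samples, slo_ms=100, error_budget=0.01):
    # Summarise one component's health from its own measurements:
    # 95th-percentile latency plus error rate, judged against an
    # (illustrative) latency objective and error budget.
    latencies = [s["latency_ms"] for s in samples if s["ok"]]
    error_rate = sum(1 for s in samples if not s["ok"]) / len(samples)
    p95 = statistics.quantiles(latencies, n=20)[-1]
    return {
        "p95_ms": p95,
        "error_rate": error_rate,
        "healthy": p95 <= slo_ms and error_rate <= error_budget,
    }

# Synthetic samples: 99 successful calls between 40-49 ms, one failure.
samples = [{"ok": True, "latency_ms": 40 + (i % 10)} for i in range(99)]
samples.append({"ok": False, "latency_ms": 0})
report = experience_health(samples)
```

Combining such per-component signals is where the collective intelligence approach comes in: no single vantage point sees the whole end-to-end experience.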
The metaverse requires us to accept new realities: principally, the one we'll live in; but also one that's powered by computers, code and the internet yet shares none of their glitchy, error-prone characteristics; and one where concepts like 'downtime' and 'uptime' cease to exist, because the only acceptable availability level for reality is 'all of the time'.