The preceding year has borne witness to an unprecedented acceleration of technological advancement, transforming what was once confined to the realm of science fiction into tangible reality. In this whirlwind of progress, one domain that has undergone remarkable evolution is Virtual Reality (VR), which now arguably transcends its conventional nomenclature. It has assumed a new title, one of my own coining: Simulated Reality.
To elucidate the rationale behind this shift in nomenclature, it becomes imperative to explore the amalgamation of diverse technological disciplines within the VR landscape. Notably, the current trajectory of the VR market is marked by a conspicuous integration of Augmented Reality (AR), a blend commonly labeled Mixed Reality (MR). Pioneering this convergence are industry titans such as Apple, with the Vision Pro, and Meta, with the Quest 3. These devices not only inherit the established functionality of traditional VR headsets but also introduce innovative features, foremost among them an array of external cameras enabling passthrough environments.

The incorporation of Augmented Reality is further complemented by two pivotal technological strides witnessed in the past year: eye tracking and hand tracking. The former, exemplified prominently in the Vision Pro, is the culmination of years of development, now entering widespread production. Leveraging the human tendency to convey intent through gaze, eye tracking emerges as a potent instrument for VR headsets. The latter, hand tracking, harnesses the same external cameras used for passthrough to discern intent from hand gestures. This feat owes much to recent breakthroughs and optimizations in artificial intelligence, with on-device models achieving unprecedented accuracy even on hardware as dated as the Quest 2.
The synergy of hand tracking and eye tracking alone propels VR interaction to a level approaching human-like fluidity. Yet this merely scratches the surface of the innovations on the horizon. Delving deeper into the features under exploration for the next generation of headsets unveils a realm characterized by non-invasive Brain-Machine Interfaces (BMIs). Once perceived as rudimentary, electroencephalograms (EEGs) have undergone remarkable enhancements fueled by compact diffusion models. Experiments have demonstrated the ability to reconstruct images of what a person is visualizing, compose text with near-real-time accuracy, and more. Economically viable EEGs, some as discreet as earbuds, pave the way for surreal interactions with the virtual world. Tasks are simply intended rather than manipulated through tools; the friction between brain and machine, epitomized by manual interaction, disappears.
Embark with me for a moment on a brief journey into the future, envisioning a landscape five years from our present. In the immediate future, major brands will traverse the road to innovation, initially introducing incremental enhancements to the somewhat niche VR market. These early iterations, characterized by bulkiness, costliness, and a penchant for catering to tinkering enthusiasts, lay the foundation for the current state of affairs. However, a paradigm shift awaits in the coming two years. The next wave of VR headsets will undergo a profound metamorphosis, transcending the realm of virtual reality competition to engage in direct rivalry with smartphones. Despite retaining a degree of bulk, these headsets will boast a form designed for comfort, inviting users into an immersive experience without compromise. We are already seeing attempts at this with Meta’s ‘Mirror Lake’ project and Ray-Ban smart glasses. Yet their allure extends beyond mere aesthetics, as these devices redefine functionality. They promise a more organic interaction with content, mirroring the familiar features found on smartphones but on an infinite screen with minimal gestures. This tantalizing prospect becomes more tangible as companies edge closer to the production of ultrathin headset screens, heralding a reality closer than commonly perceived.
Yet the transformative potential of this evolution isn’t confined to hardware alone. On the software frontier, an array of untold experiences awaits. Picture a world where maps and navigation seamlessly unfold over the real world, where video calls manifest as lifelike holograms within chosen realms. Real-time captions materialize above individuals in a room, transcending communication barriers erected by language itself. The very act of conceiving an imagined world yields tangible, explorable 3D models. Most intriguingly, individuals could replay their dreams, unlocking a realm where the boundaries between reality and imagination blur. In the span of five years, the convergence of hardware and software unfolds a narrative of technological transcendence, recasting virtual reality headsets not merely as gadgets but as portals to realms where the extraordinary becomes commonplace.
Nothing I’ve detailed is in the realm of fiction. In fact, most of it is either being tested or is already in use by hobbyists. The question is not if such technologies will be available, but when. I am quite confident in predicting that most if not all of what I’ve mentioned will come to pass for mass use before 2030. However, the waiting period might prove shorter, given the ongoing exploration of even more wondrous technologies within research laboratories. These innovations, surpassing the scope of my present discussion, beckon towards the ultimate ‘holy grail’: invasive Brain-Machine Interfaces. This pinnacle of technology, capable of reading and inscribing data directly into the mind, not only facilitates reality augmentation but promises complete and flawless disconnection from our current reality into any other conceivable realm. The strides taken presently lay the groundwork for such advancements. As we traverse this path towards the end of this branch of the technological tree, one might only imagine what such an arrival would be called. I will call it Simulated Reality.