Data Privacy in the Metaverse: Real Threats in a Virtual World

By Alexa Coughlin | October 24, 2022

The metaverse promises to revolutionize the way we interact with each other and the world – but at what cost?

Imagine having the ability to connect, work, play, learn, and even shop all from the comfort of your own home. Now imagine doing it all in 3D with a giant pair of goggles strapped to your face. Welcome to the metaverse.

Dubbed ‘the successor to the mobile internet’, the metaverse promises to revolutionize the way we engage with the world around us through a network of 3D virtual worlds. While Meta (formerly Facebook) is leading the charge into this wild west of virtual reality, plenty of other companies are along for the ride. Everyone from tech giants like Microsoft and NVIDIA, to retailers like Nike and Ralph Lauren, is eager to try their hand at navigating cyberspace.

Boundless as the promise of this new technology may seem, there’s no such thing as a free lunch when it comes to an industry where people (and their data) have historically been the product. The metaverse is no exception.

Through the core metaverse technologies of VR and AR headsets, users will provide companies like Meta with access to new categories of personal information that have historically been extremely difficult, if not totally impossible, to track. Purveyors of this cyberworld will have extremely sensitive biometric data – facial expressions, eye movements, even a person’s gait – at their fingertips. Medical conditions, emotions, preferences, mental states, and even subconscious thoughts – it’s all there. All just inferences waiting to be extracted and monetized. With Meta having recently lost an estimated $10 billion in ad revenue to changes brought by Apple’s App Tracking Transparency feature, it’s not hard to fathom what plans a company like it might have in store for this new treasure trove of prized data.

Given the extremely personal and potentially compromising nature of this data, some users have already started thinking about how they might combat this invasion of privacy. Some have chosen to rely on established data privacy concepts like differential privacy (i.e. adding calibrated noise to different VR tracking measures) to obfuscate their identities in the metaverse. Still others have turned to physical means of intervention, like privacy shields, to prevent eye movement tracking.
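To make the differential privacy idea concrete, here is a minimal, hypothetical sketch of the classic Laplace mechanism applied to a single tracking measurement (say, a gaze angle in degrees). The function names, the sensitivity, and the epsilon value are all illustrative assumptions, not any real headset API:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two exponential draws with mean `scale`
    # is Laplace-distributed with mean 0 and scale `scale`.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def privatize(value: float, sensitivity: float, epsilon: float) -> float:
    """Laplace mechanism: add noise with scale sensitivity/epsilon.

    Smaller epsilon means stronger privacy but a noisier (less useful)
    reported value. Both parameters here are illustrative.
    """
    return value + laplace_noise(sensitivity / epsilon)

# Hypothetical example: obfuscate a gaze angle before it leaves the device.
raw_gaze_deg = 12.4
noisy_gaze_deg = privatize(raw_gaze_deg, sensitivity=1.0, epsilon=0.5)
```

The trade-off users face is exactly the epsilon parameter: enough noise to blur an identifying signal like gaze or gait tends to also degrade the experience that depends on that signal.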

Creative as these approaches to privacy might be, users should not have to rely on the equivalent of statistical party tricks or glorified sunglasses to protect themselves from exploitation in the metaverse. That said, given the extreme variance in the robustness of data regulations across the world, glorified sunglasses may not in fact be the worst option for some. For example, while most of this biometrically derived data may qualify as ‘special category’ data under the broad categorization of the EU’s GDPR, it may not warrant special protections under the narrower definitions of Illinois’ BIPA (Biometric Information Privacy Act). And that’s to say nothing of the 44 U.S. states with no active data protection laws at all.

With the metaverse still in its fledgling stages, awaiting mass market adoption, it is crucial that regulators take this opportunity to develop new laws that will protect users from these novel and specific threats to their personal data and safety. Until then, consumer education on data practices in the metaverse remains hugely important. It’s essential that users stay informed about what’s at stake for them in the real world when they enter the virtual one.