It was 2016, and Jordan Belamire was excited to experience QuiVr, a new fantastical virtual reality game, for the first time. With her husband and brother-in-law looking on, she put on a VR headset and became immersed in a snowy landscape. Represented by a disembodied set of floating hands along with a quiver, bow, and hood, Belamire was now tasked with taking up her weapons to fight mesmerizing hordes of glowing monsters.
But her excitement quickly turned sour. Soon after she entered online multiplayer mode and spoke over voice chat, another player in the virtual world began making rubbing, grabbing, and pinching gestures at her avatar. Despite her protests, the behavior continued until Belamire took off the headset and quit the game.
My colleagues and I analyzed responses to Belamire’s subsequent account of her “first virtual reality groping” and observed a clear lack of consensus about what counts as harmful behavior in virtual spaces. Though many expressed disgust at this player’s actions and empathized with Belamire’s description of her experience as “real” and “violating,” other respondents were less sympathetic: after all, they argued, no physical contact had occurred, and she always had the option to exit the game.
Incidents of unwanted sexual interaction are by no means rare in existing social VR spaces and other virtual worlds, and plenty of other troubling virtual behaviors (like the theft of virtual items) have become all too common. All these incidents leave us uncertain about where “virtual” ends and “reality” begins, challenging us to figure out how to avoid importing real-world problems into the virtual world and how to govern when injustice happens in the digital realm.
Now, with Facebook predicting the coming of the metaverse and proposing to move our work and social interactions into VR, the importance of dealing with harmful behavior in these spaces is drawn even more sharply into focus. Researchers and designers of virtual worlds are increasingly setting their sights on more proactive methods of virtual governance, ones that not only deal with acts like virtual groping once they occur but also discourage such acts in the first place while encouraging more positive behaviors.
These designers are not starting entirely from scratch. Multiplayer digital gaming—which has a long history of managing large and sometimes toxic communities—offers a wealth of ideas that are key to understanding what it means to cultivate responsible and thriving VR spaces through proactive means. By showing us how we can harness the power of virtual communities and implement inclusive design practices, multiplayer games help pave the way for a better future in VR.
The laws of the real world, at least in their current state, are ill-suited to addressing the real wrongs that occur in fast-paced digital environments. My own research on ethics and multiplayer games revealed that players can be resistant to “outside interference” in virtual affairs. And there are practical problems, too: In fluid, globalized online communities, it’s difficult to identify suspects and to determine jurisdiction.
And certainly, technology can’t solve all of our problems. As researchers, designers, and critics pointed out at the 2021 Game Developers Conference, combating harassment in virtual worlds requires deeper structural changes across both our physical and digital lives. But doing nothing is not an option, and existing real-world laws can be inappropriate or ineffective; in the meantime, we must turn to technology-based tools to proactively manage VR communities.
Right now, one of the most common forms of governance in virtual worlds is a reactive and punitive form of moderation based on reporting users, who may then be warned, suspended, or banned. Given the sheer size of virtual communities, these processes are often automated: for instance, an AI might process reports and remove users or content, or removals may be triggered automatically once a certain number of reports accumulate against a particular user.
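To make the mechanics concrete, here is a minimal sketch of how such threshold-based automated moderation might work. The class name, thresholds, and escalation steps are hypothetical illustrations, not the design of any particular platform.

```python
from collections import defaultdict

# Hypothetical thresholds; a real platform would tune these values
WARN_THRESHOLD = 3
SUSPEND_THRESHOLD = 5
BAN_THRESHOLD = 10

class ReportModerator:
    """Tallies reports per user and escalates sanctions at fixed thresholds."""

    def __init__(self):
        self.report_counts = defaultdict(int)

    def report(self, user_id: str) -> str:
        """Record one report against user_id and return the action taken."""
        self.report_counts[user_id] += 1
        count = self.report_counts[user_id]
        if count >= BAN_THRESHOLD:
            return "ban"
        if count >= SUSPEND_THRESHOLD:
            return "suspend"
        if count >= WARN_THRESHOLD:
            return "warn"
        return "none"

# Example: the tenth report against the same user triggers a ban
mod = ReportModerator()
for _ in range(10):
    action = mod.report("player_123")
print(action)  # -> "ban"
```

Even this toy version exposes the approach’s reactive character: sanctions arrive only after reports accumulate, which is precisely the limitation that proactive forms of governance aim to overcome.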