No help for victims of abuse in the metaverse, social experiment finds
Online bullying is nothing new, but the immersion of the metaverse makes it feel even more real. Alwaleed Philanthropies (AP) set out to test how people reacted when someone was bullied or discriminated against in the metaverse, and its results were disappointing, to say the least.
In April 2023, AP brought “fake” perpetrators and victims into popular metaverse platforms (Decentraland, Sandbox and Spatial) to see how people nearby would react to verbal abuse, and whether or not they would intervene.
The avatars were created to represent different ethnicities and religious backgrounds, and the perpetrators made harmful comments on that basis. The experiment found that 70% of bystanders showed no response at all, and those who did intervene waited an average of 2 minutes and 2 seconds into the altercation before stepping in.
Users were more likely to intervene if the abuse was aimed at religion (at least one person responded in 50% of experiments), while no users intervened in cases of abuse and bullying targeting race. Beyond this bystander apathy, AP noted that the systems for reporting abusive behavior were not robust enough: “Rather than a standardized mechanism for users to escalate anti-social behavior, communities in the metaverse are generally expected to self-organize and safeguard users on their own.”
Currently, very little about the metaverse is standardized, making its many platforms feel disparate and fragmented. Governments are rushing to draft metaverse regulations that apply across the board, but little has been achieved and nothing has been enforced. As in the early days of the internet, it's a wild frontier.
While individual platforms could feasibly manage safety themselves for now, if the metaverse is to become the workplace, playground and headquarters of the future, user safety must be addressed at a broader level. “As the world looks to converge futures, livelihoods, and a new outlook on human connectivity with immersive technology – this experiment highlights the need for a near trillion-dollar industry to invest in developing robust safeguarding mechanisms, to create an online ecosystem that is safe for everyone,” AP noted.
But this goes beyond online behavior. Organizations and schools in the region are exploring ways to build a presence in the metaverse, making it increasingly likely that students and businesspeople alike will regularly spend part of their time in immersive environments. “…The boundary between virtual and reality is likely to become even more tenuous,” an AP spokesperson wrote in an email interview. “Anti-social behavior on virtual platforms is far from self-enclosed. And so, immersive cyberspace must be held to the same standard to avoid discrimination and abusive behavior permeating into the physical world.”
The metaverse has been marketed as a technology that can bring people together from across the world, remotely, like never before. If it remains a space where some groups aren't welcome, it may never fulfill that promise.