
Twitter Spaces is rife with abuse, according to report

One of the most difficult aspects of running a social media platform is preventing abuse and misuse. For this reason, operators usually devote substantial resources to keeping their platforms safe for everybody, and every new feature has to be screened carefully for potential abuse before it is introduced.

This, it seems, was not the case when Twitter added a new feature this year. When the platform rolled out Spaces, where people can come together for live audio chats, some members of staff wanted to know how their employer was going to keep the feature safe. Apparently, Twitter had no plan.

Now, the worst fears of those concerned employees have come true, as users have turned Spaces into a platform for spreading hate speech, bullying others, and advocating violence.

Spaces was Twitter’s rushed response to the popularity of Clubhouse, where people, including celebrities, gather for audio chats. Clubhouse grew through invitations from existing members.

Twitter executives, having run a social media platform for a long time, knew the dangers. According to anonymous Twitter staff who spoke to The Washington Post, Kayvon Beykpour, a senior Twitter employee, admitted users would violate the terms and conditions. Leadership also had warnings from its own staff.

However, the leadership team was focused on what it believed would revive Twitter. Spaces looked like a likely candidate, so the feature was released anyway.

Less than a year later, Twitter has a problem on its hands. Spaces has become a cesspit where some of the most dangerous groups spread their messages, including supporters of terrorist organizations, white nationalists, and anti-vaxxers. Some of these actors command audiences of hundreds of listeners when they speak, as confirmed by The Washington Post.

As incredible as it may sound, Spaces has no form of moderation by either humans or software. Twitter admits it does not have technology that can analyze audio in real time.

Twitter users have noticed these dangerous groups and individuals misusing Spaces and have posted about them. One user described how recruiters for a recognized terrorist group have been hosting regular sessions and openly calling for the genocide of a group of people. Other users confirm that some regular hosts have declared allegiance to terrorist groups and argued that some people deserve to die.

Spaces was launched under the leadership of Jack Dorsey, who recently resigned as CEO of Twitter. He had been accused of being an absentee leader, with an investor attempting to oust him for the second time. Ironically, that pressure pushed Twitter to ship features before they were ready: the agreement that allowed Dorsey to remain as CEO stipulated a faster product roadmap, which meant the company would learn to manage features while users were already on them.

Twitter is not the only social media company that has ignored warnings about how dangerous its products are. Meta (formerly Facebook), for example, ignored internal findings that one of its platforms harmed the mental health of vulnerable users, which prompted a summons to appear before Congress.

Reportedly, those at Twitter who raised the alarm about Spaces were sidelined. That will only encourage employees to look the other way the next time an issue like this comes up.

Alarmingly, Twitter does not seem ready to slow down. The new CEO, Parag Agrawal, has hammered on becoming a faster company when it comes to product releases and has restructured the organization with that goal in mind. Two top executives are on their way out, including one brought on board to help with the toxicity users face on the platform. He is being replaced by the same executive who acknowledged the dangers of Spaces but decided to move ahead anyway.

Addressing the Spaces situation, company spokesperson Viviana Wiewall acknowledged that Twitter cannot scan audio in real time for offensive language. “Ensuring people’s safety and encouraging healthy conversations, while helping hosts and listeners to control their experience, have been key priorities since the beginning of [Spaces’] development. We’re exploring avenues in this regard, but it’s not something that we have available at this time.”

Wiewall noted that Twitter can scan the titles of Spaces for trigger words. Evidence shows this safety precaution is seriously lacking, though: many Spaces with clearly offensive titles exist and command large audiences.
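To see why title screening alone is a weak safeguard, here is a minimal sketch of naive keyword matching. The blocklist, function name, and example titles are hypothetical illustrations under assumed behavior, not Twitter’s actual system.

```python
# Hypothetical sketch of naive trigger-word screening of Space titles.
# Nothing here reflects Twitter's real implementation.
BLOCKLIST = {"genocide", "terror"}  # assumed example trigger words

def title_is_flagged(title: str) -> bool:
    """Flag a title if any whitespace-separated word is on the blocklist."""
    return any(word in BLOCKLIST for word in title.lower().split())

print(title_is_flagged("discussing genocide live"))  # True: exact match
print(title_is_flagged("discussing g3nocide live"))  # False: a trivial misspelling evades the filter
```

Exact-match filters like this are trivially evaded by misspellings, synonyms, or coded language, which is consistent with the offensive titles the report found slipping through.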

Compounding the failure, Twitter’s own recommendation algorithm promotes these offensive Spaces, increasing their reach.

Source: The Washington Post

Written by HackerVibes
