Why algorithms promote violent content among young men

It was 2022 and Cai, then 16, was scrolling rapidly through pictures and messages on his phone. He says one of the first videos he saw on his social media feed was of a cute dog.

He says that “out of the blue” he was recommended videos of someone being hit by a car, a monologue by an influencer sharing misogynistic opinions, and others of violent fights. He wondered why these videos were being shown to him.

Meanwhile, in Dublin, Andrew Kaung was working as a user safety analyst at TikTok, a position he held from December 2020 to June 2022.

He says he and a colleague decided to look at what the app’s algorithms were recommending to users in the UK, including those aged 16. He previously worked at Meta, the company that owns Instagram, another of the sites Cai uses.

When Andrew analysed TikTok content, he was alarmed to see how the platform was showing some teenage boys videos containing violence and pornography, and promoting misogynistic ideas, he told the BBC’s Panorama programme.

He says that, in general, teenage girls were recommended very different content based on their interests.

TikTok and other social media platforms use artificial intelligence (AI) tools to remove the vast majority of harmful content and flag other content for review by human moderators. But AI tools can’t identify everything.

Andrew Kaung recounts that during his time at TikTok, all videos that were not removed or flagged by AI for review by human moderators — or reported by other users to moderators — were only reviewed manually if they reached a certain view threshold.

At the time, that threshold was set at 10,000 views or more, he says. That meant some young users were being exposed to harmful videos. Most social media companies allow people to sign up as young as 13.

TikTok says 99% of the content it removes for violating its rules is flagged by AI or human moderators before it reaches 10,000 views. It adds that it conducts proactive investigations into videos that reach fewer than that number of views.
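Taken together, Andrew’s description amounts to a simple escalation rule: AI screening runs on everything, and whatever it neither removes nor flags only reaches a human moderator once it has already been watched many times. The Python sketch below is purely illustrative; the threshold constant, the `Video` fields and the `needs_human_review` function are hypothetical stand-ins based on the figure he cites, not TikTok’s actual system.

```python
from dataclasses import dataclass

# Hypothetical illustration of a view-threshold escalation rule.
# This is not TikTok's real moderation pipeline; the names and the
# constant are stand-ins based on the figure Andrew Kaung cites.
REVIEW_THRESHOLD = 10_000  # views before unflagged content reaches human review

@dataclass
class Video:
    removed_by_ai: bool       # the AI classifier removed it outright
    flagged_by_ai: bool       # the AI classifier queued it for human review
    reported_by_users: bool   # another user reported it to moderators
    views: int

def needs_human_review(video: Video) -> bool:
    """A clip that slips past AI and is never reported is only seen by
    a human moderator once it has already been widely viewed."""
    if video.removed_by_ai:
        return False                         # already taken down
    if video.flagged_by_ai or video.reported_by_users:
        return True                          # normal review path
    return video.views >= REVIEW_THRESHOLD   # otherwise, only after many views

# A harmful clip with 9,500 views and no flags would not yet be reviewed:
print(needs_human_review(Video(False, False, False, 9_500)))   # False
print(needs_human_review(Video(False, False, False, 12_000)))  # True
```

Under such a rule, anything the AI misses can circulate widely before a person ever looks at it, which is the gap Andrew says left some young users exposed.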

Andrew Kaung says he raised concerns that violent and misogynistic content was being forced on teenage boys. (Photo: BBC)

While working at Meta, Andrew Kaung noticed a different problem. While most videos were taken down or flagged to moderators by AI tools, the site also relied on users to report some videos once they had seen them.

He says he raised this at both companies, but mostly nothing was done because of the amount of work involved or the high cost. He says TikTok and Meta later made some improvements, but stresses that young users like Cai were at risk of seeing harmful content in the meantime.

Several former employees of social media companies have told the BBC that Andrew Kaung’s concerns echoed their own experiences.

The algorithms of most social media companies have been recommending harmful content to minors, even if it was not intentional, Ofcom, the UK’s communications regulator, told the BBC.

“Companies have turned a blind eye and have been treating children the same way they treat adults,” said Almudena Lara, Ofcom’s director of online safety policy development.

“The image gets stuck in your head and you can’t get it out”

TikTok told the BBC it is at the “forefront of the industry” in terms of safety settings for teenagers and employs more than 40,000 people to ensure user safety.

It says it expects to invest “more than $2 billion in security” this year alone, and that 98% of the content it removes for violating its rules is detected proactively.

Meta, which owns Instagram and Facebook, says it has more than 50 different tools, resources and features to offer teens “positive, age-appropriate experiences.”

Cai told the BBC that he tried using one of Instagram’s tools and a similar one on TikTok to indicate that he was not interested in violent or misogynistic content, but it continued to be recommended to him.

“The image gets stuck in your head and you can’t get it out. It messes up your brain. So you think about it the rest of the day,” he says.

Girls his age whom he knows are recommended videos about music and makeup rather than violence, he says.

Cai says that one of his friends was drawn to the content of a controversial influencer. (Photo: BBC)

Cai, who is now 18, says he continues to receive recommendations for violent and misogynistic content on both Instagram and TikTok.

When we look at his Instagram recommendations, they include an image that downplays domestic violence. It shows two people side by side, one of whom has bruises, with the caption: “My love language.” Another shows a person being hit by a truck.

Cai says he has noticed how videos with millions of likes can be persuasive to other young people his age.

For example, he notes that one of his friends was drawn to the content of a controversial influencer and began to adopt misogynistic views.

His friend “went overboard,” Cai says. “He started saying things about women. You feel like you have to bring your friend back down to earth.”

Cai says he has left comments to indicate that he doesn’t like certain content, and that when he has accidentally hit “like” on a video, he has undone it in the hope that the algorithms would recalibrate. But he says he has ended up with many more videos like it on his screen.

Ofcom says social media companies are recommending harmful content to children. (Photo: BBC)

The fuel of algorithms

So how do algorithms operate?

According to Andrew Kaung, the fuel of algorithms is interaction, regardless of whether it is positive or negative. That would partly explain why Cai’s efforts to manipulate algorithms did not work.

The first step users take is to specify their tastes and interests when they sign up for a social network. Andrew explains that the content an algorithm initially serves to, say, a 16-year-old is based on the preferences they choose and on the preferences of other users of a similar age in a similar location.

According to TikTok, the algorithms do not take the user’s gender into account. But Andrew points out that the interests teenagers express when they sign up often have the effect of dividing them along gender lines.

The former TikTok employee says some 16-year-old boys may be exposed to violent content “immediately” because other teenage users with similar preferences have expressed interest in this type of content, even if that interest amounts to nothing more than lingering a little longer on a video that catches their attention.

The interests indicated by many teenage girls in the profiles he examined – “pop singers, songs, makeup” – meant that violent content was not recommended to them, he says.
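One way to read this is as a “cold start” problem: with no viewing history yet, the system leans on the interests a new user ticks at sign-up and on what other users of the same age in the same place have already engaged with. The following Python snippet is a deliberately toy sketch of that idea, under the assumption that recommendations are pooled across similar users; none of the names or data reflect any platform’s real code.

```python
from collections import Counter

# Hypothetical "cold start" sketch: a new user with no watch history is
# shown what similar users (same age band, overlapping declared interests)
# already engaged with. Toy data, invented names, no real platform code.
existing_users = [
    ("16-17", {"football", "gaming"}, ["fight_clip", "goal_compilation"]),
    ("16-17", {"gaming"},             ["fight_clip", "speedrun"]),
    ("16-17", {"pop", "makeup"},      ["makeup_tutorial", "concert_clip"]),
]

def cold_start_feed(age_band: str, interests: set, top_k: int = 2) -> list:
    """Rank videos by how often same-age users with overlapping interests
    engaged with them; the new user's own history plays no part yet."""
    votes = Counter()
    for other_age, other_interests, videos in existing_users:
        if other_age == age_band and interests & other_interests:
            votes.update(videos)
    return [video for video, _ in votes.most_common(top_k)]

# A 16-year-old who ticks "gaming" inherits the violent clip his cohort
# lingered on, even though he never asked for it:
print(cold_start_feed("16-17", {"gaming"}))   # ['fight_clip', 'goal_compilation']
print(cold_start_feed("16-17", {"makeup"}))   # ['makeup_tutorial', 'concert_clip']
```

In a scheme like this, the split Andrew describes emerges on its own: users who declare different interests at sign-up inherit very different feeds before they have watched anything.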

He says the algorithms use “reinforcement learning,” a method by which AI systems learn through trial and error, training themselves on how users behave with different videos.

Andrew Kaung explains that algorithms are designed to maximize engagement, showing videos they hope users will watch longer, comment on or like, all with the intention of keeping them coming back for more content.

The algorithm that recommends content for TikTok’s “For You” page doesn’t always discriminate between harmful and non-harmful content, he says.
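In other words, the objective Andrew describes rewards whatever is predicted to keep a user watching, liking or commenting, and it has no built-in notion of harm. A minimal Python sketch of such an engagement score, with entirely invented weights and field names, shows how a violent clip that holds attention can outrank a harmless one:

```python
from dataclasses import dataclass

# Hypothetical engagement-maximising ranking score. The weights and field
# names are invented for illustration; they are not any platform's real
# objective, but they capture the logic Andrew Kaung describes.

@dataclass
class Candidate:
    expected_watch_seconds: float   # predicted watch time
    expected_like_prob: float       # predicted probability of a "like"
    expected_comment_prob: float    # predicted probability of a comment
    is_harmful: bool                # the ranker may never even see this label

def engagement_score(c: Candidate) -> float:
    """The score reflects only predicted interaction; nothing here
    penalises harmful content, so whatever holds attention wins."""
    return (0.5 * c.expected_watch_seconds
            + 3.0 * c.expected_like_prob
            + 5.0 * c.expected_comment_prob)

candidates = {
    "violent_fight":   Candidate(45.0, 0.10, 0.08, is_harmful=True),
    "makeup_tutorial": Candidate(20.0, 0.15, 0.02, is_harmful=False),
}

# Ranked purely by predicted engagement, the harmful clip comes out on top:
ranked = sorted(candidates, key=lambda name: engagement_score(candidates[name]),
                reverse=True)
print(ranked)  # ['violent_fight', 'makeup_tutorial']
```

This also illustrates why Cai’s attempts to signal disapproval backfired: any interaction, even an angry comment or a quickly undone like, can feed the very signals the score is built on.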

The content that an algorithm initially offers a teenager is based on the preferences they select when they sign up for the platform. (Photo: Getty Images)

According to Andrew, one of the problems he identified when working at TikTok was that the teams involved in training and coding that algorithm didn’t always know the exact nature of the videos it was recommending.

“They see the number of users, the age, the trend, that kind of very abstract data. They are not necessarily exposed to the content,” says the former TikTok analyst.

So in 2022, he and a colleague decided to analyze what types of videos were being recommended to a range of users, including some as young as 16.

They were concerned about the violent and harmful content being offered to some teenagers, he says, and proposed that TikTok update its moderation system.

They wanted TikTok to clearly label videos so that everyone working there could see why they were harmful – for extreme violence, abuse, pornography, and so on – and to employ more moderators who specialised in these different areas. Andrew says his suggestions were rejected at the time.

TikTok says it had specialised moderators at the time and that, as the platform has grown, it has continued to hire more. It also says it separates different types of harmful content into what it calls queues for moderators.

“Like asking a tiger not to eat you”

Andrew Kaung says that while working at TikTok and Meta, it seemed very difficult to make the changes he thought were necessary.

“We are asking a private company, whose interest is to promote its products, to moderate itself. It is like asking a tiger not to eat you,” he says.

He believes that the lives of children and teenagers would be better if they stopped using their smartphones.

But for Cai, banning teenagers from using phones or social media is not the solution. Their phones are an integral part of their lives.

Instead, he wants social media companies to listen to what teens don’t want to see. He wants platforms to create tools that allow users to signal their preferences more effectively.

“I feel like social media companies don’t respect your opinion if it makes them money,” Cai says.

In the UK, a new law will force social media companies to verify the age of minors and crack down on sites that recommend porn or other harmful content to young people. Regulator Ofcom will be in charge of enforcing the law.

Ofcom, the UK’s communications regulator, will be in charge of enforcing the new law governing social media. (Photo: Getty Images)

Ofcom’s Almudena Lara says that while harmful content that predominantly affects young women – such as videos promoting eating disorders and self-harm – has rightly been under scrutiny, algorithms that push hate and violence mainly at teenage boys and young men have received less attention.

Ofcom says it can fine companies and take them to court if they fail to comply with the law, but the measures it contains will not come into force until 2025.

TikTok says it uses “innovative technology” and provides “cutting-edge” safety and privacy settings for teenagers, including systems that block content that may be inappropriate, and that it does not allow extreme violence or misogyny.

Meta also says it requires reviews from its own teams and that potential policy changes are subject to robust processes.

