October 6, 2025 at 9:55 am

Researchers Created A Social Network With AI Bots To Try And Solve Online Toxicity. It Failed.

by Michael Levanduski

Image: social media interaction icons (Shutterstock)

For those of a certain age, it is easy to remember the early days of social media when it was used almost exclusively to share pictures of family, get in touch with friends, share music, and generally talk about things in a nice and constructive way. I say ‘early days’ because that is pretty much how long it lasted.

Almost immediately, people began arguing about politics, sports, games, and anything else they could possibly disagree on. Shortly after, the echo chambers formed, which helped to radicalize people on virtually every topic.

Today, social media is a toxic virtual environment where people focus on the worst in each other and take offense when someone doesn’t see them in the best possible light. To make matters worse, social media companies benefit from this environment of outrage because it keeps users glued to their phones (and seeing the ads).

While nearly everyone agrees that social media has devolved into a terrible place, nobody wants to stop using it, and nobody seems able to fix it.

So, researchers decided to do what they do best: use science and technology to try to solve the problem.

Image: young women on social media (Shutterstock)

A team of researchers created a simulated social media platform and populated it exclusively with AI chatbots powered by OpenAI’s GPT-4o. The idea was to use this platform to test strategies and, hopefully, come up with an effective way to fix what is wrong with social media without losing what is good about it.
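For a sense of what such a simulation involves, here is a minimal Python sketch of an LLM-agent feed loop. Everything in it, from the persona list to the stubbed-out model call, is an illustrative assumption rather than the researchers’ actual code.

# Minimal sketch (not the researchers' code) of an LLM-agent social
# media simulation: agents take turns reading the shared feed and
# posting reactions. The model call is stubbed so the sketch runs
# without API access.
import random

PERSONAS = ["left-leaning teacher", "right-leaning farmer", "apolitical gamer"]

def llm_generate_post(persona, feed):
    # Stand-in for a call to a model like GPT-4o: given a persona and
    # the most recent post the agent saw, return a new post.
    seen = feed[-1] if feed else "nothing yet"
    return f"{persona} reacting to: {seen}"

def run_simulation(steps=10):
    feed = []  # shared timeline, newest last
    for _ in range(steps):
        persona = random.choice(PERSONAS)
        feed.append(llm_generate_post(persona, feed))
    return feed

for post in run_simulation():
    print(post)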

Spoiler alert: their study (which hasn’t yet been peer reviewed) shows that they not only failed to fix the problems on their virtual network but, in some cases, actually made them worse.

The team implemented six strategies that they predicted would help keep a social media site from becoming polarized and filled with negativity. These included switching to chronological news feeds, hiding social statistics (follower counts and the like), boosting diverse viewpoints (to fight against echo chambers), removing account bios, and more.
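To make the first of those strategies concrete, here is a rough Python sketch contrasting a typical engagement-ranked feed with a plain chronological one. The scoring weights and field names are illustrative assumptions, not details taken from the study.

# Hedged sketch of the kind of ranking change the researchers tested:
# swapping an engagement-ranked feed for a plain chronological one.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int  # higher = newer
    likes: int
    reposts: int

def engagement_feed(posts):
    # Typical platform default: surface whatever drew the most reactions.
    return sorted(posts, key=lambda p: p.likes + 2 * p.reposts, reverse=True)

def chronological_feed(posts):
    # Intervention: newest first, ignoring engagement signals entirely.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("calm update", timestamp=3, likes=2, reposts=0),
    Post("outrage bait", timestamp=1, likes=90, reposts=40),
]
print([p.text for p in engagement_feed(posts)])     # outrage bait first
print([p.text for p in chronological_feed(posts)])  # calm update first

Even in this toy version, the difference is visible: the engagement feed pushes the most provocative post to the top, while the chronological feed simply shows the newest.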

While a few of the changes produced slight improvements, overall the efforts failed, and some actually made things worse. The study was conducted by Petter Törnberg, an assistant professor who studies AI and social media, and research assistant Maik Larooij. In an interview with Ars Technica, Törnberg described the initial goal of the study, asking:

“Can we identify how to improve social media and create online spaces that are actually living up to those early promises of providing a public sphere where we can deliberate and debate politics in a constructive way?”

Part of the problem is that the platforms are motivated to keep users engaged, and there is no better way to do that than to keep them angry.

Törnberg explains:

“We already see a lot of actors — based on this monetization of platforms like X — that are using AI to produce content that just seeks to maximize attention. So misinformation, often highly polarized information — as AI models become more powerful, that content is going to take over.”

While their study may have had the best of intentions, it largely revealed that there is no easy solution to this problem. And the problem is only going to get worse as AI bots continue to flood these systems, many with the specific goal of ‘rage baiting’ human users so that they stay online even longer, generating more ad revenue for the social media companies.

In the end, it will be up to each individual to do their part in trying to make social media (and the world at large) better, not worse.
