How AI bots could sabotage the 2024 elections around the world


Hate speech, political propaganda, and outright lies are not new problems online, even if election years like this one exacerbate them. The use of bots, or automated social media accounts, has made it much easier to spread deliberate disinformation, as well as inaccurate rumors and other forms of misinformation. But the bots that influenced past election seasons often produced clumsy, grammatically mangled sentences. Now that large language models (artificial intelligence systems that generate text) are becoming accessible to more people, some researchers fear that automated social media accounts will soon get a lot more convincing.

That's according to a new study published in PNAS Nexus. In it, the researchers project, based on “previous studies on cyber attacks and automated algorithms,” that AI will help spread toxic content across social media platforms almost daily in 2024. The possible consequences, the study's authors say, could affect election results in the more than 50 countries holding elections this year, from India to the United States.

The research mapped connections among groups of bad actors across 23 online platforms, including Facebook and Twitter as well as niche communities on Discord and Gab, says the study's lead author Neil Johnson, a physics professor at George Washington University. The study found that extremist groups that post a lot of hate speech tend to form and survive longest on smaller platforms, which generally have fewer resources for content moderation. But their messages can have a much broader reach.




Many small platforms are “incredibly well connected to each other and internally,” Johnson says. This lets misinformation ricochet like a billiard ball among 4chan forums and other loosely moderated websites. If malicious content spills out of these networks onto mainstream social sites such as YouTube, Johnson and his colleagues estimate, one billion people are potentially vulnerable to it.

“Social media reduced the cost of spreading information or misinformation. AI is reducing the cost of producing it,” says Zeve Sanderson, executive director of New York University's Center for Social Media and Politics, who was not involved in the new study. “Now, whether you’re a foreign bad actor or part of a smaller domestic campaign, you can use these technologies to produce multimedia content that will be somewhat compelling.”

Studies of disinformation in previous elections have documented how bots can spread malicious content on social media, manipulating online discussions and eroding trust. In the past, bots simply repeated messages created by a person or a program, but today’s large language models (LLMs) are upgrading those bots with a new capability: machine-written text that sounds convincingly human. “Generative AI alone is not more dangerous than bots. It’s bots plus generative AI,” says computational social scientist Kathleen Carley of Carnegie Mellon University’s School of Computer Science. Generative AI and large language models can also be used to write software, making it faster and easier for programmers to code bots.

Many early bots were limited to relatively short posts, but generative AI can produce realistic paragraph-long comments, says Yilun Du, a doctoral student studying generative AI modeling at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory. Currently, AI-generated images and videos are easier to detect than text; with images and videos, Du explains, “you have to get every pixel perfect, so most of these tools are actually very inaccurate in terms of lighting or other effects on the images.” Text, however, is the ultimate challenge. “We don’t have tools with a significant success rate that can identify LLM-generated texts,” Sanderson says.

Still, there are some signs that can alert experts to AI-generated writing: overly perfect grammar, for example, or a lack of slang, emotional words, or nuance. “Writing software that shows what’s human-made and what’s not, and doing that kind of testing, is very expensive and very difficult,” Carley says. Although her team has worked on programs to identify AI bot content on specific social media platforms, she says the tools are imperfect. And each program would have to be completely rebuilt to work on a different website, Carley adds, because people on X (formerly Twitter), for example, communicate differently than Facebook users do.
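To make those surface signals concrete, here is a minimal, hypothetical Python sketch of the kind of heuristic check described above: it scores a post by the share of informal markers such as contractions, slang, and emotional words, and flags text with almost none of them. The word lists and threshold are illustrative assumptions, not taken from any published detector.

```python
import re

# Illustrative word lists; a real detector would use much larger,
# platform-specific vocabularies and many more features.
SLANG = {"lol", "tbh", "imo", "gonna", "wanna", "idk", "smh"}
EMOTION = {"love", "hate", "angry", "awesome", "terrible", "amazing"}

def informality_score(text: str) -> float:
    """Return the fraction of tokens that look informal or emotional."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    contractions = sum("'" in t for t in tokens)
    slangy = sum(t in SLANG for t in tokens)
    emotional = sum(t in EMOTION for t in tokens)
    return (contractions + slangy + emotional) / len(tokens)

def looks_suspiciously_polished(text: str, threshold: float = 0.02) -> bool:
    """Flag text with almost no informal markers: one weak hint, not proof,
    that it may be machine-generated."""
    return informality_score(text) < threshold
```

A real classifier would combine far more features and, as Carley notes, would still have to be retuned for each platform's distinct writing style.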

Many experts doubt that AI detection programs, which analyze text for signs of large language model involvement, can reliably identify AI-generated content. Adding watermarks to such material, or building filters and guardrails into AI models, can’t cover all the bases on its own, either. “In the space of AI use and disinformation, we are in an arms race” with bad actors, Carley says. “As soon as we find a way to detect it, they find a way to improve it.” Johnson and his colleagues also found that bad actors are likely to abuse basic versions of generative AI, such as GPT-2, which are publicly available and have laxer content filters than current models. Other researchers predict that the coming wave of malicious content will not be created with the sophisticated AI of large companies but will instead be generated with open-source tools built by a few programmers or by individuals.

But bots can evolve even with these simpler versions of AI. In previous election cycles, botnets stayed at the margins of social media; experts predict that AI-generated misinformation will spread much farther this time. That’s not only because AI can produce content faster; the dynamics of social media use have also changed. “Until TikTok, most of the social networks we saw were networks based on friends, followers and social graphs. It used to be that people followed people they were aligned with,” Sanderson explains. TikTok instead uses an algorithmic feed that injects content from accounts users don’t follow, and other platforms have modified their algorithms to follow suit. That content, as Sanderson points out, includes topics “that the platform is trying to figure out whether you like them or not,” leading to “a much wider net of content consumption.”

In Sanderson’s previous studies of bots on Twitter, research assistants often labeled an account as a bot or not by looking at its activity, including the photos and text it posted or reposted. “It was essentially like a kind of Turing test for accounts,” he says. But as generative AI steadily gets better at scrubbing grammatical irregularities and other tells from bot content, Sanderson believes the onus will have to fall on social media companies to identify these accounts. Those companies can examine the metadata associated with accounts, which outside researchers rarely have access to.

Rather than going after the false content itself, some disinformation experts believe that finding and containing the people who spread it would be a more practical approach. Du suggests that effective countermeasures could work by detecting activity from certain IP addresses or identifying a suspiciously large number of posts at a given time of day.
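As a rough illustration of that second idea, the sketch below flags accounts whose posts cluster into unusually large bursts within a single hour. The threshold and data layout are assumptions made up for the example, not a description of any platform's actual defenses.

```python
from collections import Counter
from datetime import datetime, timedelta

def bursty_accounts(posts, max_posts_per_hour=30):
    """posts: iterable of (account_id, datetime) pairs.
    Returns the set of account IDs that exceed the hourly posting
    threshold, a crude proxy for automated activity."""
    per_hour = Counter()
    for account_id, ts in posts:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        per_hour[(account_id, hour)] += 1
    return {acct for (acct, _hour), count in per_hour.items()
            if count > max_posts_per_hour}

# Toy example: an account posting every minute trips the threshold.
start = datetime(2024, 1, 1, 12, 0)
posts = [("acct_a", start + timedelta(minutes=i)) for i in range(60)]
posts += [("acct_b", start), ("acct_b", start + timedelta(hours=2))]
print(bursty_accounts(posts))  # {'acct_a'}
```

In practice such a check would only narrow the field; the same posting pattern can come from an enthusiastic human, which is why Du frames it as one countermeasure among several rather than a definitive test.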

This might work because there are “fewer bad actors than bad content,” says Carley. And misinformation peddlers are concentrated in certain corners of the Internet. “We know that a lot of the material comes from a few major websites that link to each other, and the content on those websites is generated by the LLMs,” she adds. “If we can detect the faulty website as a whole, suddenly we will have captured tons of misinformation.” Additionally, Carley and Johnson agree that moderating content at the level of small social media communities (posts from members of specific Facebook pages or Telegram channels, for example) would be more effective than sweeping policies that ban entire categories of content.

Still, all is not yet lost to the bots. Despite reasonable concerns about AI’s impact on elections, Sanderson and his colleagues recently argued against exaggerating the potential harms. The actual effects of increased AI content and bot activity on human behavior (including polarization, vote choice, and social cohesion) still need more research. “The fear I have is that we’re going to spend a lot of time trying to identify that something is happening and assume we know the effect,” Sanderson says. “It could be the case that the effect is not that big, and the bigger effect is fear, so we end up eroding trust in the information ecosystem.”
