When Louisiana’s parole board met in October to discuss the possible release of a convicted murderer, it called in a doctor with years of mental health experience to talk about the inmate.
The parole board wasn’t the only group paying attention.
A collection of online trolls took screenshots of the doctor from an online feed of her testimony and edited the images with artificial intelligence tools to make her appear naked. They then shared the doctored files on 4chan, an anonymous message board known for promoting harassment and spreading hate content and conspiracy theories.
It was one of numerous occasions when people on 4chan used new AI-powered tools, such as audio editors and image generators, to spread racist and offensive content about people who had appeared before the parole board, according to Daniel Siegel, a graduate student at Columbia University who investigates how AI is exploited for malicious purposes. Siegel chronicled activity on the site for several months.
The manipulated images and audio have not spread beyond the boundaries of 4chan, Siegel said. But experts who monitor fringe message boards said the efforts offered a glimpse into how nefarious Internet users could employ sophisticated artificial intelligence tools to power online hate and harassment campaigns in the months and years to come.
Callum Hood, head of research at the Center for Countering Digital Hate, said fringe sites such as 4chan – perhaps the most notorious of all – often gave early warning signs of how new technologies would be used to project extreme ideas. Those platforms, he said, are filled with young people who are “very quick to adopt new technologies” like AI to “project their ideology back into mainstream spaces.”
Those tactics, he said, are often adopted by some users on more popular online platforms.
Below are several issues resulting from the AI tools experts discovered on 4chan and what regulators and tech companies are doing about them.
Artificial Images and AI Pornography
AI tools like DALL-E and Midjourney generate novel images from simple text descriptions. But a new wave of AI image generators is being created for the express purpose of producing fake pornography, including removing clothes from existing images.
“They can use AI to just create an image of exactly what they want,” Hood said of online hate and misinformation campaigns.
There is no federal law banning the creation of fake images of people, leaving groups like Louisiana’s parole board scrambling to determine what can be done. The board opened an investigation in response to Mr. Siegel’s findings on 4chan.
“We would definitely take issue with any images produced that portray our board members or any participants in our hearings in a negative light,” said Francis Abbott, executive director of the Louisiana Board of Pardons and Committee on Parole. “But we have to operate within the law, and whether it’s illegal or not, that has to be determined by someone else.”
Illinois expanded its law governing revenge porn to allow targets of non-consensual pornography made by artificial intelligence systems to sue creators or distributors. California, Virginia and New York have also passed laws prohibiting the distribution or creation of AI-generated pornography without consent.
Cloned Voices
Late last year, ElevenLabs, an artificial intelligence company, launched a tool that could create a convincing digital replica of someone’s voice saying anything written in the program.
Almost as soon as the tool went live, 4chan users circulated clips of a fake Emma Watson, the British actress, reading Adolf Hitler’s manifesto, “Mein Kampf.”
Using content from Louisiana parole board hearings, 4chan users have since shared fake clips of judges uttering offensive and racist comments about defendants. Many of the clips were generated by ElevenLabs’ tool, according to Siegel, who used an AI voice identifier developed by ElevenLabs to investigate their origins.
ElevenLabs was quick to impose limits, including requiring users to pay before they could gain access to voice-cloning tools. But the changes did not appear to slow the spread of AI-created voices, experts said. Dozens of videos using fake celebrity voices have circulated on TikTok and YouTube, many of them sharing political misinformation.
Since then, some major social media companies, including TikTok and YouTube, have required labels on some AI content. President Biden issued an executive order in October calling for all companies to label such content and directing the Commerce Department to develop standards for watermarking and authenticating AI content.
Custom AI Tools
As Meta moved to gain a foothold in the AI race, the company adopted a strategy of releasing its software code to researchers. The approach, broadly termed “open source,” can accelerate development by giving academics and technologists access to more raw materials with which to find improvements and develop their own tools.
When the company released Llama, its large language model, to select researchers in February, the code quickly leaked onto 4chan. People there used it for different purposes: they modified the code to weaken or remove its guardrails, creating new chatbots capable of producing antisemitic ideas.
The effort previewed how tech-savvy users can modify open source and free-to-use AI tools.
“While the model is not accessible to everyone and some have attempted to circumvent the approval process, we believe the current launch strategy allows us to balance accountability and openness,” a Meta spokeswoman said in an email.
In the months since, language models have been developed to echo far-right talking points or to create more sexually explicit content. Image generators have been modified by 4chan users to produce nude images or generate racist memes, evading the controls imposed by large technology companies.