For Taylor Swift, the last months of 2023 were triumphant. Her Eras Tour was named the highest-grossing concert tour of all time. She premiered a concert film that breathed new life into the genre. And to top it off, Time magazine named her Person of the Year.
But in late January the megastar made headlines for a far less celebratory reason: she had become the latest high-profile target of nonconsensual, sexually explicit deepfake images made with artificial intelligence. Swift's fans were quick to report the abusive content as it circulated on social media platforms, including X (formerly Twitter), which temporarily blocked searches of her name. It was not the first case of its kind: women and girls around the world have long faced similar abuse. But Swift's prominence brought the issue to broad public attention, and the incident amplified calls for lawmakers to intervene.
“At this point we are too little, too late, but we can still try to mitigate the disaster that is unfolding,” says Mary Anne Franks, a professor at George Washington University Law School and president of the Cyber Civil Rights Initiative. Women are “canaries in the coal mine” when it comes to the abuse of artificial intelligence, she adds. “It won’t just be the 14-year-old girl or Taylor Swift. It will be politicians. It will be world leaders. It will be elections.”
Swift, who recently became a billionaire, could make some headway through individual litigation, Franks says. (Swift’s record label did not respond to a request for comment on whether the artist will file lawsuits or support efforts to crack down on deepfakes.) What is really needed, however, the law professor adds, are regulations that specifically prohibit this type of content. “If legislation had been passed years ago, when advocates were warning that this is what would happen with this type of technology, maybe we wouldn’t be in this position,” Franks says. One bill that could help victims in Swift’s position, she notes, is the Preventing Deepfakes of Intimate Images Act, which U.S. Representative Joe Morelle of New York introduced last May. If signed into law, the legislation would prohibit the sharing of nonconsensual deepfake pornography. Another recent proposal in the Senate would allow deepfake victims to sue the creators and distributors of such content for damages.
Advocates have been calling for policy solutions to nonconsensual deepfakes for years. A patchwork of state laws exists, but experts say federal oversight is lacking. “There is a dearth of enforceable federal laws” around nonconsensual deepfake pornography, says Amir Ghavi, senior AI attorney at the law firm Fried Frank. “There are some fringe laws, but generally speaking, there is no direct federal statute on deepfakes.”
However, a federal crackdown might not solve the problem, the lawyer explains, because a law criminalizing sexual deepfakes does not address a bigger issue: whom to charge with a crime. “It is very unlikely, in practice, that those people will be identified,” Ghavi says, noting that forensic analysis cannot always prove which software created a given piece of content. And even if law enforcement could identify where the images came from, prosecutors could run up against Section 230, a short but massively influential piece of legislation that shields websites from liability for what their users post. (It remains unclear, however, whether Section 230 applies to generative AI.) Civil liberties groups such as the American Civil Liberties Union have also warned that overly broad regulations could raise First Amendment concerns for journalists who report on deepfakes or for political satirists who wield them.
The smartest solution would be to adopt policies that promote “social responsibility” on the part of companies that own generative AI products, says Michael Karanicolas, executive director of the Institute for Technology, Law and Policy at the University of California, Los Angeles. But, he adds, “it is relatively uncommon for companies to respond to anything other than coercive regulatory behavior.” Some platforms have taken steps to curb the spread of AI-generated misinformation about election campaigns, so intervention is not unprecedented, Karanicolas says, but even technical safeguards can be circumvented by sophisticated users.
Digital watermarks, which flag AI-generated content as synthetic, are one possible solution backed by the Biden administration and some members of Congress. And in the coming months Facebook, Instagram and Threads will begin labeling AI-generated images posted to those platforms, Meta recently announced. Even if a standardized watermarking regime couldn’t stop people from creating deepfakes, it could still help social media platforms remove them or slow their spread. Moderating web content at this scale is possible, says a former policymaker who regularly advises the White House and Congress on AI regulation, pointing to social media companies’ success in limiting the dissemination of copyrighted media. “Both legal precedent and technical precedent exist to curb the spread of this material,” says the adviser, who requested anonymity given the ongoing deliberations over deepfakes. Swift, a public figure with a platform comparable to that of some presidents, could get ordinary people to start caring about the issue, the former policymaker adds.
For now, however, the legal terrain offers few clear paths forward, leaving some victims feeling overlooked. Caryn Marjorie, a social media influencer and self-described “Swiftie” who launched her own AI chatbot last year, says she faced an experience similar to Swift’s. About a month ago Marjorie’s fans alerted her to sexually explicit AI-generated deepfakes of her circulating online.
The deepfakes made Marjorie feel sick; she had trouble sleeping. But although she repeatedly reported the account that posted the images, it remained online. “I didn’t get the same treatment as Taylor Swift,” Marjorie says. “It makes me wonder: Do women have to be as famous as Taylor Swift for these explicit AI images to be taken down?”