An artificial intelligence researcher confronts electoral deepfakes | Trending Viral hub


For almost 30 years, Oren Etzioni was among the most optimistic artificial intelligence researchers.

But in 2019, Dr. Etzioni, a professor at the University of Washington and founding chief executive of the Allen Institute for AI, became one of the first researchers to warn that a new generation of AI would accelerate the spread of misinformation online. And by the middle of last year, he said, he had grown concerned that AI-generated deepfakes could sway a major election. In January he founded a nonprofit, TrueMedia.org, hoping to combat that threat.

On Tuesday, the organization launched free tools to identify digital misinformation, with a plan to put them in the hands of journalists, fact-checkers and anyone else trying to figure out what’s real online.

The tools, available on the TrueMedia.org website to anyone approved by the nonprofit, are designed to detect fake and manipulated images, audio, and video. They review links to media files and quickly determine whether they can be trusted.

Dr. Etzioni sees these tools as an improvement over the patchwork of defenses currently used to detect misleading or deceptive AI content. But in a year when billions of people around the world are set to vote in elections, he continues to paint a bleak picture of what lies ahead.

“I’m terrified,” he said. “There’s a good chance we’ll see a tsunami of misinformation.”

In just the first few months of the year, AI technologies helped create fake voice calls from President Biden, fake images and audio ads of Taylor Swift, and an entire fake interview that appeared to show a Ukrainian official taking credit for a terrorist attack in Moscow. Detecting this type of misinformation is already difficult, and the technology industry continues to release increasingly powerful AI systems that will generate ever more convincing deepfakes and make detection even harder.

Many artificial intelligence researchers warn that the threat is gaining strength. Last month, more than a thousand people, including Dr. Etzioni and several other prominent AI researchers, signed an open letter calling for laws to hold developers and distributors of AI visual and audio services accountable if their technology is easily used to create harmful deepfakes.

At an event hosted by Columbia University on Thursday, Hillary Clinton, the former secretary of state, interviewed Eric Schmidt, the former Google chief executive, who warned that videos, even fake ones, could "drive voting behavior, human behavior, moods, everything."

"I don't think we're prepared," Mr. Schmidt said. "This problem is going to get much worse over the next few years. Maybe not for November, but certainly in the next cycle."

The technology industry is well aware of the threat. Even as companies rush to advance generative AI systems, they are scrambling to limit the damage that these technologies can do. Anthropic, Google, Meta and OpenAI have all announced plans to limit or label election-related uses of their AI services. In February, 20 tech companies, including Amazon, Microsoft, TikTok and X, signed a voluntary pledge to prevent misleading AI content from disrupting voting.

That could be a challenge. Companies often release their technologies as "open source" software, meaning anyone is free to use and modify them without restriction. Experts say the technology used to create deepfakes, the product of enormous investment by many of the world's largest companies, will always outpace the technology designed to detect disinformation.

Last week, during an interview with The New York Times, Dr. Etzioni showed how easy it is to create a deepfake. Using a service from a sister nonprofit, CivAI, which draws on AI tools readily available on the internet to demonstrate the dangers of these technologies, he instantly created photos of himself in prison, a place he has never been.

“When you see yourself being fooled, it’s a lot scarier,” he said.

He later generated a deepfake of himself in a hospital bed, the kind of image he believes could sway an election if applied to Mr. Biden or former President Donald J. Trump just before voting.

A deepfake image created by Dr. Etzioni of himself in a hospital bed. Credit: via Oren Etzioni

TrueMedia tools are designed to detect fakes like these. More than a dozen startups offer similar technology.

But Dr. Etzioni, while highlighting the effectiveness of his group's tool, said no detector was perfect because each was driven by probabilities. Deepfake detection services have been fooled into declaring images of kissing robots and giant Neanderthals to be real photographs, raising concerns that such tools could further erode society's trust in facts and evidence.

When Dr. Etzioni fed TrueMedia's tools a known deepfake of Mr. Trump sitting on a porch with a group of young Black men, they labeled it "highly suspicious," their highest level of confidence. When he uploaded another well-known deepfake of Mr. Trump with blood on his fingers, the tools said they were "uncertain" whether it was real or fake.

TrueMedia's tools labeled an AI deepfake of former President Donald J. Trump sitting on a porch with a group of young Black men "highly suspicious," but labeled a deepfake of Mr. Trump with blood on his fingers "uncertain."

“Even using the best tools, you can’t be sure,” he said.

The Federal Communications Commission recently banned AI-generated robocalls. Some companies, including OpenAI and Meta, are now watermarking the images their AI generates. And researchers are exploring additional ways to separate the real from the fake.

The University of Maryland is developing a cryptographic system based on QR codes to authenticate unaltered live recordings. A study published last month asked dozens of adults to breathe, swallow and think while speaking so that their speech-pause patterns could be compared with the rhythms of cloned audio.

But like many other experts, Dr. Etzioni warns that image watermarks are easily removed. And though he has dedicated his career to fighting deepfakes, he acknowledges that detection tools will struggle to keep pace with new generative AI technologies.

Since he created TrueMedia.org, OpenAI has unveiled two new technologies that promise to make his job even harder. One can recreate a person's voice from a 15-second recording. Another can generate full-motion videos that look like something out of a Hollywood movie. OpenAI is not yet sharing these tools with the public as it works to understand their potential dangers.

(The Times has sued OpenAI and its partner, Microsoft, on claims of copyright infringement involving artificial intelligence systems that generate text.)

Ultimately, Dr. Etzioni said, combating the problem will require broad cooperation among government regulators, the companies creating AI technologies, and the tech giants that control the web browsers and social networks where misinformation spreads. He said, though, that the likelihood of that happening before the fall elections was slim.

“We’re trying to give people the best technical assessment of what’s in front of them,” he said. “They still have to decide if it’s real.”
