Google wants to fight deepfakes with a special badge

In just a few years, AI-generated deepfakes of celebrities and politicians have moved from the confines of academic journals to the trending pages of major social media sites. Disinformation experts warn that these tools, combined with strained moderation teams at social media platforms, could add a layer of chaos and confusion to an already contentious 2024 election season.

Now, Google is officially joining a rapidly growing coalition of technology and media companies working to standardize a digital badge that reveals whether or not images were created using generative AI tools. If widely implemented, the “Content Credential” spearheaded by the Coalition for Content Provenance and Authenticity (C2PA) could help bolster consumer confidence in the provenance of photos and videos amid an increase in AI-generated political falsehoods spreading on the internet. Google will join the C2PA as a steering member this month, putting it in the same company as Adobe, Microsoft, Intel and the BBC.

In an email, a Google spokesperson told Popular Science that the company is currently exploring ways to use the standard across its suite of products and will have more to share “in the coming months.” The spokesperson says Google is already exploring adding Content Credentials to the “About this image” feature in Google image search. Google’s support for these credentials could increase their popularity, but their use remains voluntary in the absence of any binding federal deepfake legislation. That lack of consistency gives deepfake creators an advantage.

What are Content Credentials?

The C2PA is a global standards body created in 2021 with the main objective of developing technical standards that certify who created a piece of digital content, and where and how it was made. Adobe, which leads the Content Authenticity Initiative (CAI), and its partners were already concerned about the ways AI-generated media could erode public trust and amplify misinformation online years before massively popular consumer generative AI tools like OpenAI’s DALL-E gained momentum.

That concern catalyzed the creation of Content Credentials, a small badge that companies and creators can choose to attach to an image’s metadata, revealing who created the image and when. It also tells viewers whether the digital content was created using a generative AI model, even naming the particular model used, as well as whether it was subsequently digitally edited or modified.
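To make that concrete, here is a minimal sketch in Python of the kind of record such a credential might hold: who made the asset, when, which generative model (if any) produced it, and a hash binding the record to the exact image bytes. The field names and structure are illustrative only; they do not follow the actual C2PA manifest schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_manifest(image_bytes: bytes, creator: str, generator: str | None) -> dict:
    """Build an illustrative provenance record (not the real C2PA schema)."""
    return {
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "generator": generator,  # AI model name, or None for non-AI content
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "edits": [],             # later modifications get appended here
    }

# Hypothetical usage, with placeholder bytes standing in for a real image file
image_bytes = b"...image data..."
manifest = make_manifest(image_bytes, creator="Jane Doe", generator="ExampleGen-1")
print(json.dumps(manifest, indent=2))
```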

Supporters of Content Credentials argue that the tool creates a record of “tamper-proof metadata” that travels with digital content and can be verified at any point in its lifecycle. In practice, most users will see this “transparency icon” appear as a small badge with the letters “CR” in the corner of an image. Microsoft, Intel, Arm and the BBC are also members of the C2PA steering committee.

“With digital content becoming the de facto media, along with the rise of AI-based creation and editing tools, audiences urgently need transparency behind the content they encounter at home, in schools, at work, wherever they are,” Adobe general counsel and chief trust officer Dana Rao said in a statement sent to Popular Science. “In a world where any digital content can be faked, we need a way to prove what is true.”

Users who find an image tagged with the Content Credentials badge can click it to inspect when the image was created and any edits that have occurred since. Each new edit is linked to the photo or video’s original manifest, which travels with it across the web.

If a journalist were to crop a photo that had previously been Photoshopped, for example, both changes to the image would be noted in the final manifest. The CAI says the tool won’t prevent anyone from taking a screenshot of an image; however, that screenshot would not include the CAI metadata from the original file, which could be a clue to viewers that it is not the original. The symbol is visible on the image, but it is also embedded in the image’s metadata, which, in theory, should prevent a bad actor from using Photoshop or another editing tool to simply strip off the badge.
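That tamper-evident behavior can be illustrated with a simple hash chain: each edit entry commits to both the new image bytes and the previous state of the manifest, so a file whose pixels change outside the chain (a crop saved by a credentials-unaware tool, or a re-encoded screenshot) no longer verifies. This is a simplified sketch of the idea only; the actual C2PA specification relies on cryptographic signatures and certificates, not bare hashes.

```python
import hashlib
import json

def _digest(obj) -> str:
    """Stable hash of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def record_edit(manifest: dict, action: str, new_bytes: bytes) -> dict:
    """Append an edit entry that commits to the prior manifest state."""
    entry = {
        "action": action,  # e.g. "crop" or "retouch"
        "content_hash": hashlib.sha256(new_bytes).hexdigest(),
        "prev": _digest(manifest),  # links this entry to everything recorded so far
    }
    manifest["edits"].append(entry)
    return manifest

def verify(manifest: dict | None, current_bytes: bytes) -> str:
    """Check the bytes against the last recorded state, like a credentials checker."""
    if manifest is None:
        return "missing"
    last = (manifest["edits"][-1]["content_hash"]
            if manifest["edits"] else manifest["content_hash"])
    return "valid" if hashlib.sha256(current_bytes).hexdigest() == last else "incomplete"

original = b"...original pixels..."
cropped = b"...cropped pixels..."

manifest = {"creator": "Jane Doe",
            "content_hash": hashlib.sha256(original).hexdigest(),
            "edits": []}
manifest = record_edit(manifest, "crop", cropped)

print(verify(manifest, cropped))                   # valid: bytes match the chain
print(verify(manifest, b"...screenshot copy..."))  # incomplete: edited outside the chain
print(verify(None, cropped))                       # missing: no credentials at all
```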

If an image does not have a visible badge, users can copy it and upload it to the Content Credentials verification tool to inspect its credentials and see whether they have been altered over time. If the media was edited in a way that did not meet C2PA specifications during any part of its lifecycle, users will see a “missing” or “incomplete” marker. The Content Credentials feature dates back to 2021. Adobe has since made it available to Photoshop users and to creators who produce images using the Adobe Firefly AI image generator. Microsoft plans to attach the badge to images created by its Bing AI image generator. Meta, the owner of Facebook and Instagram, similarly announced that it would add a new feature to let users disclose when they share AI-generated videos or audio on its platforms. Meta said it would begin applying these labels “in the coming months.”

Why it matters that Google is joining the C2PA

Google’s involvement in the C2PA is important, first and foremost, because of the search giant’s enormous digital footprint. The company is already exploring ways to use the badges across its wide range of online products and services, including YouTube. The C2PA believes Google’s involvement could put the credentials in front of more eyes, which could lead to greater awareness of the tool as a viable way to verify digital content, especially as political deepfakes and manipulated media gain traction online. Rao described the Google partnership as a “watershed moment” in raising awareness of Content Credentials.

“Google’s industry expertise, deep research investments, and global reach will help us strengthen our standard to address the most pressing issues around content provenance and reach even more consumers and creators everywhere,” Rao said. “With the support and adoption of companies like Google, we believe Content Credentials can become what we need: a simple, harmonized and universal way to understand content.”

The partnership comes three months after Google announced that it would attach a SynthID digital watermark to audio created using DeepMind’s Lyria AI music model. In that case, DeepMind says the watermark should be inaudible to the human ear and, likewise, should not disrupt the listening experience. Instead, it should serve as a more transparent safeguard to protect musicians from AI-generated replicas of themselves, or to demonstrate whether a questionable clip is genuine or AI-generated.
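DeepMind has not published SynthID’s internals, but the general idea of an inaudible-yet-detectable mark can be sketched with a classic spread-spectrum watermark: mix a very low-amplitude pseudorandom pattern, derived from a secret key, into the audio, then detect it later by correlation. This toy example stands in for, and is far simpler than, whatever SynthID actually does.

```python
import numpy as np

KEY = 42  # secret seed shared by embedder and detector (illustrative only)

def pattern(n: int) -> np.ndarray:
    """Deterministic pseudorandom pattern derived from the secret key."""
    return np.random.default_rng(KEY).standard_normal(n)

def embed(audio: np.ndarray, strength: float = 0.01) -> np.ndarray:
    """Mix in the pattern at very low amplitude relative to the signal."""
    return audio + strength * pattern(len(audio))

def detect(audio: np.ndarray, threshold: float = 0.005) -> bool:
    """High correlation with the secret pattern implies the mark is present."""
    score = float(np.dot(audio, pattern(len(audio)))) / len(audio)
    return score > threshold

# Demo: ten seconds of a 440 Hz tone sampled at 16 kHz
t = np.linspace(0, 10, 160000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
print(detect(tone))         # False: unmarked audio
print(detect(embed(tone)))  # True: watermark detected
```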

The confusion caused by deepfakes could worsen an already contentious 2024 election

Technology and media companies are racing to establish reliable ways to verify the provenance of online digital media before misinformation hits. Experts warn that the 2024 election cycle could be especially chaotic. Major political figures, such as Republican presidential candidate Donald Trump and Florida Governor Ron DeSantis, have already used generative AI tools to attack each other. More recently, in New Hampshire, AI voice-cloning technology was used to make it seem as if President Joe Biden were calling residents and urging them not to vote in the January primary. Since then, the state attorney general’s office has linked the robocalls to two Texas-based companies.

But the threats extend beyond elections. For years, researchers have warned that the rampant proliferation of increasingly convincing AI-generated deepfake images and videos online could lead to a phenomenon called the “liar’s dividend,” in which consumers doubt whether anything they see online is really what it seems. Lawyers, politicians and police have already falsely claimed that legitimate images and videos were generated by AI in attempts to win a case or seal a conviction.

Content Credentials could help, but pieces are still missing

Even with Google’s support, Content Credentials remain entirely voluntary. Neither Adobe nor any regulatory body is forcing technology companies or their users to attach provenance credentials to their content. And even if Google and Microsoft use these markers to flag content created with their own AI generators, there is currently nothing stopping political bad actors from making a deepfake with other open-source AI tools and then trying to spread it via social media.

In the United States, the Biden administration has instructed the Department of Commerce to create new guidelines for AI watermarks and security standards that technology companies building generative AI models would have to meet. Lawmakers in Congress have proposed federal legislation that would require AI companies to include identifiable watermarks on all AI-generated content, although it is unclear whether that would work in practice.

Tech companies are working quickly to implement safeguards against deepfakes, but with less than seven months until a major presidential election, experts agree that confusing or misleading AI material will likely play some role.


