How to stop AI from stealing the sound of your voice


A new technology called AntiFake prevents your voice from being stolen by making it difficult for artificial intelligence tools to analyze voice recordings.

[Image: Robotic hand holding a glass dome containing a voice sample.]

Advances in generative artificial intelligence have made synthetic speech sound so authentic that a person can no longer tell whether they are speaking with another human or a deepfake. If a third party “clones” a person’s voice without their consent, malicious actors can use it to send any message they want.

This is the other side of the coin of a technology that could be useful for creating digital personal assistants or avatars. The potential for misuse when real voices are cloned with deepfake software is obvious: synthetic voices can easily be abused to deceive others. And just a few seconds of a voice recording are enough to convincingly clone a person’s voice. Anyone who sends voice messages, even occasionally, or speaks on an answering machine has already given the world more than enough material to be cloned.

Computer scientist and engineer Ning Zhang of the McKelvey School of Engineering at Washington University in St. Louis has developed a new method to prevent unauthorized speech synthesis before it takes place: a tool called AntiFake. Zhang presented it at the Association for Computing Machinery Conference on Computer and Communications Security in Copenhagen, Denmark, on November 27.

Conventional methods for detecting deepfakes only take effect once the damage is done. AntiFake, by contrast, prevents voice data from being synthesized into deepfake audio in the first place. The tool is designed to beat digital forgers at their own game: it uses techniques similar to those cybercriminals employ for voice cloning in order to protect recordings from hacking and forgery. The source code of the AntiFake project is freely available.

The anti-deepfake software is designed to make it difficult for cybercriminals to take voice data and extract the features of a recording that matter for speech synthesis. “The tool uses an adversarial AI technique that was originally part of the cybercriminals’ toolbox, but we are now using it to defend against them,” Zhang said at the conference. “We muddle the recorded audio signal a bit, distort or disturb it just enough that it still sounds good to human listeners,” while making it unusable for training a voice clone.
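To make the quoted idea concrete, here is a minimal, hypothetical sketch of that kind of adversarial perturbation; it is not AntiFake’s actual implementation (the project’s own source code is linked above). The sketch nudges a waveform with a small, bounded perturbation so that a speaker-embedding model no longer matches the original voice, while the change stays quiet enough to be barely audible. The `DummySpeakerEncoder`, the bound `eps`, the loss, and all other names and values are illustrative assumptions, not details taken from the AntiFake paper.

```python
# Illustrative sketch only: adversarial perturbation of a waveform so a speaker-embedding
# model "hears" a different voice, while the change stays barely audible.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DummySpeakerEncoder(nn.Module):
    """Hypothetical stand-in for whatever speaker-embedding network a voice cloner might use."""

    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=400, stride=160),  # crude frame-level features
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                         # pool over time
            nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples), values in [-1, 1]
        return F.normalize(self.net(wav.unsqueeze(1)), dim=-1)


def protect(wav: torch.Tensor, encoder: nn.Module,
            eps: float = 0.002, steps: int = 100, lr: float = 1e-3) -> torch.Tensor:
    """Push the recording's speaker embedding away from the original by minimizing
    cosine similarity, keeping the perturbation within an L-infinity bound of `eps`."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)          # only the perturbation is optimized
    with torch.no_grad():
        target = encoder(wav)            # embedding of the clean voice
    delta = torch.zeros_like(wav, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder(wav + delta)
        loss = F.cosine_similarity(emb, target, dim=-1).mean()  # lower = further from original voice
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)      # project back so the change stays barely audible
    return (wav + delta).clamp(-1.0, 1.0).detach()


if __name__ == "__main__":
    clean = torch.randn(1, 16000) * 0.1  # placeholder for one second of 16 kHz audio
    protected = protect(clean, DummySpeakerEncoder())
    print("max perturbation:", (protected - clean).abs().max().item())
```

In practice a defender would run this kind of optimization against one or several real speaker encoders before publishing a recording; the assumption, as described in the article, is that a perturbation which fools those models also degrades the features a voice-cloning system needs.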

Similar approaches already exist for copy protection of works on the Internet. For example, images that still look natural to the human eye can carry invisible alterations to the image file that cause machines to misread them.

Software called Glaze, for example, is designed to render images unusable for the machine learning of large AI models, and certain tricks protect against facial recognition in photographs. “AntiFake makes sure that when we publish voice data, it is difficult for criminals to use that information to synthesize our voices and impersonate us,” Zhang said.

Attack methods are constantly improving and becoming more sophisticated, as evidenced by the current increase in automated cyber attacks on businesses, infrastructure, and governments around the world. To ensure that AntiFake can keep up with the ever-changing environment surrounding deepfakes for as long as possible, Zhang and his PhD student Zhiyuan Yu have developed their tool in such a way that it is trained to prevent a wide range of possible threats.

Zhang’s lab tested the tool with five modern speech synthesizers. According to the researchers, AntiFake achieved a 95 percent protection rate, even against unknown commercial synthesizers for which it was not specifically designed. Zhang and Yu also tested the usability of their tool with 24 human participants from different population groups. More testing and a larger test group would be necessary for a representative comparative study.

Ben Zhao, a computer science professor at the University of Chicago who was not involved in the development of AntiFake, says the software, like all digital security systems, will never provide complete protection and will be challenged by the persistent ingenuity of fraudsters. But, he adds, it can “raise the bar and limit the attack to a smaller group of highly motivated people with significant resources.”

“The more difficult and challenging the attack, the fewer cases we will hear about voice imitation scams or fake audio clips used as an intimidation tactic in schools. And that is a great result of the research,” says Zhao.

For now, AntiFake can protect shorter voice recordings against impersonation, the most common means of cybercriminal forgery. The tool’s creators believe it could be extended to protect longer audio recordings, or even music, from misuse. Currently, users would have to do this themselves, which requires programming skills.

Zhang said at the conference that the goal is to be able to protect voice recordings in full. If that becomes a reality, it could close a major gap in the security-critical use of AI and help in the fight against deepfakes. But the methods and tools that are developed will have to be adapted continually, because cybercriminals will inevitably learn and grow along with them.

This article originally appeared in Spektrum der Wissenschaft and is reproduced with permission.
