Staying one step ahead of hackers when it comes to AI

If you’ve been wandering around underground tech forums lately, you may have seen ads for a new program called WormGPT.

The program is an AI-powered tool that lets cybercriminals automate the creation of personalized phishing emails. Although it sounds a bit like ChatGPT, WormGPT is not your friendly neighborhood AI.

ChatGPT was launched in November 2022 and since then, generative AI has taken the world by storm. But few consider how its sudden rise will shape the future of cybersecurity.

In 2024, generative AI is poised to facilitate new kinds of transnational (and cross-linguistic) cybercrime. For instance, much cybercrime is masterminded by underemployed men in countries with underdeveloped technological economies. That English is not the primary language in these countries has thwarted hackers' ability to defraud targets in English-speaking economies; most native English speakers can quickly identify phishing emails by their unidiomatic and ungrammatical language.

But generative AI will change that. Cybercriminals around the world can now use chatbots like WormGPT to write personalized, well-written phishing emails. By learning from the phishing messages already circulating on the web, chatbots can craft data-driven scams that are especially convincing and effective.

In 2024, generative AI will also facilitate biometric hacking. Until now, biometric authentication methods (fingerprints, facial recognition, voice recognition) have been difficult (and expensive) to impersonate; it is not easy to fake a fingerprint, a face, or a voice.

However, AI has made deepfaking much less expensive. Can’t imitate your target’s voice? Tell a chatbot to do it for you.

And what will happen when hackers start attacking chatbots themselves? Generative AI is just that: generative; it creates things that weren't there before. That very quality gives hackers an opening to inject malware into the objects chatbots generate. In 2024, anyone using AI to write code will need to ensure that a hacker has not created or modified the result.
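One way this can play out (a hypothetical illustration, not a case from the article): coding assistants sometimes suggest dependencies that do not actually exist, and an attacker who pre-registers those names can get malware installed. A minimal sketch of one sanity check, assuming Python and PyPI's public JSON API; the package names below are invented for illustration:

```python
# Sketch: verify that dependencies suggested by an AI assistant actually
# exist on PyPI before installing them. The package names here are
# illustrative only.
import urllib.request
import urllib.error

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: no such package (possibly hallucinated)

suggested = ["requests", "reqeusts", "totally-made-up-pkg-123"]
for pkg in suggested:
    status = "found" if package_exists_on_pypi(pkg) else "NOT FOUND, do not install blindly"
    print(f"{pkg}: {status}")
```

Existence alone is not proof of safety, of course; pinning exact versions and hashes (for example, pip's --require-hashes mode) closes more of the gap.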

Other bad actors will also begin to take control of chatbots in 2024. A central characteristic of the new wave of generative AI is its “inexplicability.” Algorithms trained using machine learning can return surprising and unpredictable answers to our questions. Although the algorithm was designed by people, we don’t know how it works.

It seems natural, then, that future chatbots will act as oracles attempting to answer difficult ethical and religious questions. At jesus-ai.com, for example, you can put questions to an artificially intelligent Jesus. Ironically, it's not hard to imagine programs like this being created in bad faith. An app called Krishna, for example, has already advised killing unbelievers and supported India's ruling party. What stops scammers from demanding tithes or promoting criminal acts? Or, as one chatbot has done, telling users to leave their spouses?

All security tools are dual-use: they can be used to attack or defend, so in 2024 we should expect AI to be used both to attack and defend. Hackers can use AI to trick facial recognition systems, but developers can use AI to make their systems more secure. In fact, machine learning has been used for over a decade to secure digital systems. Before we get too worried about new AI attacks, we should remember that there will also be new AI defenses to match.
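To make that defensive side concrete, here is a minimal sketch of the decade-old idea of ML-based filtering, assuming Python with scikit-learn; the tiny training set is toy data invented for illustration:

```python
# Sketch: a toy phishing-email classifier, in the spirit of the ML-based
# defenses the article mentions. The training set is invented for
# illustration; real systems train on large labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice is attached, click here to claim your refund",
    "Meeting moved to 3pm, see the updated agenda",
    "Lunch tomorrow? The usual place works for me",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF features plus logistic regression: a classic, simple baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

test = "Please verify your password immediately to avoid suspension"
print(model.predict_proba([test])[0][1])  # estimated probability of phishing
```

Production systems train on millions of labeled messages and far richer features, but the pipeline shape (extract features, fit a classifier, score new input) is the same.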
