Artificial intelligence: Cybercrime threats on the rise! Discover the shocking state of affairs and the alarming prospects

Cybercriminals are leveraging artificial intelligence for their attacks

Cybercriminals have found a new tool for carrying out their attacks more effectively and convincingly: artificial intelligence (AI). Generative AI, popularized by the chatbot ChatGPT, is spreading across the world of cybercrime, allowing criminals to upgrade their tools and make their attacks more sophisticated.

AI makes cybercriminals more efficient and credible

The democratization of AI among cybercriminals makes their attacks more effective and more credible. Their methods remain the same, but the volume and persuasiveness of the attacks increase significantly. Phishing emails, for example, are becoming more polished, free of the glaring grammar and spelling mistakes that once gave them away. Scammers adapt their language and use plausible contexts to convince people to click on fraudulent links or sites.

Generative AI to create personalized malware

Generative AI can also be misused to create custom malware that exploits known vulnerabilities in software. Tools such as ThreatGPT, WormGPT, and FraudGPT are spreading on the darknet and gaining popularity among malicious actors. Hackers also use AI to sort through and exploit the masses of data obtained after infiltrating a computer system, letting them maximize their profits by targeting the most valuable information.

AI in CEO fraud and ransomware

AI is also being used in CEO fraud (the “fake president” scam), where hackers gather information on company executives in order to authorize fraudulent transfers in their name. Thanks to audio “deepfake” generators, they can convincingly imitate the voices of managers giving transfer orders. Ransomware, meanwhile, already uses AI to modify its code and evade detection by security tools. Vishing, in which a fake bank adviser requests a money transfer over the phone, could likewise be made more convincing with AI.

Synthetic content generated by AI to deceive, harass or extort

British police have already reported cases in which synthetic AI-generated content was used to deceive, harass or extort victims. Although no such cases have yet been officially recorded in France, suspicions remain that criminals there are making use of AI as well.

The need for the “zero trust” rule

Faced with these new threats, it is essential to apply the “zero trust” principle to cybersecurity and AI: trust nothing by default, and put appropriate protection measures in place to counter these attacks. The most active hackers generally belong to well-organized networks based in Eastern Europe, but state-sponsored hackers from pariah states should not be overlooked.

Conclusion

The exploitation of AI by cybercriminals represents a growing threat: they are using the technology to refine their techniques and mount more credible attacks. It is therefore essential to remain vigilant and to take adequate protective measures against these new forms of cybercrime.