Artificial intelligence (AI) has already permeated numerous areas of our lives, even if this isn’t always visible from a user’s perspective. Both in private and business contexts, the use and importance of AI-driven systems are increasing rapidly, with their potential applications far from fully explored. What is already clear today: AI, in its current form, represents a quantum leap that eclipses everything that came before.
From a cybersecurity standpoint, the picture is highly complex. On one hand, AI is poised to fundamentally revolutionize both preventive and reactive cybersecurity, including specialized fields such as eDiscovery, IT forensics, and cyber incident response. On the other hand, AI also brings entirely new risks – not least because AI systems themselves become attractive targets with novel attack vectors.
In the hands of criminals, AI is already a powerful offensive tool that is reshaping the threat landscape. These changes extend beyond cybersecurity and affect overall corporate security, in some cases even far beyond that.
This article takes a closer look at the dark side of AI, specifically focusing on how cybercriminals are currently exploiting – or will soon be able to exploit – AI-based technologies to commit crimes.
AI as a weapon
AI systems provide criminals with completely new tools for carrying out offenses. With emerging approaches such as Agentic AI and advanced reasoning – let alone techniques still under development – the problem will intensify significantly. The result is not only the simplification of effective cyberattacks but also the creation of entirely new criminal strategies and tactics.
Vendors of popular AI systems are attempting to implement safeguards, but with only moderate success so far. Generative AI is a classic dual-use technology, and it is often difficult, if not impossible, to infer malicious intent from a prompt alone. Moreover, many AI components – from training datasets to software and complete models – are freely available as open source, giving attackers unrestricted access. Any protective mechanisms against misuse can usually be bypassed with ease.
Social engineering
Generative AI (GenAI) can produce highly authentic, well-written texts in numerous languages for virtually any scenario. The context can be precisely defined, including target audience, industry, and desired outcome. Even the fictional author can be detailed down to qualifications, experience, age, and personality.
As a result, fraudulent texts can be tailored to specific professions, industries, or business types with ease and high quality. A manufacturing engineer speaks differently from a fund manager, and a logistics company uses different metaphors than a medical laboratory. Advanced generative AI systems can convincingly simulate these nuances. Matching fake images and video content can also be generated.
Examples include:
- phishing emails,
- fake profiles on job portals, career sites, and social networks,
- fake business correspondence,
- fake company websites and corporate profiles,
- fake product pages, project descriptions, flyers, or presentations.
At the same time, generative AI can be used to quickly and efficiently research specific professions, roles, industries, and even individual companies. Modern chatbots can even coach their users to convincingly assume a particular role.
These capabilities make generative AI an ideal tool for nearly all aspects of social engineering. While such activities were possible without AI, perpetrators previously needed substantial knowledge and experience to produce convincing results. The vast training datasets behind modern large language models now render much of this expertise unnecessary.
Deepfakes – manipulating image, audio, and video content
Another example of criminal use of generative AI is deepfakes. The ability to generate and manipulate images, videos, and audio is already impressive, and this field of AI continues to evolve rapidly.
So far, there have been only a handful of high-profile cases where deepfakes played a critical role in cyberattacks or corporate fraud attempts. A so-called “fake president” attack using voice cloning has already occurred, but such incidents remain rare – for now.
Given the significant amounts of money at stake in goods and financial fraud, combined with increasingly automated attack methods, both the frequency and quality of deepfake-enabled scams are expected to rise sharply. This trend is already visible in the consumer space, where AI is being used at scale in investment scams, extortion, and smear campaigns targeting individuals and organizations.
AI in vulnerability analysis
There are also documented cases where AI has been used successfully to identify vulnerabilities. Current approaches – though not all fully available yet – include:
- detecting vulnerabilities in software through source code or binary code analysis;
- mapping software interfaces and dependencies to identify potentially exploitable components (supply chain attacks);
- scanning source code repositories for sensitive data such as credentials or API keys (a simple baseline of this kind of scan is sketched after this list);
- analyzing binaries for hard-coded credentials;
- identifying security flaws and data leaks on websites;
- examining graphical user interfaces for potential weaknesses.
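To make the repository-scanning item above more concrete, the following sketch shows a deliberately simple, non-AI baseline: a pattern-based search for credential-like strings in a checked-out source tree. The patterns and the scan_repository helper are illustrative assumptions rather than a reference to any particular tool; AI-assisted approaches extend this idea by judging context instead of relying on fixed patterns.

```python
import re
from pathlib import Path

# Illustrative patterns for common credential formats (not exhaustive).
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic key/token assignment": re.compile(r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*\S{16,}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repository(root: str) -> list[tuple[str, int, str]]:
    """Walk a checked-out repository and report lines that match credential patterns."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

if __name__ == "__main__":
    for file, lineno, label in scan_repository("."):
        print(f"{file}:{lineno}: possible {label}")
```

The same logic that helps a defender find leaked secrets before an attacker does is equally useful to an attacker scanning an exposed or compromised repository; what AI adds is scale and the ability to recognize secrets that follow no fixed pattern.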
AI in malware development
AI-driven software development is already a reality, though media reports sometimes exaggerate its current capabilities. In practice, AI is already a powerful tool for creating small applications or specific functions. Development processes that once took hours or days can often be reduced to minutes. Unfortunately, this capability can be leveraged not only for legitimate applications but also for creating malicious code.
A key feature of AI-assisted development is that the human developer no longer needs deep expertise in the underlying technologies – whether programming languages, algorithms, or APIs for specific target systems.
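As a benign illustration of this workflow, the sketch below asks a chat model to generate a small utility function from a plain-language description. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the model name is only an example, and any comparable chat-completion endpoint would serve the same purpose.

```python
# Benign example of AI-assisted code generation: intent is supplied in natural
# language, and working source code comes back without the requester needing
# expertise in the target language or its libraries.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a Python function that parses an Apache access log line and "
    "returns the client IP, the timestamp, and the requested path."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, an assumption for this sketch
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The specific API matters less than the pattern: describe the desired behavior, receive runnable code. It is exactly this removal of the expertise barrier that also lowers the bar for producing malicious functionality.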
Conclusion
AI in the hands of cybercriminals is a powerful and not yet fully understood tool. This article has only highlighted a small selection of currently available or soon-to-emerge criminal use cases. The potential goes much further. Developments such as Agentic AI, advanced reasoning, and highly efficient, locally executable models continue to expand the possibilities – also for malicious purposes.
This technological progress enables and accelerates profound changes in the strategies and tactics of criminal organizations and individuals. To meet these challenges effectively and efficiently, companies must fundamentally rethink their own strategies and tactics.
- Adaptability: Modern security strategies must not only adapt continuously but proactively anticipate change. Relying on statistical extrapolation to prepare for future threats was problematic in the past and is becoming increasingly dangerous today.
- Reevaluation of security tools: Traditional cybersecurity tools must be regularly assessed for both current and future effectiveness. The ease with which criminals can now generate perfectly crafted phishing emails must be considered in awareness campaigns and will fundamentally alter phishing defense strategies.
- Integration into business processes: Detecting fake content and the fraud attempts behind it is becoming harder for both humans and machines. It is therefore critical to design business processes themselves to be resilient to such scams and to embed protective measures directly into those processes.
Given today’s complex threat landscape and rapidly accelerating technological change, companies can no longer rely on long-standing strategies, tools, and concepts. By contrast, those that respond proactively – constantly adapting their security strategies, tactics, and technologies to both current and emerging threats – will gain a significant competitive advantage.

