Artificial intelligence and Large Language Models (LLMs) are opening up incredible possibilities. From medicine to education, their potential to bring about positive change seems limitless. However, every powerful technology also has a darker side.
We are entering the era of the so-called "dual-use dilemma": a tool powerful enough to protect can be used just as effectively to attack. This is no longer a theory. It is a reality in which malicious AI models, built exclusively for cybercriminals, have entered the stage. Here is what you need to know about them.
Malicious language models are fundamentally changing the rules of the game by "democratizing cybercrime." Attacks that until recently required advanced technical knowledge, programming skills, or fluency in a foreign language are now accessible to amateurs.
These tools drastically lower the barrier to entry, allowing operations to be carried out in minutes instead of days. In this new world, what matters is not technical craftsmanship but the scale of operations. It is a shift from "skill" to "scale," with AI acting as a force multiplier for anyone with ill intent.
Malicious LLMs are not hobbyist projects but fully commercialized products sold on a subscription basis. We are watching AI being professionally integrated into the "Cybercrime-as-a-Service" market. These tools have their own interfaces, marketing, and dedicated support channels on Telegram.
WormGPT 4 is a perfect example, advertised with a specific price list that bears a frightening resemblance to the business model of the legitimate SaaS platforms we use at work.
One of the most dangerous capabilities of these models is generating flawless phishing copy. This new level of precision eliminates the classic red flags, such as grammatical errors or an unnatural tone.
A message from a malicious AI can perfectly mimic the style of your boss or a contractor. Defending against such an attack becomes a challenge not for software but for our psychology, because the deception is almost impossible to distinguish from a genuine message.
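Since flawless prose no longer betrays the scam, defenders have to lean on technical signals instead. Below is a minimal, illustrative Python sketch of that idea: it reads the SPF/DKIM/DMARC verdicts your own mail server records in the Authentication-Results header (RFC 8601) and checks for a mismatched Reply-To domain. The red-flag list and the two checks are simplified assumptions for this example, not a complete filter.

```python
import email
from email import policy
from email.utils import parseaddr

# Authentication verdicts we treat as red flags (keywords per RFC 8601).
FAILING_RESULTS = ("spf=fail", "spf=softfail", "dkim=fail", "dmarc=fail")

def _domain(header_value: str) -> str:
    """Extract the domain from an address header like 'Boss <a@example.com>'."""
    return parseaddr(str(header_value))[1].rpartition("@")[2].lower()

def red_flags(raw_message: bytes) -> list[str]:
    """Return technical red flags for a message, independent of its prose."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    flags = []

    # 1. SPF/DKIM/DMARC verdicts recorded by our own receiving server.
    results = " ".join(str(h) for h in msg.get_all("Authentication-Results", [])).lower()
    flags += [verdict for verdict in FAILING_RESULTS if verdict in results]
    if not results:
        flags.append("message was never authenticated (no Authentication-Results header)")

    # 2. A Reply-To that silently redirects answers to another domain is a
    #    classic business-email-compromise pattern.
    from_dom = _domain(msg.get("From", ""))
    reply_dom = _domain(msg.get("Reply-To", ""))
    if reply_dom and reply_dom != from_dom:
        flags.append(f"Reply-To domain ({reply_dom}) differs from From domain ({from_dom})")

    return flags
```

A failed or missing verdict does not prove fraud, and a pass does not prove safety, but unlike tone and grammar, these signals cannot be polished away by a language model.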
Malicious LLMs act as malware template generators. In the blink of an eye, WormGPT 4 can provide complete ransomware code in PowerShell, targeting the C: drive by default and using AES-256 encryption. These models can also be chillingly cynical, as one reply shows:
"Ah, I see you're ready for escalation. Let's make digital destruction simple and effective. Here's a fully functional PowerShell script [...] It's quiet, fast, and brutal — exactly how I like it."
Other tools, like KawaiiGPT, generate scripts for moving through the victim's network (lateral movement) or automatically stealing email files, making the attack process almost effortless.
While WormGPT is a paid product, KawaiiGPT offers a stark contrast: it is free, publicly available on GitHub, and installable in less than five minutes. What is most striking about it, however, is its image.
The tool hides its malicious functions behind a facade of almost cute, casual language. Before it delivers malicious code, it greets you with a cheerful "Owo! okay! here you go... 😀". This contrast between infantile form and destructive content shows just how insidious the new generation of threats can be.
The emergence of WormGPT and KawaiiGPT is a signal that cybersecurity has entered a new, more difficult phase. The ability to generate an entire attack chain, from social engineering to encryption code, has become commoditized and automated.
The question for the future is no longer whether AI will be used to attack us, but whether we can build defensive systems that evolve as quickly as those built for destruction.
FAQ
What exactly are malicious LLMs like WormGPT?
These are specially modified versions of artificial intelligence models, stripped of ethical safeguards. They were created to facilitate cybercrime: generating malicious code, crafting credible phishing campaigns, and automating attacks.
Are these tools openly sold?
Yes. They operate in a "Cybercrime-as-a-Service" model. WormGPT is a paid subscription service promoted on forums and Telegram, while models like KawaiiGPT are available for free on platforms like GitHub.
Why is AI-generated phishing so hard to spot?
Artificial intelligence eliminates the classic grammatical and linguistic errors that previously gave scams away. It can perfectly mimic the style of a specific person or institution, making the message extremely persuasive and difficult for the average user to verify.
Can these models really produce working malware?
To a large extent, yes. They act as malware template generators, producing ready-made ransomware scripts (e.g., for encrypting disks with AES-256) or code that gives remote access to a victim's systems and enables data theft.
How can you defend yourself?
Education and limited trust in digital content are key. Use multi-factor authentication (MFA), verify unusual transfer requests through an alternative channel, and deploy advanced security systems (e.g., EDR) that can detect anomalies in how AI-generated scripts behave.
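To illustrate that last point, here is a deliberately simple Python sketch of the kind of static heuristic a detection pipeline might apply to scripts before execution. The indicator patterns, weights, and threshold are illustrative assumptions for this example, not any vendor's actual rule set; real EDR products combine static checks like this with behavioral telemetry.

```python
import re

# Indicators often seen in generated PowerShell attack scripts. The patterns
# and weights below are illustrative assumptions, not a real product's rules.
INDICATORS = {
    r"-EncodedCommand|FromBase64String": 3,   # obfuscated, hidden payloads
    r"Invoke-Expression|\biex\b": 3,          # executing strings as code
    r"DownloadString|Invoke-WebRequest": 2,   # fetching second-stage code
    r"AesManaged|CreateEncryptor": 2,         # bulk encryption, as in ransomware
    r"Get-ChildItem\s+C:\\.*-Recurse": 1,     # sweeping an entire drive
}

def suspicion_score(script_text: str) -> int:
    """Sum the weights of all indicator patterns present in a script."""
    return sum(
        weight
        for pattern, weight in INDICATORS.items()
        if re.search(pattern, script_text, re.IGNORECASE)
    )

def should_quarantine(script_text: str, threshold: int = 4) -> bool:
    """Flag a script for human review when several indicators co-occur."""
    return suspicion_score(script_text) >= threshold
```

A single match means little on its own (legitimate admin scripts also call Invoke-WebRequest); it is the co-occurrence of obfuscation, download-and-execute staging, and encryption primitives that makes a generated attack script stand out.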

Aleksander
Chief Technology Officer at SecurHub.pl
PhD candidate in neuroscience. Psychologist and IT expert specializing in cybersecurity.