Let's see how we can help you!
Leave a message and our dedicated advisor will contact you.
Send us a message
0/10000
Leave a message and our dedicated advisor will contact you.
Do you remember the days when programming meant juggling pointers, fighting the compiler over a missing semicolon, and flipping through massive volumes of documentation? Although it feels like centuries ago, it has actually been just a few years. The year 2025 brought a change that is a dream come true for some regarding the democratization of technology, but for security engineers, it marks the beginning of long, sleepless nights. We are talking about "Vibe Coding"—a phenomenon that has taken Silicon Valley by storm.
The term, popularized earlier this year by Andrej Karpathy (former Director of AI at Tesla), perfectly captures the spirit of our times. The programmer evolves from a craftsman and author, caring about every detail, into a curator or conductor. You no longer write for loops, you don't define classes, and you don't worry about indentation in Python. Instead, you throw your intentions at an LLM model (like GPT-4, Claude, or DeepSeek), described in loose, natural language. You say: "make me a financial dashboard in React that looks nice," and the artificial intelligence spits out ready-made, working code.
Karpathy even stated that "the hottest new programming language is English." Sounds beautiful, right? The problem is that in Vibe Coding, the user—as the name suggests—simply "gives in to the vibes." This means accepting the code without deep analysis, and often—let's be honest—without reading a single line. What matters is the end result: the application works, buttons react, the boss is happy. But what lies under the hood—architecture, security, and technical debt—is a completely different, often terrifying story.
But let's not be entirely gloomy—Vibe Coding has a bright side too. It is primarily a promise of IT democratization on an unprecedented scale. The barrier to entry into the world of technology is dropping drastically, allowing hobbyists with no technical background to create fully functional applications they could previously only dream of. The paradigm shifts from the imperative "how to do it?" (tedious coding) to the declarative "what do I want to achieve?". Ideally, the idea matters more than knowing the intricacies of a compiler.
For business, this means turbocharging prototyping. Ideas that once required weeks of a development team's work can now be verified in a single afternoon. This unleashes creativity and allows focusing on business value instead of getting bogged down in environment configuration or dependency management. The sense of "flow" and agency in this model is simply addictive—and that is what drives the revolution, despite all its risks.
Vibe coding wouldn't exist without powerful models and environments that, by 2025, have evolved from simple assistants into fully autonomous agents. Today's tools don't just suggest syntax—they manage the entire code lifecycle. Here is the arsenal of the modern "code whisperer":
Unfortunately, enthusiasm is quickly cooled by hard data. Studies from 2024-2025 paint a grim picture of the quality of code generated by AI. It turns out our digital assistants are not brilliant engineers, but rather lightning-fast interns with a slightly embellished CV.
An analysis of over half a million code snippets in Python and Java leaves no illusions: code written by AI is systemically less secure. As many as 62% of samples from LLM models contained security vulnerabilities. Worse still, 60% of them are critical errors (among humans, this percentage is 45%). AI doesn't make small mistakes—when it messes up, it goes all in.
Artificial intelligence is an incurable optimist. It assumes that your application's user will be nice, enter correct data, and never try to break anything. LLM models notoriously ignore defensive programming practices. The effect? Lack of input validation (CWE-20) is a real plague. AI forgets about sanitization, opening the door wide for SQL Injection or OS Command Injection attacks. Even if you ask for "secure code," the safeguards are often superficial, inconsistent, or outdated.
Another common sin is "hardcoding" credentials. Ask AI to connect to a database, and there is a good chance that the login, password, and API key will land straight in the source code instead of secure environment variables. The model "means well"—it wants the code to work immediately, so it cuts corners, serving credentials to potential attackers on a silver platter.
This is one of the most treacherous discoveries. It would seem that "talking" to AI and asking for corrections should improve the code. The reality is different. When we ask the model for optimization, it often forgets the security context (a phenomenon known as Catastrophic Forgetting). After just five iterations like "make it run faster," the number of critical vulnerabilities can rise by nearly 38%. The model, wanting to please us, quietly cuts out validations it deems "unnecessary," treating them as overhead.
If you thought typosquatting (typos in package names, e.g., reqeusts instead of requests) was a problem, meet its more sophisticated and dangerous cousin: slopsquatting.
The term is a neat combination of "slop" (low-quality content generated by AI) and "squatting." The attack mechanism is brilliant in its simplicity and relies on hallucinations of language models.
AI models don't "know" libraries—they only predict which words should follow one another. Statistically, they know that google-cloud- style names often end with -utils or -helper. So, they often make up names of libraries that should exist but actually don't, e.g., google-cloud-storage-helper.
Cybercriminals are just waiting for this. They monitor these hallucinations, register the names invented by AI in public repositories (npm, PyPI), and place malicious code there (e.g., Reverse Shells or Infostealers). The attack scenario is simple:
huggingface-cli library" (an authentic example).pip install huggingface-cli into the terminal.What is the scale of the problem? Studies show that about 20% of packages suggested by open-source models are hallucinations. For giants like GPT-4, it's "only" 3-5%, but with millions of developers, this creates a massive attack surface.
Technology is one thing. However, Vibe Coding wreaks real havoc in our heads and teams.
Code Review Fatigue Seniors, instead of designing architecture, are drowning in a sea of AI-generated code, becoming the bottleneck of every project. With such a huge volume, vigilance drops. "Rubber Stamping" appears—mindless approval of changes because the code "looks good" at a glance, is formatted, and has comments. This is a straight path to technical debt and systems that no one in the company truly understands.
Skill Atrophy in Juniors This is perhaps the saddest aspect of this revolution. Young programmers who rely on AI from the start of their careers fall into a trap. They become tool operators, not engineers. They don't learn debugging; they don't understand the basics. When AI makes a subtle logical error (and it certainly will), such a programmer is helpless. We are losing the ability to think critically and verify. A generational competence gap is forming.
The Productivity Illusion Despite the subjective feeling of "flow," hard data is ruthless. Reports (e.g., the METR 2025 study) show that programmers using AI often need more time to complete tasks—even up to 19% more! Time saved on writing simple code is wasted with interest on tedious debugging and fixing AI hallucinations.
Another threat looms on the horizon: Autonomous AI Agents (like Devin or GitHub Copilot Workspace). These are no longer just chatbots. They are programs that have agency—they can execute commands, manage files, and even deploy applications to production.
This opens the way for RCE (Remote Code Execution) attacks via Prompt Injection. Imagine an attacker placing a malicious instruction in the text of a GitHub issue. The AI agent tasked with "fixing the bug" reads this text and executes the command hidden within it, thinking it's part of the task. In this way, a hacker can take control of the development environment. Then there are vulnerabilities in the AI frameworks themselves. Researchers discovered the ShadowMQ vulnerability—errors in the default configuration of libraries like Ray or PyTorch, which allow unauthorized takeover of GPU clusters.
AI in programming is not a passing fad—it's here to stay. The point is not to become a Luddite, but to trade naive enthusiasm for a principle of limited trust.
npm install.Vibe Coding is a powerful tool, but in unaware hands, it resembles a pulled grenade pin. Remember: a good vibe is great at a concert, but in source code, it's better to rely on cold analysis and a healthy dose of paranoia.
Aleksander
Sources:

Chief Technology Officer at SecurHub.pl
PhD candidate in neuroscience. Psychologist and IT expert specializing in cybersecurity.
Can artificial intelligence experience trauma? We explore the fascinating and disturbing results of an experiment where algorithms underwent therapy sessions. The result? Synthetic psychopathology.
An analysis of 100 trillion tokens debunks myths about artificial intelligence. It turns out that instead of office productivity, roleplay reigns supreme, and users bond with models like in the Cinderella fairy tale.
Forget simple tunneling. In a world where AI reads packets like a book and network switches correlate attacks themselves, privacy demands a paradigm shift.
Loading comments...