AI coding assistants have unleashed new superpowers for developers, with the likes of GitHub Copilot, Cursor’s AI code editor, and Claude Code among the new favorites in their toolkit. They can generate functions, create APIs, and even produce test cases in seconds. Work that once took hours now takes moments. And for development teams always looking for ways to do more with less, that is an irresistible proposition.
But there is another side to this revolution that the industry isn’t talking about enough. While AI coding assistants are making developers more nimble and efficient with the code they generate, they’re also expanding the attack surface of the applications they develop.
It’s a tension that many enterprises have yet to fully comprehend, much less address. How much are these gains in speed, efficiency, and productivity actually costing them in an area that matters even more: exposure?
The Misconception: AI “Knows What It’s Doing”
One of the greatest misconceptions the developer community has when it comes to AI coding assistants is the idea that the tool knows what it’s doing.
Unfortunately, it does not.
It generates code based on patterns, but it can only consider so much context, and developers still need to guide how features should ultimately be implemented. It does not know whether the dependency it’s adding violates an organization’s internal policies, or whether the service integration it’s wiring up creates a new threat vector.
AI coding assistants can produce believable code, and newer models are increasingly capable of generating code that follows secure patterns. But they still don’t understand an organization’s architecture, security posture, or internal policies. Even well-written code can expand the attack surface by introducing new dependencies, integrations, or access paths that create additional risk.
Ultimately, it’s the developer’s responsibility to ensure that whatever code goes to production is sound on all fronts and ready for prime time.
When “Move Fast and Break Things” Meets AI
Software development has been optimized for speed for several years now. Developers have become comfortable with shipping quickly, testing in production, and then rolling back when something goes wrong.
While this works reasonably well when developers are intimately familiar with the systems they are changing, it becomes dangerous when a significant portion of that system is generated by an AI assistant. Developers may be shipping AI-generated code without realizing all of its implications.
As a result, developers may be adding dependencies, integrating with other systems, or making architectural changes to their codebase that quietly alter the security profile of their application.
The feature works. The test passes. The deployment succeeds. But the attack surface has been modified, and no one is the wiser.
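To make this concrete, consider one small class of quiet attack-surface change: a new third-party dependency. The sketch below is a minimal, hypothetical illustration (not any particular vendor’s tool), assuming a Python project that declares dependencies in a requirements-style file; it diffs the declared dependencies between the base and head of a change so a reviewer or CI step can surface additions that would otherwise slip through.

```python
# Minimal sketch: surface dependencies quietly added in a change.
# Assumes a requirements.txt-style format; names and sample data
# below are illustrative, not taken from any real project.

def parse_requirements(text: str) -> set[str]:
    """Extract bare package names from requirements-style lines."""
    names = set()
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Take the package name before any version specifier.
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            line = line.split(sep)[0]
        names.add(line.strip().lower())
    return names


def new_dependencies(base: str, head: str) -> set[str]:
    """Dependencies present in the head revision but not in the base."""
    return parse_requirements(head) - parse_requirements(base)


if __name__ == "__main__":
    base = "requests==2.31.0\nflask>=2.0\n"
    head = "requests==2.31.0\nflask>=2.0\npyyaml\nboto3==1.34.0\n"
    print("New dependencies to review:", sorted(new_dependencies(base, head)))
```

A check like this doesn’t judge whether the code is good; it simply makes the attack-surface delta visible so a human can decide whether it’s acceptable.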
The Real Problem: Attack Surface Sprawl
The biggest security risk of AI coding assistants is not bad code. It’s sprawl. AI-assisted code generation significantly increases the amount of code developers create. In fact, Apiiro research found that AI-assisted teams not only shipped 10× more security findings, but AI-assisted developers also produced 3–4× more commits than their non-AI peers.
More features. More APIs. More integrations. More third-party components.
Each of these individually may be innocuous. But collectively, they’re increasing the size of our attack surface in ways organizations may not be prepared to handle. Security tools and governance practices were not built to accommodate this degree of acceleration. Testing pipelines, runtime protection, and security reviews are typically designed to accommodate a traditional development cadence. What happens when development velocity increases tenfold?
The situation begins to resemble a cartoon character with one leg sprinting while the rest of the body can’t keep up: eventually, it runs in circles. AI-assisted coding accelerates at a dramatic pace while security systems move at their normal one, and that mismatch eventually creates security gaps.
Guardrails Must Move into the Development Workflow
This doesn’t mean developers should stop using AI coding assistants.
However, security systems and governance processes must evolve along with the development process. In many organizations, the bottleneck has simply shifted from the developer to the guardrails that review and secure the code being produced.
We are already seeing this play out in the industry. After a series of outages linked to AI-assisted code changes, Amazon reportedly tightened its review process and increased human oversight of code deployments. In practice, this means bringing experienced engineers back into the loop to review AI-generated code and ensure it aligns with architectural context and operational requirements.
This shift highlights an important reality: AI can accelerate development dramatically, but judgment, context, and accountability still need to come from humans.
Rather than relying on security systems at the end of the pipeline, organizations need systems that incorporate security guardrails much earlier and at scale, supporting developers during the coding process itself.
While working with AI systems, developers should not be expected to manually recall every policy, constraint, and organizational requirement. Governance cannot rely on someone’s memory or internal connections to enforce those rules. Instead, those guardrails need to be embedded directly into the development workflow so AI tools can operate within the organization’s security and architectural boundaries.
When AI systems have all the information they need at their disposal, they are more likely to generate code that meets organizational requirements without compromising security.
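As a hypothetical sketch of what an embedded guardrail might look like, the snippet below runs a simple policy check against a proposed change, so neither the developer nor the AI assistant has to remember the rules. The banned packages and patterns are illustrative assumptions about what an organization might forbid, not a real policy set.

```python
# Sketch of a workflow-embedded guardrail: a policy check that runs on
# every proposed change (e.g., as a pre-commit hook or CI step).
# The policy contents below are illustrative assumptions only.

BANNED_PACKAGES = {"pickle5", "telnetlib"}       # e.g., known-risky dependencies
BANNED_PATTERNS = ["eval(", "verify=False"]      # e.g., insecure idioms


def check_change(added_deps: set[str], added_code: str) -> list[str]:
    """Return a list of policy violations for a proposed change."""
    violations = []
    for dep in sorted(added_deps & BANNED_PACKAGES):
        violations.append(f"dependency '{dep}' is disallowed by policy")
    for pattern in BANNED_PATTERNS:
        if pattern in added_code:
            violations.append(f"code contains disallowed pattern '{pattern}'")
    return violations


if __name__ == "__main__":
    problems = check_change(
        added_deps={"requests", "telnetlib"},
        added_code="resp = requests.get(url, verify=False)",
    )
    for p in problems:
        print("BLOCKED:", p)
```

The point is not the specific rules, but where they live: encoded once, enforced automatically on every change, instead of depending on any one person’s memory.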
The Developer Remains the Human in the Loop
Despite all the excitement over AI-generated code, the developer still has to be the final check on all things code.
Experienced engineers understand this and know that AI assistants are tools and not replacements for architectural thinking. They validate what the AI produces. They analyze the dependencies. They understand the system they are building and how it interacts with the world.
AI can create code fast. But it can’t replace judgment. Organizations will succeed with AI-assisted development when they understand these tools as accelerators, not as decision-makers.
The Security Reckoning Is Coming
New advancements in technology are usually accompanied by a wave of security breaches and weaknesses among early adopters. In the software world, we have seen this with open-source libraries, containerization, and cloud infrastructure. Each revolutionized the way developers create and deliver applications, offering significant gains in development speed and agility. But each was also accompanied by breaches and weaknesses that took years to adapt to.
If organizations wait until breaches and weaknesses start appearing, they will find themselves trying to bolt security onto a process that was never designed to support it.
The way to avoid this is to prepare now.
The question is, will the industry make its coding process bulletproof before the consequences of not doing so are impossible to ignore?
About the Author
is the VP of Product at Apiiro. With over two decades of experience in tech, Karen transitioned from leading teams in the 8200 Technology Unit to scaling startups. As a seasoned developer, engineering manager, and product management leader, she has driven successful launches in high-growth companies as well as product privacy initiatives alongside legal teams. At Apiiro, Karen thrives on crafting user-centric solutions in the ever-evolving application and software supply chain security landscape.
Karen can be reached online on LinkedIn and at our company website https://apiiro.com/.
