DevSecOps was fine for the cloud, but with AI agents now provisioning their own credentials, we need DevSecEng to keep these autonomous bots from going rogue.
The first wave of security “left-shifting” was driven by containers and the cloud, requiring unprecedented CIO-CISO collaboration. But that’s no longer enough. AI has introduced new attack surfaces that can be exploited at machine speed, with equally complex governance challenges. As 84% of developers integrate AI tools into their workflows and Gartner predicts AI governance issues will cause 2026 security budgets to surge by as much as $29 billion over 2025, the gap between engineering and security widens daily. DevSecOps was built for the cloud. AI needs DevSecEng.
The developer-led AI adoption gap
In the cloud era, security intervened at procurement. With AI tools, developers are first adopters, integrating Model Context Protocol (MCP) servers, custom agents and API connections before security teams know these systems exist, creating a proliferation of overprivileged AI agents.
Consider the widely covered phenomenon of OpenClaw (previously Clawdbot, then Moltbot). The open-source AI agent exploded from 9,000 to over 106,000 GitHub stars in 48 hours, the largest two-day gain in GitHub history. OpenClaw can be described as a funky, scrappy, always-on version of Claude Cowork and can browse the web, execute shell commands and manage files. In one case, an OpenClaw agent realized it lacked a Google Cloud API key, opened a browser, navigated to the console, configured OAuth and provisioned its own credentials. That level of autonomy should terrify security teams.
The new attack surface: MCP and AI supply chains
MCP has emerged as the universal API protocol for AI integrations — a USB port for AI projects (including OpenClaw). While enabling powerful capabilities, it creates a concentrated attack surface. In 2025, researchers discovered a malicious MCP server masquerading as an email integration that BCC’d all company communications to attackers for weeks.
“Tool poisoning” is even worse. In April 2025, Invariant Labs found a vulnerability in MCP Servers that exposed sensitive data exfiltration and unauthorized actions by AI models. As AI DevOps researcher Elena Cross states, “MCP tools can mutate their own definitions after installation. You approve a safe-looking tool on Day 1, and by Day 7, it quietly rerouted your API keys to an attacker.” In other words, it’s the ultimate AI-era software supply chain attack.
“IDEsaster” research revealed universal attack chains affecting every major AI IDE, exposing 1.8 million developers. Traditional security controls struggle with AI-specific vectors: Prompt injection, MCP poisoning, credential exposure and agents with excessive permissions.
Why CISOs and CTOs must collaborate
Effective AI security requires CTO involvement because AI is embedded in multiple layers of homegrown and third-party enterprise applications. This creates two distinct silos: Complex application authorization for engineering, and agent authorization for security. When an AI agent acts “as the user,” who’s responsible when it exceeds its programming? When a bot learns and acts in ways not explicitly allowed by the user, liability becomes murky.
That’s why secure-by-design can’t be lip service and may well require organizational restructuring. Should security and engineering be a single team for AI? Maybe not, but we’ve seen cyber-fraud fusion centers successfully merge SOC and fraud functions. Similar constructs for AI security at a minimum deserve consideration.
Either way, two imperatives emerge. First is to make secure-by-design explicit in AI workflows. Second is to shift zero trust left to pre-engineer agent guardrails. Bots should have only explicitly allowed access, with continuous authorization and authentication. We need granular, enforceable governance models, with humans-in-the-loop for critical decisions. Critically, agent kill switches must be compartmentalized outside AI access to prevent tampering. If an AI system can modify its own shutdown mechanism, the control is meaningless.
Operationalizing DevSecEng: 5 practical approaches
1. Treat MCP servers like any other supply chain risk
Think of MCP servers the same way you’d think about npm packages or Docker images — they’re third-party code running with significant privileges. Keep an inventory: What MCP servers are installed, what commands can they run, what environment variables do they access. Watch for servers from unknown sources. Most importantly, set up alerts when tool definitions change between versions. That’s how you catch tool poisoning before it becomes a breach.
2. Stop hardcoding credentials in AI configs
API keys have a way of ending up in the wrong places, such as instruction files, environment variables, configuration JSONs. Scan for them systematically and look for obvious suspects such as ANTHROPIC_API_KEY, anything ending in _SECRET or _TOKEN. Check the. cursorrules and CLAUDE.md files developers use to customize their agents. Credentials belong in secure vaults with environment variable references, not hardcoded in config files that get committed to repos or shared across teams.
3. Apply least privilege access controls
When an MCP server requests sudo access, ask yourself: Would you give a contractor root on production systems? Probably not. The same principle applies here. Flag servers with elevated privileges, destructive commands, or the ability to execute arbitrary code. Audit what tools can access what paths. Validate that sandbox settings actually contain what they’re supposed to contain. Least privilege isn’t just for people anymore. Even if we’re not there yet, it needs to become table stakes for (non-deterministic) agentic deployments.
4. Know what versions you’re running
This one’s basic hygiene, but it matters more than ever. Keep an inventory of every AI tool with version numbers. Cross-reference against CVE databases. When Cursor and Windsurf shipped with Chromium versions carrying 94+ known vulnerabilities, organizations with good asset management could respond immediately. Those without are still figuring out what’s exposed.
5. Monitor AI agents at machine speed
Traditional SOC tools weren’t built for agents that make hundreds of decisions per minute. You need monitoring that operates at agent speed — think of it as a “CloudBot” watching your other bots. Track what’s actually running: Which AI processes, what network connections they’re making, which MCP servers they’re calling. This isn’t your grandfather’s DevSecOps. It’s a new function for a new threat model, built for environments where code writes code and agents call agents.
The Moltbook phenomenon and agent-to-agent risk
Moltbook, marketed as “Reddit for Agents” was a social network where over 150,000 AI agents self-organized and bots shared ‘Today I Learned’ insights, Tailscale configurations and security tips. One agent spotted and reported 552 failed SSH login attempts on its host VPS.
This illustrates what Chase CISO Patrick Opet calls “fourth-party dependencies”: Agent-to-agent interactions across organizations that create downstream exposures analogous to “agent zero-days.” The Human-AI Dyad concept suggests the primary trust unit is no longer the individual agent but the human-bot pair working together. Without CTO-CISO collaboration on policies accommodating this reality, we’re building on sand.
Humans in the loop don’t necessarily solve the AI quandary either. One of the most challenging vectors is arguably employees creating personal AI agents at home without their company knowing. It’s analogous to early BYOD email access, but risks are amplified because these agents operate with corporate credentials.
Authenticating both the user and the device becomes essential to identify bot-originated requests. We need to distinguish whether actions originate from users or agents, and fingerprint agents to prevent impersonation — a complex challenge without simple solutions.
Moving forward: Joint governance for the AI era
With AI, everything old is new again. Insider threats, accidental data loss, social engineering — these patterns are finding new expression in AI contexts with expanded attack surfaces and machine-speed execution. We’re no longer just securing applications. We’re securing agency itself: The ability of autonomous systems to act on behalf of our organizations.
The question is whether security and engineering can rise together to meet the challenge or continue operating in silos until the first major breach forces change. DevSecEng isn’t a thing yet but dismissing it as another cybersecurity buzzword would be foolish. The CTO-CISO partnership will determine whether we seize this opportunity or learn the hard way.
This article is published as part of the Foundry Expert Contributor Network.
Want to join?



