This week I've been experimenting with Microsoft Entra Agent ID and how we can give AI agents identity capabilities so they have proper governance, security, and accountability built in. I've written a few blog posts that cover the core concepts of Entra Agent ID and how to use it within our agentic applications.

- Understanding Microsoft Entra Agent ID: https://lnkd.in/gGA6P35g
- Creating Entra Agent ID Blueprints and Identities with PowerShell and .NET: https://lnkd.in/gGjMpC2F
- How to Call Azure Services from an AI Agent Using Entra Agent ID and the .NET Azure SDK: https://lnkd.in/gkj-XqE6

They're pretty meaty, but I hope they help you understand the importance of implementing identity capabilities in your agents. Any questions, please reach out.
Microsoft Entra Agent ID: Governance and Security for AI Agents
More Relevant Posts
-
We were connecting to Azure OpenAI with API keys. It worked. But it was not the right approach for a production-grade system.

As an architect, I know the difference between something that functions and something that's built correctly. API keys fall into the first category: they are static, they don't expire, and if one gets stolen, you're in serious trouble. For internal tooling or a quick POC, that approach is fine. But the moment you're in an enterprise environment with real data flowing through your GPT models, it doesn't hold up.

So we moved to certificate-based authentication for Azure OpenAI. Here's the flow:

1. Generate your certificate pair. The private key stays on your server; the public cert gets uploaded to your Azure App Registration. The private key never moves.
2. Sign a JWT client assertion at runtime. Your app builds a short-lived JWT, signed with the private key using RS256. This proves identity without ever transmitting the key itself.
3. Exchange it for a bearer token. Microsoft Entra validates the signature against the public cert and hands back an access token. Valid for ~1 hour. Automatically.
4. Call Azure OpenAI. Standard Authorization: Bearer header. Clean. Scoped. Auditable.

What this gives you architecturally:
→ No secrets in code, config, or pipelines
→ Tokens expire on their own, so no manual rotation
→ Azure RBAC scoped per application, not blanket access
→ Every token issuance logged in Entra ID
→ Plug in Azure Key Vault for zero hardcoded credentials anywhere

The security posture improvement is significant. But honestly, what I appreciate more is the architectural cleanliness. Each service has its own identity. Access is scoped. The audit trail is there by default. That's how production systems should be built.

If you're architecting AI systems on Azure and still passing API keys around, this is worth the one-time setup investment. Glad to walk through the implementation if anyone's working on this.
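For the curious, the assertion part of this flow can be sketched in Python. This is a minimal illustration, not production code: I'm assuming a `sign_fn` callback that performs the RSA-SHA256 signature with your private key (in practice a crypto library or Key Vault does this), and the claim names follow the Microsoft identity platform's client-credentials flow.

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_client_assertion(tenant_id: str, client_id: str,
                           x5t_thumbprint: str, sign_fn) -> str:
    """Assemble the short-lived JWT that proves the app's identity."""
    # Header: RS256 plus the base64url SHA-1 thumbprint of the uploaded
    # cert, so Entra knows which public cert to validate against.
    header = {"alg": "RS256", "typ": "JWT", "x5t": x5t_thumbprint}
    now = int(time.time())
    claims = {
        # Audience is the tenant's token endpoint.
        "aud": f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        "iss": client_id,
        "sub": client_id,
        "jti": str(uuid.uuid4()),   # unique ID, prevents replay
        "nbf": now,
        "exp": now + 600,           # short-lived: 10 minutes
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    signature = sign_fn(signing_input.encode())  # RSA-SHA256 with private key
    return signing_input + "." + b64url(signature)

def token_request_form(client_id: str, assertion: str) -> dict:
    """POST body for the token exchange; the key itself never leaves the box."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_assertion_type":
            "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,
        "scope": "https://cognitiveservices.azure.com/.default",
    }
```

POSTing that form to the tenant's token endpoint returns the ~1-hour access token you then pass in the Authorization: Bearer header.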
#AzureOpenAI #SolutionsArchitecture #EnterpriseAI #CloudSecurity #MicrosoftEntra #AIEngineering #TechLeadership
-
With the help of this agent, we can converse with Azure Virtual Network components in plain English. The agent identifies issues related to the VNet and can fix them automatically. Building an AI Agent for Virtual Network: a step-by-step guide on how to create an AI agent that interacts with Azure Virtual Network using LangChain, Gemini AI and PowerShell. 👇 #AI #AIAgent #vnet #virtualnetwork #azure https://lnkd.in/gXEzKaE6
-
Microsoft Copilot Agent Maker: The Hidden Security Implications of AI Agent Deployment in Enterprise Clouds + Video Introduction: The convergence of Artificial Intelligence and cloud security has reached a critical inflection point with the emergence of platforms like Microsoft's Copilot Agent Maker and the Agent Launchpad. While these tools promise unprecedented productivity gains by enabling users to create custom AI agents, they also introduce a new attack surface that security professionals must immediately address. This article analyzes Aviv Weissman's recent Microsoft Digital credential achievement, dissects the underlying architecture, and provides a comprehensive technical guide to securing these AI agent deployments against exploitation, misconfiguration, and data leakage....
-
Security researchers found a way to break out of the security sandbox protecting AWS Bedrock's AI coding tool — by sending hidden commands through DNS lookups, the system computers use to translate website names into addresses. The sandbox was supposed to block all outbound communication, but it still allowed DNS queries, creating a covert channel that could be used to steal data or run commands remotely. AWS has since fixed the flaw, but the broader issue remains: as companies rush to deploy AI agents that write and execute code automatically, traditional security boundaries are proving insufficient. If you use AI coding assistants or cloud development tools, avoid pasting sensitive data like API keys or passwords into code prompts until providers clarify sandbox protections. 💥 #CyberNewsLive https://lnkd.in/es9f_rmz
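To see why DNS-only egress is still an exfiltration channel, here's a minimal sketch of the encoding idea (my own illustration with a hypothetical `attacker.example` domain, not the researchers' code): a secret is chopped into DNS-safe labels of lookups to a domain the attacker controls, and the attacker's nameserver logs reconstruct it.

```python
import base64

def encode_for_dns(secret: bytes, exfil_domain: str, max_label: int = 63) -> list:
    """Split a secret into DNS-safe query names (illustration only)."""
    # Base32 survives DNS: case-insensitive, letters and digits only.
    payload = base64.b32encode(secret).decode().rstrip("=").lower()
    # DNS labels are capped at 63 bytes, so chunk the payload.
    chunks = [payload[i:i + max_label] for i in range(0, len(payload), max_label)]
    # Prefix each chunk with its index so the receiver can reorder.
    return [f"{i}.{chunk}.{exfil_domain}" for i, chunk in enumerate(chunks)]
```

A resolver that faithfully forwards these queries leaks the secret even when every other outbound port is blocked, which is why hardened sandboxes also have to filter or proxy DNS rather than just blocking TCP egress.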
-
This is an introductory course on Microsoft Azure AI that not only teaches Azure but also serves as a fast-paced refresher on the major concepts of ML and the prerequisites for DL, then shows how easily all of it can be done with Azure while maintaining stability, security and admin access. I would definitely recommend it to anyone looking to upskill or revise the theory of ML and AI frameworks.
-
Zero Trust for AI Agents: Why Your Production LLMs Need Identity and Access Control Yesterday + Video Introduction: The conversation around AI agents has shifted from experimental demos to the harsh realities of production deployment. As highlighted in a recent LinkedIn discussion by David Matousek and Tarak ☁️, the core challenge is no longer model capability but architectural security. When an agent moves from reading data to deploying code, modifying cloud configurations, or triggering CI/CD pipelines, it ceases to be a simple script and becomes a high-risk workload identity....
-
https://lnkd.in/dZKrwUmW AI coding agents need API keys to function. They call LLM providers, interact with cloud services, and authenticate against private APIs. The conventional approach is to pass these credentials as environment variables -- OPENAI_API_KEY, ANTHROPIC_API_KEY, and so on. The agent reads them, attaches them to HTTP requests, and everything works. The problem is that the agent now possesses your credentials. A prompt injection attack can trick the agent into printing its environment variables, posting them to an attacker-controlled endpoint, or embedding them in generated code. The credential is sitting in process memory and in /proc/PID/environ on Linux, readable by any same-user process. Once leaked, the blast radius is the full scope of that API key. #ai #security #apikey
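A minimal demonstration of that possession problem (hypothetical key value, Python for illustration): anything handed to the agent process via its environment is inherited wholesale by every child process it spawns, so a single injected "run this command" instruction is enough to surface it.

```python
import os
import subprocess
import sys

# A hypothetical credential injected the conventional way: as an env var.
agent_env = dict(os.environ, OPENAI_API_KEY="sk-demo-0000")

# The "agent" is tricked into running a command; the child inherits the
# full environment, so the secret is one print statement away.
leaked = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OPENAI_API_KEY'])"],
    env=agent_env, capture_output=True, text=True,
).stdout.strip()

print(leaked)  # the key, now sitting in terminal output and logs
```

The same inheritance is what makes `/proc/PID/environ` readable by any same-user process: the kernel exposes exactly what the process was started with.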
-
Tired of hype and want something real in your feed?

Context: we built an AI platform that started as a 3-day prototype, became an MVP in a week, and quickly moved into production usage. Hardening was being added progressively. Backups were in place. But we kept competing with new features, and the resource group lock and Terraform were next on the list. That's the real tax of deprioritised tasks for non-critical envs. The incident happened before it landed.

Today, a production (not critical) resource group in Azure was accidentally triggered for deletion. Friday the 13th. Coincidence? Inside it: GraphDB Neo4j, ACR, Container Apps with our MCPs and AI Agents, networking, monitoring... Everything started disappearing in a cascade.

We opened a ticket with Microsoft. But we couldn't wait. What happened in parallel?

1. #ClaudeCode analysed the subscription and found the backup vault still responding, even with the RG in "Deleting" state
2. New resource group created, Neo4j VM restored from backup
3. Claude Code generated JSON exports of the old infrastructure, read them all, reconstructed the topology, and produced a prioritised recovery plan with az CLI commands
4. 31 resources rebuilt: VM, disks, backup vaults, ACR, Container App Environments, VNets, NSGs, Log Analytics, Data Collection Rules, public IPs, DNS, Service Principal roles, and the CI/CD pipeline
5. Infra team redirected DNS. After propagation... everything UP

Client impact: <1h downtime. They didn't notice. Big win for us... and thanks, Anthropic.

Btw, when Microsoft jumped in (~15 min, impressive), we were already at 80% recovery. When their full team joined the call, 2 of 3 container apps were already live. Azure Backup saved us, but only because we restored before the vault disappeared. I have to be honest: it was only because Claude did it. Now we've asked the admin to add CanNotDelete locks.
The bigger point is less about Azure or Microsoft and more about real AI: #AI dramatically compresses incident response time. Not because it replaces expertise (I'm not an Azure specialist who knows every command by heart), but because it shortens the gap between "something just broke" and "here's how to fix it" or "here are your options". A year ago, this would have required multiple specialists across infrastructure, networking, ACR, backup, and pipelines (which is how the MS team began joining the incident).

If it's still not clear: the knowledge is now accessible in real time. What matters most is judgment, speed, and ownership of the outcome. And yes, good judgment is built on a solid foundation. But the era of "I only know tool X", "the programming language Y", or "my role is ABC" is over. And this is not hype, it's real. Companies that invest in problem solvers, the ones who own the outcome, adapt, and learn faster, will win.
-
⚡ A quick security reminder before experimenting with AI coding agents.

I came across an interesting security issue reported recently in the NanoClaw repository. It describes a scenario where an AI coding assistant executes CLI commands that read values from a `.env` file and print the credentials directly to the terminal output. Example:

grep '^AWS_ACCESS_KEY_ID=' .env | cut -d= -f2

This means the key value could end up visible in:
• terminal logs
• CI pipelines
• screen-sharing sessions
• remote terminal recordings

In the reported case, a live AWS access key was exposed and had to be rotated immediately.

This highlights an important lesson as AI development tools evolve: AI agents interacting with terminals or codebases must be designed to **never expose sensitive values such as API tokens, secret keys, or credentials**.

Developers experimenting with these tools should also take precautions:
• avoid storing real credentials in `.env` files used during testing
• use temporary or sandbox keys
• rotate credentials frequently
• review terminal output carefully

The AI coding ecosystem is moving very fast right now, and projects like NanoClaw are still evolving. Security guardrails will be a critical part of making these tools safe for enterprise environments. Worth keeping in mind before connecting experimental AI agents to real cloud credentials.

Link to the issue: https://lnkd.in/ebTdujpa

#AIsecurity #DevSecOps #GenerativeAI #CloudSecurity #AWS
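As a defensive sketch (my own illustration, not code from the issue), an agent's terminal wrapper can confirm that a variable is set without ever echoing its value. The key-name pattern below is an assumption about which names count as sensitive:

```python
import re

# Hypothetical heuristic: variable names that usually hold secrets.
SECRET_NAME = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD)", re.IGNORECASE)

def redact_env_line(line: str) -> str:
    """Return a .env line safe to show in terminal output or logs."""
    # Leave comments and non-assignment lines untouched.
    if "=" not in line or line.lstrip().startswith("#"):
        return line
    name, value = line.split("=", 1)
    if SECRET_NAME.search(name) and value.strip():
        # Prove the variable is set without revealing its value.
        return f"{name}=<redacted:{len(value.strip())} chars>"
    return line
```

The same idea applies to any tool output an agent streams back to the user: redact at the boundary, so a prompt-injected "cat the .env file" never reaches the transcript with live values.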
-
How does Microsoft Security Copilot respond so quickly and accurately? 🧠 It's more than just AI — it's a system of orchestrators, plugins, and contextual intelligence that drives real-time action across identities, endpoints, and cloud apps. Check out this infographic from Microsoft Security to see how Copilot works.