𝗝𝘂𝘀𝘁 𝗹𝗮𝘂𝗻𝗰𝗵𝗲𝗱: 𝗣𝘄𝗻𝘇𝘇𝗔𝗜 - 𝗮 𝗻𝗲𝘄 𝗵𝗮𝗰𝗸𝗶𝗻𝗴 𝗹𝗮𝗯 𝗳𝗼𝗿 𝗵𝗮𝗻𝗱𝘀-𝗼𝗻 𝗔𝗜 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝘁𝗲𝘀𝘁𝗶𝗻𝗴. After debuting at our workshop during the SANS AI Summit in Washington, D.C., 𝗶𝘁’𝘀 𝗻𝗼𝘄 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗹𝗲 𝘁𝗼 𝗲𝘃𝗲𝗿𝘆𝗼𝗻𝗲. 𝗣𝘄𝗻𝘇𝘇𝗔𝗜 is a new OWASP project, founded by Maryam Mouzarani, with the OWASP AI Exchange as founding partner. It offers an intentionally vulnerable AI application designed for security testing and education, with hands-on challenges to explore real-world AI risks.
At the SANS Summit, we presented "𝗛𝗮𝗰𝗸𝗶𝗻𝗴 𝗮 𝗦𝗺𝗮𝗿𝘁 𝗣𝗶𝘇𝘇𝗮 𝗣𝗹𝗮𝗰𝗲 𝘄𝗶𝘁𝗵 𝘁𝗵𝗲 𝗢𝗪𝗔𝗦𝗣 𝗔𝗜 𝗘𝘅𝗰𝗵𝗮𝗻𝗴𝗲: 𝗣𝘄𝗻𝘇𝘇𝗔𝗜!" Participants rolled up their sleeves for a hands-on 𝗱𝗲𝗲𝗽 𝗱𝗶𝘃𝗲 𝗶𝗻𝘁𝗼 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆. Using the OWASP AI Exchange framework, attendees navigated guided attack labs to understand modern AI architectures, how they can be compromised, and how to secure them in real-world deployments. The sessions tackled critical risks head-on, including prompt injection, data poisoning, supply chain threats, and vector database vulnerabilities.
A massive thank you to our phenomenal presenters who led these sessions:
• Maryam Mouzarani, whose project PwnzzAI served as the core of our practical labs. As the Founding Partner of PwnzzAI, the OWASP AI Exchange has been proud to support this initiative from the very beginning, and we're thrilled to share that PwnzzAI has recently been approved as an official OWASP project (a well-deserved recognition of Maryam's incredible work!)
• Aruneesh Salhotra, Project Co-Lead at OWASP AI Exchange, who guided attendees through identifying, executing, and mitigating these complex AI threats.
• Spyros Gasteratos, who went above and beyond by flying in at the very end to help facilitate! His energy and expertise were instrumental in helping the team prepare the workshop as a true Capture The Flag (CTF) experience.
We also want to commend Behnaz Karimi, who played a crucial role in validating the lab environments and developing the solutions for the various challenges. Rigorously testing and validating our assumptions is an incredibly important step, and her work ensured the labs delivered a realistic and effective learning experience!
Thank you to all the 𝗮𝘁𝘁𝗲𝗻𝗱𝗲𝗲𝘀 𝘄𝗵𝗼 𝗯𝗿𝗼𝘂𝗴𝗵𝘁 𝘁𝗵𝗲𝗶𝗿 𝘀𝗸𝗶𝗹𝗹𝘀, foundational AI knowledge, and enthusiasm to the table. Building a secure AI future is a collaborative effort, and seeing everyone actively exploiting and mitigating these vulnerabilities gives us immense confidence in the road ahead.
Links to PwnzzAI! will be shared in the comments. #OWASP #AIExchange #SANSAISummit #AISecurity #CyberSecurity #CTF #RedTeaming #MachineLearning #PwnzzAI #AppSec
OWASP AI Exchange
Computer and Network Security
owaspai.org : the go-to resource for AI Security, forming the core of international standards. 300 pages.
About us
The OWASP AI Exchange at owaspai.org is a collaborative working document to advance the development of global AI security standards and regulations. It provides a comprehensive overview of AI threats, vulnerabilities, and controls to foster alignment among different standardization initiatives, including the EU AI Act, ISO/IEC 27090 (AI security), the OWASP LLM Top 10, and OpenCRE - which we want to use to provide the AI Exchange content through the security chatbot OpenCRE-Chat. Our mission is to be the authoritative source for consensus, foster alignment, and drive collaboration among initiatives. By doing so, the Exchange provides a safe, open, and independent place for everyone to find and share insights. The Exchange is a Flagship Project at the OWASP Foundation: the largest community on system security.
- Website
- https://owaspai.org/
- Industry
- Computer and Network Security
- Company size
- 51-200 employees
- Type
- Nonprofit
- Founded
- 2022
Updates
-
Launching the AI Exchange threat model one-pager
AI threat modelling hard? Not anymore. Today the OWASP AI Exchange releases the threat model one-pager to quickly help you identify AI security threats. It summarizes the step-by-step decision tree approach from the AI Exchange threat model section.
How to use:
1. Walk through each threat
2. Based on the ‘When’ column, determine if that threat applies in theory to your AI system
3. If so, use the ‘Impact’ column to help decide whether the risk needs to be treated, depending on the level of harm for the use case
The result: you start big, but you end up with a relatively small list of risks to focus on. For example: you don’t have to protect against model inversion attacks that try to steal your training data if that data isn’t sensitive. It sounds obvious, but I’ve seen many cases of protections in place for threats that effectively don’t matter. Another example: if your agentic system uses an LLM, then it is in theory susceptible to indirect prompt injection: malicious instructions in untrusted data that manipulate agent behaviour. But if your only concern is that sensitive company data leaks, and there is no way for the system to send data to an attacker (e.g., email), then this threat remains theoretical. The risk does not have to be treated.
This all started three years ago, with us at Software Improvement Group donating our AI threat model to open source, which became the AI Exchange, and the rest is history. A month ago we launched the Auto Threat Modeling agent. Today, we share this one-pager with the world. It will have its debut at our workshop here at the SANS Institute AI Security Summit in DC, together with Disesdi Shoshana Cox, and I'll be teaching it at OWASP Global AppSec in Vienna on June 24th. We hope you make good use of it and recommend you also look at the great threat modeling work out there, such as 😷 Adam Shostack’s trainings, Sebastien Deleersnyder’s work, and Ken Huang’s MAESTRO.
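The three-step triage flow can be sketched as a small decision loop. This is a hypothetical illustration only, not code from the one-pager: the threat entries, field names, and the example system profile are all made up for demonstration.

```python
# Sketch of the one-pager triage: for each threat, check the 'When'
# condition (does it apply in theory?), then the 'Impact' (is the harm
# worth treating?). All names and conditions below are illustrative.

def triage(threats, system):
    """Return the shortlist of threats that apply AND warrant treatment."""
    shortlist = []
    for t in threats:
        if not t["when"](system):          # step 2: applicable in theory?
            continue
        if t["impact"](system) == "low":   # step 3: level of harm too low?
            continue
        shortlist.append(t["name"])
    return shortlist

# Two threats from the post, encoded as 'When'/'Impact' checks:
threats = [
    {"name": "model inversion",
     "when": lambda s: s["trains_on_own_data"],
     "impact": lambda s: "high" if s["training_data_sensitive"] else "low"},
    {"name": "indirect prompt injection",
     "when": lambda s: s["uses_llm_on_untrusted_data"],
     "impact": lambda s: "high" if s["can_exfiltrate"] else "low"},
]

# An agentic LLM system with non-sensitive training data and no
# channel to send data to an attacker: both threats apply in theory,
# but neither risk needs to be treated.
system = {"trains_on_own_data": True, "training_data_sensitive": False,
          "uses_llm_on_untrusted_data": True, "can_exfiltrate": False}

print(triage(threats, system))
```

Flipping `can_exfiltrate` to `True` puts indirect prompt injection back on the shortlist, which mirrors the post's point: the same threat moves from theoretical to treatable as soon as an exfiltration channel exists.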
Let me know in the comments if you have experience with other initiatives. #ai #aisecurity #threatmodeling #threatmodelling
-
-
Washington DC, here we come!
At Schiphol Airport, on my way to the SANS Institute AI Summit in Washington DC — looking forward to seeing many friends and sharing some exciting announcements from the OWASP AI Exchange. On the ground, Spyros Gasteratos, Aruneesh Salhotra, Disesdi Shoshana Cox, and yours truly will be running an AI Exchange workshop track on Monday, focused on threat modeling and hands-on hacking. Maryam Mouzarani will join remotely as the creator of the PwnzzAI! lab. On Tuesday, we’ll host the AI Security Policy Forum, together with SANS: an off-site, invitation-only event with 24 seats for policy stakeholders and standardization initiatives. The real value is simple — getting the right people in one room to align on how industry and government can act strategically on both AI opportunity and risk. Paying attention to risks and guardrails may feel slower, but in practice, it’s strategy that keeps you moving at speed — ahead of those flying blind. Expect some big updates — including a brand-new one-page AI threat model guide and a platform to bring all security standards together. More to come… Looking forward to sharing all these things with the world, and to hopefully meeting you in DC, including some of my idols: Bruce Schneier, Gary McGraw, Julie Davila, and Rob T. Lee, to celebrate our friendship and the fruitful ongoing collaboration between the AI Exchange and SANS. #ai #aisecurity
-
-
IEEE publishes the paper by two of the most prominent AI Exchange authors. Proud.
🔥 Big milestone unlocked! At last, after years of effort, I’m finally seeing the results of hard work. I’m super excited to share that our paper (by Yuvaraj Govindarajulu and me) has officially been published on #IEEE Xplore! "Ransomware Attacks on AI Systems: a Cross-Domain Threat and Control Analysis" This research dives into one of the most critical and emerging threats at the intersection of AI and cybersecurity, and trust me, a LOT of hard work, late nights, and persistence went into making this happen. 👉 You can check it out here: https://lnkd.in/dHYJYVDS Honestly, this is just the beginning. More exciting work, deeper insights, and new projects are on the way. Stay tuned for what’s coming next from me and Yuvi. 😉 Aruneesh Salhotra Rob van der Veer OWASP AI Exchange #AI #CyberSecurity #Ransomware #MachineLearning #Research #IEEE #Innovativ
-
-
AI Exchange founder Rob van der Veer highlights a possible misconception about Mythos, and AI vulnerability discovery.
Mythos is not what many people think. AI vulnerability discovery hasn't suddenly made all systems transparent. Its strength mostly lies where it has visibility: when it has access to source code and binaries. In practice, that often means the external components in your system are much more a target than your proprietary software. We are clearly seeing a leap in how fast vulnerabilities can be discovered. But an important detail is often missed: this progress is largely driven by analysing how software works internally — through code review and reverse engineering. The recently published examples demonstrate this. What we do not see strong evidence of is a similar leap in external attack techniques, such as fuzzing. That doesn’t mean AI cannot do this — it can — but the step change appears to come from internal understanding rather than black-box probing. This has an important implication: 👉 If your proprietary code or binaries are not publicly accessible, AI-driven discovery threats mostly come from what IS accessible — such as open source components and third-party binaries — rather than the parts you have built yourself. This suggests that many internal systems and SaaS platforms may be less exposed than people fear in this specific sense — but at the same time, more exposed through the components they rely on. That is where the attack surface is expanding fastest, and where attention is often most needed. That said, this is not a reason to ignore your own code. Strong defence in depth remains essential: 1️⃣ harden your own code and architecture by applying zero-trust thinking to components 2️⃣ strengthen the overall system against AI-enabled attack capabilities Two caveats: - This view is based on current evidence. The contrary could theoretically be true: AI could be making a leap in external testing similar to the leap in internal understanding. If I find contradicting evidence, you'll be the first to know. Opinions are my own - and not the views of my employer. 
That sort of thing. Next week in DC I will be speaking with people directly involved in the Mythos effort. - My goal is not to downplay the importance of AI in security, but to help focus effort where it has the biggest impact. What a time to be alive. #ai #security #appsec
-
-
We hope to see you in DC.
Come meet the OWASP AI Exchange in Washington DC on April 20th and 21st, for exciting workshops and the timely AI Security Policy forum, with SANS as co-host during the SANS Institute AI Summit. Humans don't download skills (except in The Matrix). We learn them by doing. And humans don't coordinate through MCP - we sit down in a room. That's why we're bringing folks to DC to fulfil OWASP® Foundation's mission: To be the global open community that powers secure software through education, tools, and collaboration. Here's what we organize: 👉 Workshop: Hands-On Threat Modelling with the OWASP AI Exchange 🗓️ April 20 — 2:30PM-4:00PM Learn from Disesdi Shoshana Cox and yours truly, through practical exercises how to quickly identify key threats to AI systems, so you know how to secure them. 👉 Workshop: Hacking a Smart Pizza Place with the OWASP AI Exchange—PwnzzAI! 🗓️ April 20 — 4:00PM-5:30PM Gain insight through hands-on AI hacking labs on your laptop from Maryam Mouzarani, Spyros Gasteratos, and AI Exchange co-lead Aruneesh Salhotra. More details soon. 👉 Policy Forum on AI Security Standardization. 🗓️ April 21 — Alongside the SANS AI Summit at a special location overlooking Washington. A closed, invitation-only gathering of selected policy stakeholders and standardization leaders, convened by the OWASP AI Exchange in partnership with SANS Institute. The goal: coordinate standardization efforts to increase collaboration and alignment, brief policy stakeholders on the AI security landscape, and work with them to provide support in AI oversight. The workshops are for conference visitors attending in-person. The awesome conference keynotes and panels can be attended online. The Policy forum is invitation-only. Hope to see you there! Thank you so much Rob T. Lee for our ongoing fruitful collaboration, and to our sponsors Straiker, Casco (YC X25), and AI Security Academy for supporting our events. 
Next event: The OWASP conference in Vienna with an AI Exchange showcase, AI Exchange training, and book signing. Stay tuned! (links will be in comments) #AI #AISecurity #Cybersecurity #AIgovernance OWASP® Foundation Software Improvement Group
-
-
OWASP AI Exchange reposted this
This week I joined the OWASP AI Exchange and I am genuinely honoured to sit at this table. The group brings together builders, red teamers, and policy architects who are doing the actual work of defining what AI security looks like in practice. Not in whitepapers. In production environments, under real regulatory pressure. The signal-to-noise ratio in that room is rare. My focus within the Exchange will sit at three intersections that I believe define the next frontier of enterprise security: 1. Project Agentic AI (Threats, Controls & Testing). Autonomous agents operating across banking infrastructure introduce threat vectors that traditional assurance frameworks were never designed to catch. This goes far beyond just testing. Model exploitation, decision flow hijacking, and agent abuse are not theoretical - they are already appearing in the environments I validate. We must define hard technical controls and operational risk boundaries for autonomous AI. 2. Framework Harmonization. The industry does not need more standalone guidelines; it needs execution. Mapping standards across NIST and OWASP into a unified, actionable baseline is critical. Closing this gap is the foundational work that determines whether AI governance has teeth or just terminology. 3. Regulatory Alignment (e.g. EU AI Act). I have seen what happens when governance frameworks lag behind technology adoption. With AI, the cost of repeating that pattern is unacceptable. We must ensure our controls align directly with the EU AI Act. But even if we map every control and build perfect taxonomies, we can still miss the moment a human organization quietly stops making its own decisions. That decision drift is the ultimate systemic risk. That is where I intend to press. Looking forward to contributing alongside Rob van der Veer, Aruneesh Salhotra, Behnaz Karimi, and the broader Exchange community. #OWASP #AgenticAI #AIGovernance #AIAct #DORA #CyberRisk #TechRisk #RedTeaming #OWASP OWASP AI Exchange
-
-
The new OWASP Impact Report starts with the success of the AI Exchange, explaining the history and its mission. What an honour!
Just out! Learn the gems that OWASP® Foundation is bringing to you in the impressive OWASP Impact Report, celebrating 25 years. The report talks about the growing strategic role of OWASP in the security landscape. Of the many great OWASP projects, the report highlights: OWASP ASVS, OWASP CRS, OWASP CycloneDX SBOM/xBOM Standard, OWASP Dependency-Track, OWASP GenAI Security Project, OWASP SAMM, good old top 10, and the very first mention of OWASP success goes to our darling, the OWASP AI Exchange, with these very kind words: "In 2025, OWASP effectively set the standard for AI security, through the AI Exchange. The Exchange was founded in 2022 by Rob van der Veer, for writing down what he learned on security and privacy of AI systems as an AI engineer, hacker and entrepreneur since the beginning of the nineties. The goal: to help security practitioners with this important new topic, trying to make it comprehensive, but simple. Through the OWASP network, he quickly gathered a growing group of experts to continue building the body of knowledge and co-leaders Aruneesh Salhotra and Behnaz Karimi joined the project. Then Rob got involved in ISO/IEC 27090, the global standard for AI security and got elected as co-editor of prEN18282, the Security standard for the AI Act. These working groups had a hard time finding the right expertise, so Rob forged a unique liaison partnership between international standardization and the OWASP AI Exchange, allowing the material from the Exchange to be donated directly to these new standards - effectively becoming the main source. Next, the Exchange was adopted by SANS Institute, ISACA and EXIN, as a key resource for training. The material is open source, free of copyright and attribution, and aligns with standards - making it the perfect material for training and certification. 
So what started as a personal notebook from experience, turned into an OWASP flagship project with a framework of AI security threats, controls, and best practices that effectively has become the standard, and the go-to bookmark for practitioners to rely on." Wow. #ai #aisecurity #OWASP #appsec #security
-
Sometimes it's the little things that make you happy: cool new AI Exchange stickers are on their way! These stickers will pop up at, for example: 🗓️ April 20: Our two workshops at the SANS AI Summit in DC (AI threat modelling and AI hacking with the Exchange) 🗓️ April 21: The AI policy forum we organize with SANS Institute, also in DC 🗓️ May 19: NIST Supply Chain Assurance Forum at MITRE 🗓️ June 24: The masteraisecurity dot com training in Vienna 🗓️ June 25-26: OWASP Global Appsec conference in Vienna Hope to see you there and personally hand you the sticker. #ai #aisecurity
-
Check out our updated learning guide. #ai #aisecurity
How to master AI security? Check out the just updated learning guide at the OWASP AI Exchange. It is our mission to enable practitioners to make sense of it all. Just go to the Exchange website owaspai dot org. Press ‘Get started’ and you will be guided, depending on your needs: 👉 Ask any question to the AI Exchange Agent 👉 Learn what the Exchange is 👉 How to start as an organization 👉 How to secure an AI system 👉 How to learn AI security To learn AI security: 1️⃣ First study the brief AI security essentials for the big picture. 2️⃣ Do high-level threat modelling according to the risk analysis section - or let AI interview you to find out, or skip this step if you want to learn the complete threat picture. 3️⃣ If you’re involved in Agentic AI, see the section on how agentic threats are covered. 4️⃣ If you run a ready-made model, have a look at the threat model on ready-made models. 5️⃣ See your threats in their context in our AI threat model. 6️⃣ Click on your threats to get more information. 7️⃣ Check the Controls section of that threat, or the periodic table which lists the controls for every threat. 8️⃣ To learn about the bigger picture of controls, study the controls overview. 9️⃣ If privacy is in scope for you: see the privacy section. 🔟 If you’re involved in testing: see the testing section. We have collected a large table of further training resources in our references section. I will put links in the comments, but you’ll find it anyhow. There is another way: come join the threat modelling workshop in Washington DC on April 20th, where I'll teach together with Disesdi Shoshana Cox, or join my full day ‘Master AI security’ training in-person or remote during the OWASP Appsec conference in Vienna, on June 24th. We'll go through the learning steps together, in-depth, and hands-on, featuring yours truly and my Software Improvement Group co-trainers. #ai #aisecurity
-