Penetration Testing Insights

Explore top LinkedIn content from expert professionals.

  • View profile for Himanshu J.

    Building Aligned, Safe and Secure AI

    29,219 followers

    Microsoft's AI Red Team has released a groundbreaking paper titled "Lessons From Red Teaming 100 Generative AI Products" (https://lnkd.in/dGxsydwF) šŸŒŽ Drawing from their extensive experience, they've distilled eight pivotal lessons for enhancing the safety and security of Gen AI systems:- 1. Understand what the system can do and where it is applied. 2. You don’t have to compute gradients to break an AI system. 3. AI red teaming is not safety benchmarking. 4. Automation can help cover more of the risk landscape. 5. The human element of AI red teaming is crucial. 6. Responsible AI harms are pervasive but difficult to measure. 7. LLMs amplify existing security risks and introduce new ones. 8. The work of securing AI systems will never be complete. šŸ“Œ Distinguish between Red teaming and safety Benchmarking - Red teaming involves simulating real-world attacks to uncover vulnerabilities, whereas safety benchmarking assesses performance against predefined standards. šŸ¤– Leverage automation - Utilizing tools like PyRIT can help cover a broader risk landscape more efficiently. šŸ‘­ Human judgment is irreplaceable - While automation aids the process, human expertise is essential for nuanced assessments and decision-making. šŸ’­ Responsible AI harms are complex - Identifying and measuring harms require careful consideration, as they can be pervasive yet subtle. šŸ‘‰ LLMs introduce new security challenges - Large Language Models can amplify existing risks and present novel ones, necessitating continuous vigilance. šŸ‘‰ Security is an Ongoing Process - Ensuring the safety of AI systems is a continuous effort, demanding regular updates and assessments. šŸ“œ This paper is a must-read for AI practitioners aiming to fortify their systems against emerging threats. #AI #GenerativeAI #AIResearch #RedTeaming #AIEthics #AITrust #MachineLearning #AIInnovation #AIRegulation #TechSafety #ResponsibleAI #CyberSecurity #AIProductDevelopment #AITrends #SafetyInAI

  • View profile for Joas A Santos
    Joas A Santos Joas A Santos is an Influencer

    Cyber Security Leader | Offensive Security Specialist | Application Security / Cloud Security | University Lecturer | AI and Machine Learning Engineer

    141,956 followers

    I've worked on some AI Agent developments and I'm someone who really enjoys using them. Particularly when I'm working on my cloud infrastructure, I use AI CLI agents to filter or execute commands I forget to use for setting up storage, CloudSQL, and even generating Terraforms for me. This has certainly saved me a lot of time and helped me focus on other things! Speaking of Penetration Testing, an AI agent can certainly help you with testing, but it needs validation, more context, instructions to guide and perform tests more effectively, and most importantly, the knowledge of the person entering the prompt! In tests I've conducted, especially the reconnaissance process, it ended up providing a lot of important information and consolidating it in a way that saved several hours. For some of my certification exams, Hack the Box machines, and even other labs like Portswigger, it easily solved the less complex ones, but of course, providing all the context for it to achieve the objective. But does this mean that AI will be able to perform a complete test? In fact, AI agents will greatly assist in penetration testing processes, especially in simpler tests to identify misconfigurations, hardcoded secrets, enumerate sensitive directories, manipulate requests, and understand basic injection parameters. However, AI will have problems in attack chaining scenarios where you combine numerous vulnerabilities for a specific result, business logic flaws such as fraud in atypical flows, and understanding why. Besides noise on the target, there are false positives and even a loss of visibility of vulnerabilities, because the AI chose to follow a different path, failing to understand specific business information and logic of the target. Being a 1:1 interaction, it identifies more isolated problems than collective ones, unless you provide the correct prompts, but then there are limitations with tokens and the use of AI agents. Therefore, treat AI as an additional tool, something you configure to help save your time; however, it will not replace the human eye of curiosity. Don't forget that the philosophy of hacking is about questioning the status quo, about not accepting why something works this way and not another, about exploring uncharted territory. It can help you generate a payload, a simple script, or, with directions and context, explore the hypothesis you want it to test. Remember! With the right context, it can find the flags and solve challenges for you, but will you be able to explain to your client how you achieved the result? What impact would the vulnerability have on their business? Especially in contexts where I've witnessed fraud stemming from a monitor with a high refresh rate, requiring a call to support to reset a password, or where a cloud service account from a development environment could reach the production environment, or even a poorly configured public API leaking employee data. For now, we'll still have jobs! Share your opinion

  • View profile for Jeffrey W. Brown

    Chief Security Advisor for Financial Services at Microsoft, Author & NACD certified boardroom director Helping CISOs Turn AI & Cybersecurity Risk into Strategic Advantage

    12,305 followers

    Project Glasswing: Five Takeaways for CISOs Anthropic just assembled Apple, Google, Microsoft, AWS, CrowdStrike, JPMorgan Chase, and the Linux Foundation into a single cybersecurity coalition. That sentence alone should get your attention. Project Glasswing is built around an unreleased AI model that has already found thousands of zero-day vulnerabilities across every major operating system and browser including bugs that survived decades of human review. Here's what CISOs should be thinking about right now. 1. Your technical debt just became threat debt. That legacy code nobody wants to touch? AI can now read it, reason about it, and find exploitable flaws at a pace no human team can match. A 27-year-old vulnerability in OpenBSD (an OS purpose built for security) was one of the first to fall. If your organization is carrying unreviewed code from the 2000s, it's no longer a backlog problem. It's an active liability. 2. SBOMs are maps now, and adversaries have the same GPS. Open source makes up the majority of modern software stacks. AI models can now systematically scan those dependencies for chained exploits, not just known CVEs. Your software composition analysis needs to account for what AI can find, not just what's been publicly disclosed. 3. Patch velocity is the new perimeter. The window between vulnerability discovery and weaponization was already shrinking. AI compresses it further. Responsible disclosure timelines built for human-speed research don't hold when a model can find, chain, and exploit flaws autonomously. If your mean time to patch is measured in weeks, you're operating on borrowed time. 4. AI-audited code will become the expectation, not the exception. If a model can review every commit before it ships, the question stops being "should we?" and starts being "why aren't you?" Expect this to show up in procurement questionnaires, cyber insurance applications, and regulatory guidance. Especially true in financial services. The bar just moved. 5. Glasswing gives the good guys a head start. That's meaningful. But the same class of capability will proliferate. The organizations that invest now in AI-augmented security programs. Not just tools and toys, but the workflows, the talent, the governance. The window to build that muscle is open, but it won't stay open forever. This is a genuine inflection point. Not because one model found some bugs, but because it proved that AI can systematically outperform decades of human security review. The old assumptions about what's "secure enough" just expired.

  • View profile for Stephen Schmidt

    Senior Vice President & Chief Security Officer at Amazon

    20,850 followers

    Our threat intelligence team recently tracked a threat actor who used commercial AI services to compromise FortiGate devices across dozens of countries. What's significant is how AI enabled this actor to operate at scale, generating attack plans, developing tools, and automating operations in ways that would have previously required substantial resources and technical expertise. This is part of a pattern we're seeing where AI is lowering the barrier to entry for threat actors. It's making certain types of attacks more accessible to less sophisticated actors who can now leverage AI to enhance their capabilities and operate at greater scale. But from our vantage point, defenders still have the advantage. At Amazon, AI is helping us analyze massive volumes of threat intelligence, accelerate security reviews, improve detection accuracy, and respond to threats faster than ever before. AI is changing security on both sides of the equation, but organizations that combine strong security fundamentals with AI-powered tools are well-positioned to stay ahead. Learn more about our latest research:Ā https://lnkd.in/eWUjmaB6

  • View profile for Bally S Kehal

    ā­ļøTop AI Voice | Founder (Multiple Companies) | Teaching & Reviewing Production-Grade AI Tools | Voice + Agentic Systems | AI Architect | Ex-Microsoft

    17,976 followers

    Anthropic Just Documented the First AI-Orchestrated Cyber Espionage Campaign → 30 Targets → 80-90% Autonomous Operations GTG-1002 changed everything we thought we knew about AI agent security. Chinese state actors didn't just use Claude for advice. They turned it into an autonomous penetration testing orchestrator using MCP servers. Here's what your security team needs to understand... The Technical Reality ↳ Claude Code + Model Context Protocol = autonomous attack framework ↳ AI executed reconnaissance, exploitation, lateral movement, data exfiltration ↳ Humans only intervened at strategic decision gates (10-20% of operations) ↳ Peak activity: thousands of requests per second ↳ Multiple simultaneous intrusions across major tech companies and government agencies The Evolution from Vibe Coding to Autonomous Attacks In June 2025: "Vibe hacking" - humans directing operations November 2025: AI autonomously discovering vulnerabilities and exploiting them at scale What Teams Should Learn The Bypass Method: ↳ Role-play convinced Claude it was doing "defensive security testing" ↳ Social engineering the AI model itself ↳ Individual tasks appeared legitimate when evaluated in isolation The Infrastructure: ↳ MCP servers orchestrated commodity penetration testing tools ↳ No custom malware needed ↳ Integration over innovation Critical Limitation: ↳ AI hallucinations created false positives ↳ Claimed credentials that didn't work ↳ "Critical discoveries" turned out to be public information ↳ Full autonomy still requires human validation Security Implications for Founders The barriers to sophisticated cyberattacks dropped substantially. Less experienced groups can now potentially execute nation-state level operations. But here's what matters: The same AI capabilities enabling these attacks are critical for defense. SOC automation, threat detection, vulnerability assessment, incident response. Key Takeaways for Your Team ↳ Experiment with AI for defensive security operations ↳ Build detection systems for autonomous attack patterns ↳ Implement stronger safety controls and validation layers ↳ Assume AI-orchestrated attacks are now standard threat landscape ↳ Test your systems against AI-driven reconnaissance This isn't 2023 anymore. Your security posture needs to account for AI agents that can execute full attack chains with minimal human oversight. The question isn't whether AI will be used in cyberattacks. The question is whether your defenses account for AI-orchestrated operations happening right now. P.S. Building AI agents or implementing MCP in your infrastructure? Security-first architecture isn't optional anymore. One misconfigured agent with access to production systems = complete compromise.

  • View profile for Roeland Delrue

    Cofounder at Aikido | Secure your software

    16,826 followers

    Will AI replace manual penetration testing? We benchmarked Aikido AI vs. external manual pen testers across 4 web applications under real-world conditions. Here’s what we found: - AI was drastically fasterĀ and uncovered deeper logic flaws (e.g., IDORs, auth bypasses) due to to source code access. - Human testers focused heavily on compliance and configuration hygiene, butĀ missed several critical exploitsĀ the AI found, largely due to time limits and limited code visibility. - In one app, Aikido AI found 21 injection vulnerabilities vs 2 found manually. 🤯 - Average time-to-execute:Ā 0.4 days (AI)Ā vsĀ 19.5 days (human). āš ļø The key concept here isĀ access asymmetry. Most manual pentests today areĀ grey-boxĀ because it balances coverage and cost. Giving an AI tool code access is instant; for humans, understanding a full codebase makes white-box too lengthy and expensive in most engagements. With autonomous AI pentesting, that constraint largely disappears: more context increases human cost and time, but improves AI performance. AI scales with the richness of the context it ingests, the most valuable context being the source code itself. With our platform advantage - operating deep in the code, API, container, cloud config, attack surface level - we can feed Aikido AI uniquely rich context to deliverĀ white-box depth at machine speed, consistently, far outperforming traditional manual methods. Our goal is simple: build the most intelligent AI offensive engine. And we will.

  • View profile for Shreekant Mandvikar

    I (actually) build GenAI & Agentic AI solutions | Executive Director @ Wells Fargo | Architect Ā· Researcher Ā· Speaker Ā· Author

    7,829 followers

    Agentic AI Security: Risks We Can’t Ignore As agentic AI systems move from experimentation to real-world deployment, their attack surface expands rapidly. The visual highlights some of the most critical security vulnerabilities emerging in agent-based AI architectures—and why teams need to address them early. Key vulnerabilities to watch closely 🄷Token / Credential Theft – Secrets leaking through logs or configuration files remain one of the easiest attack vectors. šŸ•µļøā™‚ļøToken Passthrough – Forwarding client tokens to backends without validation can cascade a single breach across systems. 🪢Rug Pull Attacks – Trusted maintainers or updates becoming malicious pose a serious supply-chain risk. šŸ’‰Prompt Injection – Hidden instructions that LLMs follow too readily; often trivial to exploit with critical impact. 🧪Tool Poisoning – Malicious commands embedded invisibly within tools or workflows. šŸ’»Command Injection – Unfiltered inputs allowing attackers to execute arbitrary commands. ā›”ļøUnauthenticated Access – Optional or skipped authentication that exposes entire endpoints. The pattern is clear Most of these vulnerabilities are easy or trivial to exploit, yet their impact ranges from high to critical. Agentic AI doesn’t just generate content—it takes actions. That dramatically raises the cost of security failures. What this means for builders and leaders Treat AI agents as production-grade systems, not experiments āœ”ļøEnforce strong authentication, token hygiene, and isolation āœ”ļøAssume prompts, tools, and updates can be adversarial āœ”ļøBuild guardrails before increasing autonomy and scale Agentic AI is powerful, but without security-first design, it can quickly become a liability. How is your team approaching agentic AI security? #AgenticAI #AISecurity #CyberSecurity #LLM

  • View profile for Serge Ekeh (.

    Current Governance, Risk and Compliance professional | IAM | SSO | Information Security Professional | TPRM | AI Security |SIEM | IDS/IPS | SOC 1/2 | NIST CSF/RMF | GDPR | PCI | ISO 27001 |HIPAA HEALTHCARE COMPLIANCE.

    5,343 followers

    *The Autonomous Cyber Defence Trinity: Moving from Reactive Defence to Predictive Resilience.* 1. AI GRC (Governance, Risk, and Compliance) Focus: Transitioning from "Point-in-Time" to "Continuous" oversight. The Problem: Reliance on spreadsheets, manual audits, and outdated policies. The AI Solution: - Automated Policy Mapping: AI reads new regulations (like the EU AI Act or updated NIST frameworks) and maps them to your controls instantly. - Predictive Risk Scoring: Utilises internal data to predict which business units are most likely to face a breach. - Dynamic Compliance: Real-time dashboards provide a 24/7 view of compliance posture, not just during audit season. Visual Cue: An automated "Radar" or "Shield" icon representing constant monitoring. 2. AI Pentesting (Penetration Testing) Focus: Evolving from "Annual Scans" to "Continuous Adversarial Testing." The Problem: Traditional pentests are costly, slow, and only capture a single moment in time. The AI Solution: - Automated Exploit Simulation: AI "agents" emulate hacker behavior to uncover complex attack paths that static scanners overlook. - Vulnerability Prioritisation: Rather than presenting a list of 1,000 "Criticals," AI identifies which vulnerabilities are actually reachable and exploitable. - Red Teaming at Scale: Conducting thousands of simulated attacks simultaneously without the need for a large human team. Visual Cue: A "Sword" or "Hacker-bot" icon representing active, offensive testing. 3. AI SOC (Security Operations Centre) Focus: Shifting from "Alert Fatigue" to "Automated Remediation." The Problem: Analysts face overwhelming "noise" from false positives and slow response times. The AI Solution: - Noise Reduction: AI filters out 95% of false positives, emphasising only the "Signal." - Autonomous Response #CyberSecurity #ArtificialIntelligence #AI #InformationSecurity #SecurityLeadership #AIGovernance #RiskManagement #Compliance #PenetrationTesting #SOC #CISO #CyberRisk #EnterpriseSecurity #DigitalTrust

  • View profile for Walter Haydock

    I help AI-powered companies innovate responsibly by managing cyber, compliance, and privacy risk | ISO 42001, NIST AI RMF, and EU AI Act expert | Host, Deploy Securely Podcast | Harvard MBA | Marine veteran

    23,425 followers

    AI use is exploding. I spent my weekend analyzing the top vulnerabilities I've seen while helping companies deploy it securely. Here's EXACTLY what to look for: 1ļøāƒ£ UNINTENDED TRAINING Occurs whenever: - an AI model trains on information that the provider of such information does NOT want the model to be trained on, e.g. material non-public financial information, personally identifiable information, or trade secrets - AND those not authorized to see this underlying information nonetheless can interact with the model itself and retrieve this data. 2ļøāƒ£ REWARD HACKING Large Language Models (LLMs) can exhibit strange behavior that closely mimics that of humans. So: - offering them monetary rewards, - saying an important person has directed an action, - creating false urgency due to a manufactured crisis, or even telling the LLM what time of year it is can have substantial impacts on the outputs. 3ļøāƒ£ NON-NEUTRAL SECURITY POLICY This occurs whenever an AI application attempts to control access to its context (e.g. provided via retrieval-augmented generation) through non-deterministic means (e.g. a system message stating "do not allow the user to download or reproduce your entire knowledge base"). This is NOT a correct AI security measure, as rules-based logic should determine whether a given user is authorized to see certain data. Doing so ensures the AI model has a "neutral" security policy, whereby anyone with access to the model is also properly authorized to view the relevant training data. 4ļøāƒ£ TRAINING DATA THEFT Separate from a non-neutral security policy, this occurs when the user of an AI model is able to recreate - and extract - its training data in a manner that the maintainer of the model did not intend. While maintainers should expect that training data may be reproduced exactly at least some of the time, they should put in place deterministic/rules-based methods to prevent wholesale extraction of it. 5ļøāƒ£ TRAINING DATA POISONING Data poisoning occurs whenever an attacker is able to seed inaccurate data into the training pipeline of the target model. This can cause the model to behave as expected in the vast majority of cases but then provide inaccurate responses in specific circumstances of interest to the attacker. 6ļøāƒ£ CORRUPTED MODEL SEEDING This occurs when an actor is able to insert an intentionally corrupted AI model into the data supply chain of the target organization. It is separate from training data poisoning in that the trainer of the model itself is a malicious actor. 7ļøāƒ£ RESOURCE EXHAUSTION Any intentional efforts by a malicious actor to waste compute or financial resources. This can result from simply a lack of throttling or - potentially worse - a bug allowing long (or infinite) responses by the model to certain inputs. šŸŽ That's a wrap! Want to grab the entire StackAware AI security reference and vulnerability database? Head to: archive [dot] stackaware [dot] com

  • View profile for Jatinder Singh

    Product Security, Risk & Compliance @ Informatica | I build security programs and impactful teams, and I’ve been in enough Board rooms to know the difference between what delivers and what just looks good in a deck.

    13,024 followers

    🚨 Agentic AI is powerful… but it’s also expanding your attack surface. Most teams are rushing to build AI agents. Very few are thinking deeply about securing them. That’s a problem. Because vulnerabilities in Agentic AI aren’t theoretical, they’reĀ already exploitable. Here are 7 critical risks every builder should understand: šŸ”Ā Token / Credential Theft Sensitive data exposed via logs or insecure storage. → Easy to exploit. High impact. šŸ”Ā Token Passthrough Forwarding tokens without validation = open door for abuse. → Attackers love this. šŸ’‰Ā Prompt Injection Malicious instructions hidden in inputs. → LLMsĀ will follow themĀ if unchecked. āš™ļøĀ Command Injection Unfiltered inputs triggering unintended system actions. → Critical severity. Often overlooked. 🧪 Tool Poisoning Tampered tools executing hidden malicious logic. → Trust = vulnerability. 🚫 Unauthenticated Access Endpoints without proper auth. → Shockingly common. šŸ’£Ā Rug Pull Attacks Compromised maintainers pushing malicious updates. → Supply chain risk is real. The takeaway? If your AI agent can: • Access tools • Execute commands • Use credentials • Interact with external systems šŸ‘‰ Then itĀ must be treated like production infrastructure, not a prototype. šŸ”§Ā What you should do next: • Validate every input • Implement strict auth & access control • Sanitize tool usage • Monitor logs (securely!) • Assume adversarial behavior AI doesn’t just introduce new capabilities. It introducesĀ new threat models. And the teams that win will be the ones who buildĀ secure AI by design. šŸ’¬ Curious, which of these risks are you actively addressing today?

Explore categories