An estimated 37% of global cloud infrastructure using AI GPUs is vulnerable to this CVE. NVIDIAScape (CVE-2025-23266) is more than a vulnerability; it's a wake-up call for cyber leaders and the boardroom.

We've all been talking about AI risk in terms of model integrity, adversarial inputs, and data misuse. But the disclosure of NVIDIAScape is a reminder that the real fragility often lies underneath the AI stack, in the infrastructure we assume is safe. A three-line exploit can break out of NVIDIA AI containers and compromise entire GPU hosts. In a multi-tenant cloud world, that means cross-tenant breaches, leaked models, and disrupted AI services at scale.

The key takeaway: AI risk is not new risk; it's amplified risk. Speed matters: patching and mitigating is urgent so that the window for attackers stays short. Board conversations must evolve: securing AI is not only about the algorithm, it's about the ecosystem it runs on. Our role is to safeguard not just innovation, but the infrastructure that makes it possible.

#AI #RiskManagement #CloudSecurity #CVE202523266 https://lnkd.in/g4Ybk7eN
NVIDIA CVE-2025-23266: AI GPU Vulnerability Exposes Cloud Infrastructure
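Since the post stresses that patching quickly is the main mitigation, here is a minimal sketch of how an operator might check whether an installed NVIDIA Container Toolkit is at or above the patched release. The patched version is assumed here to be 1.17.8 per NVIDIA's advisory for CVE-2025-23266; verify the exact fixed version against the advisory for your distribution before relying on this.

```python
import re
import shutil
import subprocess

# Assumed first patched NVIDIA Container Toolkit release; confirm
# against NVIDIA's security advisory for CVE-2025-23266.
FIXED_VERSION = (1, 17, 8)

def parse_version(text: str) -> tuple:
    """Extract the first x.y.z version string from command output."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    if not match:
        raise ValueError(f"no version found in: {text!r}")
    return tuple(int(part) for part in match.groups())

def is_patched(version_text: str) -> bool:
    """True if the reported toolkit version is at or above the fixed release."""
    return parse_version(version_text) >= FIXED_VERSION

if __name__ == "__main__":
    # Query the locally installed toolkit, if present; the binary
    # name and flag may differ by packaging.
    if shutil.which("nvidia-container-toolkit"):
        out = subprocess.run(
            ["nvidia-container-toolkit", "--version"],
            capture_output=True, text=True,
        ).stdout
        print("patched" if is_patched(out) else "VULNERABLE - upgrade the toolkit")
```

A version string comparison like this is only a first-pass triage step; fleet-wide, the same check belongs in whatever inventory or compliance tooling already scans your GPU nodes.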
-
AMD signs AI chip supply deal with OpenAI, shares surge over 34% AMD OpenAI Forrest Norrod Leah R. Bennett, CFA #CybersecurityNewsCentral #AI #Cybersecurity #CybersecurityMarket #CybersecurityNews https://lnkd.in/dAXu-kex
-
OpenAI has partnered with AMD in a five-year agreement to equip its Stargate AI project with thousands of MI450 GPUs, strengthening the infrastructure for large-scale AI development. Stay updated with the latest trends shaping the technology industry by following Mexico Tech & Cybersecurity! #AI #OpenAI #AMD #AIHardware #GPUs #TechInnovation #MachineLearning #StargateProject #AIInfrastructure #Nvidia https://lnkd.in/eYPdMVi9
-
Nvidia's DGX Spark, a compact AI supercomputer, is enabling developers to run complex AI models locally, potentially accelerating the development of AI agents and advanced software stacks. This shift could lead to rapid innovation but also increases the attack surface for AI-related vulnerabilities. Organizations must prioritize securing AI development environments and models against emerging threats as local AI capabilities expand. 💥 #ai #cyberattack #cybersecurity https://lnkd.in/grtey4ki
-
The Infrastructure That Powers AI Could Also Break It

If we secure the infrastructure powering AI with the same discipline we apply to cloud, we can stay ahead of risk while keeping innovation moving fast.

When it comes to securing AI, most conversations focus on the model layer: prompt injection, training data leakage, and unsafe outputs. But there's a more immediate risk that often goes overlooked: the infrastructure powering those models.

AI workloads rely on the same foundations as modern cloud-native applications. That means containers, Kubernetes, shared GPU nodes, and orchestration layers that were never designed with AI-specific risks in mind. And because these components are being reused at scale, any vulnerability in the stack can cascade across multiple platforms and users.

As researchers focused on breaking AI infrastructure to make it safer, we've seen firsthand how these risks are already showing up in real-world environments. https://lnkd.in/eJk-J2YH

Stay connected to Sidharth Sharma, CPA, CISA, CISM, CFE, CDPSE for content related to cyber security. #CyberSecurity #JPMC #Technology #InfoSec #DataProtection #DataPrivacy #ThreatIntelligence #CyberThreats #NetworkSecurity #CyberDefense #SecurityAwareness #ITSecurity #SecuritySolutions #CyberResilience #DigitalSecurity #SecurityBestPractices #CyberRisk #SecurityOperations
-
🚨 Breaking: Intel & AMD TEEs just got cracked. The "trusted enclaves" that secured billions in AI & crypto have been shown vulnerable to physical side-channel attacks. Most projects built entirely on TEEs are now exposed.

At zkAGI, we planned for this moment. We leverage a combination of Privacy-Enhancing Technologies (PETs), including zk-proofs, TEEs, encryption, federated learning, and differential privacy. We design AI infrastructure by carefully weighing trust assumptions: sometimes we lean on math (ZK), sometimes on game theory (staking + governance), and sometimes on physics and engineering (TEEs, GPU isolation, buffer clearing).

⸻

⚖️ Our Design Mandate
No matter the use case, we don't compromise on three pillars:
• Performance → agents must be fast enough for real-world adoption.
• Privacy → sensitive data stays protected, whether medical, financial, or enterprise.
• Verifiability → outputs can be proven correct, not just assumed.

⸻

🛠 Pragmatic PETs by Pipeline Stage
Not every technology fits everywhere. We apply the right PETs for the right context:
• Inference (e.g., healthcare pre-auth): TEEs + encryption → fast, private computations.
• Analytics (classification/regression in healthcare, trading predictions): zkML for proof-of-correctness on high-value outputs.
• Federated learning (edge/home/industrial devices): local training + differentially private updates + ZK verification of aggregation.
• Strategy-sensitive workloads (autonomous trading): TEEs with sealed keys, plus selective ZK proofs of performance.

⸻

🔒 TEEs and the Recent Vulnerabilities
We are fully aware of the newly published side-channel and physical attack vectors against Intel SGX and AMD SEV. These disclosures confirmed what we've always believed: no single PET should be relied on in isolation. This is why zkAGI uses Oasis ROFL as a critical layer for key management and execution integrity:
• Ephemeral keys → even if an enclave is compromised, past secrets remain safe.
• On-chain governance controls → key managers must be staked and attested.
• Proactive engineering → ROFL was designed with exactly these threat models in mind, shielding us where others failed.

In practice: enclaves give us speed, while zk-proofs, encryption, and governance keep the system verifiable and resilient even when hardware guarantees degrade.

⸻

📖 Open Source, Open Trust
Another line of defense: our code is open source. Unlike black-box infrastructure providers, our stack can be audited by anyone. That transparency builds trust far beyond marketing promises.

⸻

🌀 The zkAGI Balance
• Pure TEEs are fragile.
• Pure ZK is still too heavy.
• Pure anonymization is too weak.
zkAGI balances performance, pragmatism, and privacy + verifiability across healthcare, trading, and edge AI. We also periodically review new PETs and incorporate them on their merits. ⚡ This way, whatever the threat scenario, zkAGI still stands.
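The "ephemeral keys" property described above, that compromising an enclave today should not expose past secrets, can be sketched in a few lines of Python. This is an illustrative stand-in using standard-library primitives, not zkAGI's or Oasis ROFL's actual implementation; real systems derive session keys inside the enclave via an authenticated key exchange.

```python
import hashlib
import hmac
import secrets

class EphemeralSession:
    """A per-session key, derived fresh and destroyed on close.

    Hypothetical sketch: because no long-term secret can re-derive
    a session key, leaking today's state reveals nothing about
    yesterday's sessions (forward secrecy in miniature).
    """

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # fresh randomness per session

    def seal(self, message: bytes) -> bytes:
        """Authenticate a message under the session key (HMAC-SHA256)."""
        if self._key is None:
            raise RuntimeError("session closed; key destroyed")
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def close(self) -> None:
        """Forget the key: past outputs can no longer be re-derived."""
        self._key = None

# Two sessions never share key material, so compromising one
# session's key says nothing about any other session.
a, b = EphemeralSession(), EphemeralSession()
tag_a = a.seal(b"model update #1")
tag_b = b.seal(b"model update #1")
assert tag_a != tag_b  # same message, independent per-session keys
a.close()              # after close, the key is gone for good
```

The design point this illustrates is the one the post makes: key lifetime, not hardware alone, bounds the blast radius of an enclave compromise.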
-
Imagine someone stealing your proprietary AI model weights, not by hacking your code, but by whispering over the data lines in your GPU rack.

In the AI era, security has a new meaning. It's no longer just about protecting data; it's about protecting the compute itself. GPUs have become the most valuable assets in modern data centers, powering proprietary models, training on sensitive datasets, and holding the intellectual property that defines an organization's edge.

But here's the catch: in many shared environments, GPUs are still treated as just another pooled resource. Without proper isolation, enterprises face real risks, from data leakage to model theft and even side-channel attacks.

If you're exploring this space, here's a great read that highlights the evolving threat landscape for cloud-based GPUs: https://lnkd.in/djmshGUx

At TrndX, we believe security-by-design is central to GPU compute services. An architecture that enforces hardware-level workload separation, keeping models and data protected without compromising performance, is paramount.

What do you think of the changing security needs in the compute space?

#AIInfrastructure #GPUSecurity #CloudSecurity #ConfidentialComputing #DataCenter #AICompute #ZeroTrust #SecureByDesign #FutureOfAI #TechLeadership
-
Google says quantum computing will go commercial in five years. Nvidia says it's more like 20. Who's right? That depends on how you define “real-world use.” Google's Willow chip is impressive—it solved a problem in minutes that would take a classical supercomputer longer than the age of the universe. But turning that into something useful for medicine, energy, or cybersecurity is still a big leap. The hype is real. So are the technical hurdles. Quantum's future is bright—but it's not here just yet. #QuantumComputing #Cybersecurity
-
Bridging the AI Frontier: Deloitte Middle East’s Silicon-to-Service Initiative A New Era of AI Sovereignty In a pivotal move for the Middle East’s digital landscape, Deloitte has launched its Silicon-to-Service (S2S) offering, marking a significant push towards enhanced artificial intelligence (AI) adoption within the region. This innovation, powered by leading technologies from Dell and NVIDIA, […] https://lnkd.in/d-KzbH93 Cyber Warriors Middle East #CyberWarriorsMiddleEast #CyberWarriorsConclave #CWME #CWC #Cybersecurity #CyberThreats #MitigatingCyberAttacks #MiddleEast #UAE #Dubai
-
The Infrastructure That Powers AI Could Also Break It

When it comes to securing AI, most conversations focus on the model layer: prompt injection, training data leakage, and unsafe outputs. But there's a more immediate risk that often goes overlooked: the infrastructure powering those models.

AI workloads rely on the same foundations as modern cloud-native applications. That means containers, Kubernetes, shared GPU nodes, and orchestration layers that were never designed with AI-specific risks in mind. And because these components are being reused at scale, any vulnerability in the stack can cascade across multiple platforms and users.

Stay connected for the industry's latest content – follow Deepthi Talasila

#DevSecOps #ApplicationSecurity #AgenticAI #CloudSecurity #CyberSecurity #AIinSecurity #SecureDevOps #AppSec #AIandSecurity #CloudComputing #SecurityEngineering #ZeroTrust #MLSecurity #AICompliance #SecurityAutomation #SecureCoding #linkedin #InfoSec #SecurityByDesign #AIThreatDetection #CloudNativeSecurity #ShiftLeftSecurity #SecureAI #AIinDevSecOps #SecurityOps #CyberResilience #DataSecurity #SecurityInnovation #SecurityArchitecture #TrustworthyAI #AIinCloudSecurity #NextGenSecurity https://lnkd.in/gtTYpfDw
-
🌎 Unleashing AI's potential for the public good. In collaboration with NVIDIA, MITRE's AI initiatives are accelerating innovation across various government agencies. Discover how the Federal AI Sandbox, leveraging NVIDIA technologies and platforms, is streamlining and addressing challenges in weather forecasting, cybersecurity, public benefits administration, and more. #NVIDIADGX 🔗 Read the full customer story now: https://bit.ly/4ov8I8i