Dear AI and Cybersecurity Auditors,

AI changes how risk enters your environment and expands your attack surface. Traditional cybersecurity controls no longer cover model behavior, training data, prompts, agents, and AI-driven decisions. This draft extends NIST CSF 2.0 into AI systems. It treats models, data, prompts, agents, and AI decisions as real cyber assets. It also addresses how attackers already use AI to scale speed, deception, and impact.

Here is why this framework matters for security, risk, and audit leaders:

📌 AI expands the attack surface beyond infrastructure into training data, models, prompts, agents, and third-party AI services
📌 Governance shifts from IT ownership to enterprise accountability with clear risk ownership, oversight, and decision authority
📌 Traditional controls still apply, but AI requires added focus on model integrity, data provenance, output reliability, and human oversight
📌 The framework maps AI risk directly to CSF functions so teams avoid parallel AI security programs
📌 Defensive teams use AI to reduce alert fatigue, improve detection accuracy, and support faster incident response
📌 Adversaries already use AI for phishing, malware generation, social engineering, and automated attack orchestration
📌 Continuous monitoring extends beyond systems into model drift, hallucinations, and unexpected behavior
📌 Risk tolerance must account for AI failure modes, not only system outages or data loss
📌 Audit and assurance teams gain a structured way to test AI controls across Secure, Defend, and Thwart focus areas
📌 The profile supports assessment, control design, and executive reporting without adding unnecessary complexity

AI security fails when teams treat AI as software. NIST IR 8596 reframes AI as a risk domain inside cybersecurity. If your organization builds, buys, or relies on AI, this profile gives you a practical path to govern, secure, and defend it with intent.
#NIST #Cybersecurity #AIGovernance #AIRisk #AIControls #ITAudit #CyberRisk #AISecurity #GRC #CSF #CyberVerge
♻️ Share this with your team or repost so more professionals can see it.
👉 Follow Nathaniel Alagbe for more.
How AI Can Improve Cyber Risk Management
Explore top LinkedIn content from expert professionals.
Summary
AI is transforming cyber risk management by helping organizations spot, defend against, and manage new digital threats much faster and with greater accuracy. This means companies can use AI to detect vulnerabilities, prevent attacks, and address risks that traditional cybersecurity tools might miss, especially as AI expands what is considered a potential risk.
- Expand risk coverage: Treat AI components like models, data, and prompts as critical assets and include them in your security and risk assessments.
- Build proactive defense: Use AI to automatically scan for threats and alert teams about unusual activity so risks are caught before they cause harm.
- Strengthen governance: Set up clear processes to validate AI-generated outputs and add checkpoints to prevent errors and misinformation from impacting key business decisions.
-
We're at an inflection point around cybersecurity right now. Threats have become so complex and fast-moving that human analysts - no matter how skilled - can't keep pace with the volume of signals that need processing. By the time we react, we're already behind. AI can now process vast volumes of external risk data to proactively identify vulnerable users or assets—before a breach occurs, not during an attack or after the damage is done. Rather than relying on reactive alerts, autonomous systems can detect emerging patterns that indicate threat actors may be profiling you. Instead of applying one-size-fits-all security policies, AI delivers dynamic, personalized protection based on each user’s unique risk profile—preventing incidents before they happen and dramatically reducing response times when they do occur. We're moving toward a world where AI agents continuously manage risk in the background, giving security teams a superhuman ability to see around corners. The question is how quickly organizations can adapt to this new reality where proactive beats reactive every time.
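As a rough sketch of what "dynamic, personalized protection based on each user's unique risk profile" could look like in code (the function names, thresholds, and data below are all hypothetical, not from any product):

```python
from statistics import mean, stdev

def risk_score(history: list[float], current: float) -> float:
    """Z-score of `current` against a user's own historical baseline.

    `history` holds past activity measurements for one user (e.g. daily
    data-transfer volume in MB); higher scores mean the current
    observation deviates more from that user's norm.
    """
    if len(history) < 2:
        return 0.0  # not enough data to form a baseline; treat as neutral
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if current == mu else float("inf")
    return abs(current - mu) / sigma

def policy_for(score: float) -> str:
    # Personalized, adaptive policy: controls tighten only for outliers,
    # instead of applying one-size-fits-all rules to everyone.
    if score >= 3.0:
        return "block-and-review"
    if score >= 2.0:
        return "step-up-auth"
    return "allow"

# A user who normally moves ~10 MB/day suddenly moves 500 MB:
baseline = [9.0, 11.0, 10.0, 12.0, 8.0, 10.0]
print(policy_for(risk_score(baseline, 500.0)))  # flagged as an outlier
```

Per-user baselining is the key design choice here: the same 500 MB transfer that is alarming for this user might be routine for a backup service account.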
-
The National Institute of Standards and Technology (NIST) has released a draft of its “Cybersecurity Framework Profile for Artificial Intelligence” (open for public comment until Jan 30, 2026) to help organizations think about how to strategically adopt AI while addressing emerging cybersecurity risks that stem from AI’s rapid advance. Building on the #NIST Cybersecurity Framework 2.0, the Cyber AI Profile translates well-established risk management concepts into AI-specific cybersecurity considerations, offering a practical reference point as organizations integrate AI into critical systems and confront AI-enabled threats. The Cyber AI Profile centers on three focus areas:
• Securing AI systems: identifying cybersecurity challenges when integrating AI into organizational ecosystems and infrastructure.
• Conducting AI-enabled cyber defense: identifying opportunities to use AI to enhance cybersecurity, and understanding challenges when leveraging AI to support defensive operations.
• Thwarting AI-enabled cyberattacks: building resilience to protect against new AI-enabled threats.
The Profile complements existing NIST frameworks (CSF, AI RMF, RMF) by prioritizing AI-specific cybersecurity outcomes rather than creating a standalone regime.
-
Today, NIST released the initial preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), a community profile built on NIST CSF 2.0 to help organizations manage cybersecurity risk in an AI-driven world. A key section of this draft is Section 2.1, which introduces three Focus Areas that explain how AI and cybersecurity intersect in practice:

1. Securing AI System Components (Secure)
AI systems introduce new assets that must be secured: models, training data, prompts, agents, pipelines, and deployment environments. This focus area emphasizes treating AI components as first-class cybersecurity assets, integrating them into governance, risk assessments, protection controls, and monitoring processes. It reinforces that AI risk should not be siloed from enterprise cybersecurity risk management.

2. Conducting AI-Enabled Cyber Defense (Defend)
AI is not just something to protect; it is also a powerful defensive capability. This area focuses on using AI to enhance detection, analytics, automation, and response across security operations. At the same time, it recognizes the risks of over-reliance on automation, model integrity concerns, and the need for human oversight when AI supports security decision-making.

3. Thwarting AI-Enabled Cyber Attacks (Thwart)
Adversaries are increasingly using AI to scale phishing, evade detection, and automate attacks. This focus area addresses how organizations must anticipate and counter AI-enabled threats by building resilience, improving detection of AI-driven attack patterns, and preparing for a rapidly evolving threat landscape where AI is weaponized.

Why This Matters
Together, Secure, Defend, and Thwart provide a practical structure for aligning AI initiatives with existing cybersecurity programs.
By mapping AI-specific considerations to CSF 2.0 outcomes (Govern, Identify, Protect, Detect, Respond, Recover), the Cyber AI Profile helps organizations integrate AI security into familiar risk management practices. This is a preliminary draft, and NIST is seeking public feedback through January 30, 2026. If your organization is building, deploying, or defending with AI, now is the time to review and contribute. 🔗 https://lnkd.in/e-ETZXH8
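As an illustration of how a team might track that mapping internally, here is a minimal coverage-matrix sketch. The function names come from NIST CSF 2.0 and the focus areas from the draft Profile, but the `coverage` structure and the example outcomes are invented for illustration, not quotations from either document:

```python
# Focus areas (draft Cyber AI Profile) crossed with CSF 2.0 functions.
CSF_FUNCTIONS = ["Govern", "Identify", "Protect", "Detect", "Respond", "Recover"]
FOCUS_AREAS = ["Secure", "Defend", "Thwart"]

# Illustrative outcomes a team might record per (focus area, function) cell.
coverage = {
    ("Secure", "Identify"): "Inventory models, training data, prompts, agents",
    ("Secure", "Protect"): "Access control on pipelines and model artifacts",
    ("Defend", "Detect"): "AI-assisted triage with human oversight",
    ("Thwart", "Respond"): "Playbooks for AI-generated phishing and deepfakes",
}

def gaps() -> list[tuple[str, str]]:
    """Cells of the matrix with no documented outcome yet."""
    return [(fa, fn) for fa in FOCUS_AREAS for fn in CSF_FUNCTIONS
            if (fa, fn) not in coverage]

print(f"{len(gaps())} of {len(FOCUS_AREAS) * len(CSF_FUNCTIONS)} cells uncovered")
```

Even a toy matrix like this makes the "no parallel AI security program" point concrete: gaps surface as empty cells in the existing CSF structure rather than as a separate framework.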
-
AI can generate information that sounds accurate but is completely wrong. AI hallucinations can undermine trust in reporting, introduce compliance exposure, and create financial or operational losses. They can also surface sensitive data or misinform decisions that affect capital allocation, investor communication, and audit readiness. AI hallucinations are not a signal to slow down innovation. They are a signal to strengthen your governance and controls. With a thoughtful risk management approach, leaders can understand uncertainty and build a more confident, resilient AI strategy.

Considerations for leaders to reduce AI hallucination risk:

1. Create a validation and review process for AI generated financial outputs. Leaders must ensure that any AI generated forecasts, variance analyses, reconciliations, or narrative summaries have structured validation for source accuracy and logic.
2. Strengthen compliance and regulatory controls within AI workflows. AI hallucinations can create errors that lead to noncompliance and regulatory exposure. Leaders can embed compliance checkpoints into AI driven processes to avoid misstatements, inaccurate filings, or unintended disclosure.
3. Prioritize data governance using high quality, company specific data to reduce the risk of fabricated or inaccurate outputs. This is critical for forecasting, scenario modeling, and automated reporting.
4. Use retrieval augmented generation and automated reasoning for workflows. Pairing these methods anchors AI generated analysis in verified data sources rather than probability-based guesses.
5. Enable filtering and moderation tools to block misleading or irrelevant results. Teams cannot work from flawed or unverified outputs. Filters help prevent misleading content from entering critical workflows or influencing decisions.

AI is gaining traction. Now is the time to formalize your AI risk mitigation approach. Start the discussion within your leadership team today.
Identify where AI is already influencing decision-making, assess your current controls, and define the safeguards you need next. #RiskManagement #AI #Leaders
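To make the validation-checkpoint idea concrete, here is a deliberately crude sketch of one such control: checking that every number an AI-generated summary cites actually appears in the verified source figures. Real grounding (e.g. full retrieval augmented generation) is far richer than this, and all names and data below are hypothetical:

```python
import re

def extract_numbers(text: str) -> set[str]:
    # Pull out numeric tokens like 4.2, 1,250, or 17 (commas normalized away).
    return {m.replace(",", "") for m in re.findall(r"\d[\d,]*(?:\.\d+)?", text)}

def ungrounded_claims(ai_output: str, source_facts: str) -> set[str]:
    """Numbers the AI cited that are absent from the verified source."""
    return extract_numbers(ai_output) - extract_numbers(source_facts)

# Verified source data vs. an AI-drafted narrative summary:
source = "Q3 revenue was 4.2 million; headcount rose to 310."
draft = "Revenue hit 4.2 million in Q3, headcount reached 310, margins grew 12%."

flagged = ungrounded_claims(draft, source)
print(flagged)  # the 12% figure has no source and should go to human review
```

A checkpoint like this does not judge whether a claim is *right*, only whether it is *traceable*; anything it flags is routed to the structured human review described above rather than silently passed through.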
-
🚨 New NIST Draft: Cybersecurity Framework Profile for AI (Cyber AI Profile)

AI is now part of business-critical workflows — but most organizations still manage AI security like it’s “just another app.” NIST just published an Initial Preliminary Draft of NIST IR 8596 (Cyber AI Profile) that maps AI cybersecurity outcomes directly to CSF 2.0 — so teams can stop debating what to do and start aligning on how to do it.

Here’s the part I like most: it frames AI cyber risk through 3 practical focus areas (and they reinforce each other):

1) SECURE — Secure AI system components
Models, agents, prompts, training data, ML infrastructure, and supply chain. Because AI attack surfaces are often more dynamic + harder to verify than traditional software.

2) DEFEND — Use AI to improve cyber defense
AI for triage, prioritization, investigation, response consistency, and decision support — with a clear reminder: maturity + oversight still matter.

3) THWART — Prepare for AI-enabled attacks
From more convincing phishing/deepfakes to AI-generated malware and agentic workflows that speed up recon → exploitation → lateral movement.

The real value: this draft pushes orgs toward treating AI like any critical system: ✅ governance • ✅ inventory • ✅ monitoring • ✅ change control • ✅ resilience

💬 Quick check: If an AI incident hit tomorrow, could you answer these 4 questions in under 10 minutes?
What’s deployed (models/agents/tools) and where?
What data touches it?
What controls/monitoring exist today?
What changed since last release?

If you’re building an AI security program, this is worth bookmarking and mapping to your current CSF posture.

#AISecurity #NIST #CybersecurityFramework #CSF #RiskManagement #SecurityGovernance #GenAI #LLMSecurity #SecurityArchitecture #GRC #DevSecOps
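As a sketch of what an inventory that can answer those four questions might look like, here is a minimal, hypothetical registry (the field names and the example asset are invented, not drawn from the draft):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAsset:
    name: str
    kind: str                 # "model", "agent", or "tool"
    location: str             # deployment environment
    data_touched: list[str]   # data categories flowing through the asset
    controls: list[str]       # controls/monitoring in place today
    last_change: str          # what changed since the last release
    changed_on: date

inventory = [
    AIAsset("support-chatbot", "agent", "prod-eu",
            data_touched=["customer tickets"],
            controls=["prompt logging", "output filter"],
            last_change="upgraded base model",
            changed_on=date(2025, 12, 1)),
]

def incident_snapshot(assets: list[AIAsset]) -> list[dict]:
    """Answer the four incident questions for every deployed AI asset."""
    return [{
        "what/where": f"{a.kind} {a.name} @ {a.location}",
        "data": a.data_touched,
        "controls": a.controls,
        "recent change": f"{a.last_change} ({a.changed_on})",
    } for a in assets]

print(incident_snapshot(inventory)[0]["what/where"])
```

The point of the structure is the 10-minute test: if the snapshot function cannot be populated for an asset, that asset is effectively invisible to incident response.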
-
The landscape of AI security is shifting from theoretical risks to measurable vulnerabilities. While the industry has long relied on general LLM benchmarks like MMLU, these fail to address the specific "adversarial hygiene" required for enterprise-grade production environments. The latest Wiz Cyber Model Arena marks a notable pivot toward standardized, red-team-centric performance metrics for the world’s leading Foundation Models. It offers a transparent leaderboard based on the "Cyber-RAG-Bench," testing how models handle complex security reasoning and exploit generation across diverse scenarios. For CISOs and security architects, this isn't just about which model is "smarter"; it’s about understanding the risk profile of the model you are integrating into your ecosystem. Choosing a model with poor security reasoning or weak guardrails against prompt injection can negate even the most robust infrastructure security. By moving away from opaque safety evaluations and toward open-source benchmarking, we can start treating AI risk with the same rigor as traditional vulnerability management. As agentic workflows become the norm, the ability to predict how a model will respond under adversarial pressure is rapidly becoming a prerequisite for production. Live rankings and methodology here: https://lnkd.in/eDG44hbY #CyberSecurity #AISecurity #LLMSecurity #RedTeaming #GenAI #RiskManagement
-
🚀 AI Is Transforming Cybersecurity in 2026 — And We’re Just Getting Started

This year is shaping up to be one of the most dynamic periods of change we’ve seen across the cybersecurity landscape. AI is no longer a distant enabler — it’s becoming woven into the core of our cyber tech stack, fundamentally reshaping how we defend, detect, and decide. Here are three areas that I am most excited about:

AI‑Driven Decisions for Access Management
The shift toward continuous, adaptive access is accelerating. AI-powered identity models can now evaluate real-time context, user behavior, and risk signals to make smarter, faster access decisions. This is helping organizations significantly reduce over‑permissioning while improving user experience — a balance we’ve been chasing for years.

Smarter Incident Response & Fewer False Positives
AI-driven detection and response systems are maturing fast. We’re seeing tools that not only correlate signals more effectively but also explain their reasoning with greater clarity, enabling analysts to trust and act with confidence. The reduction in false positives is creating more space for teams to focus on what matters: hunting, improving controls, and getting ahead of attackers.

A New Era for Insider Threat Models
Insider risk programs are being reimagined with AI that understands patterns — not just events. Instead of reacting to alerts, teams can now leverage behavioral baselines, anomaly detection, and predictive insights to identify risk earlier and intervene more constructively. It’s an evolution toward more proactive, more human‑centric insider threat management.

As AI continues to integrate across the entire cyber ecosystem, one thing is clear: 2026 will be a defining year in how organizations operationalize intelligence at scale. What AI-driven transformations are you most excited about this year?
-
🔐 Can your AI survive a cyberattack?

Most AI governance frameworks still treat cybersecurity like an afterthought—if it’s mentioned at all. That’s what makes this new World Economic Forum white paper so refreshing. It doesn’t just acknowledge cyber risk in AI systems—it builds a practical blueprint for managing it.

📘 What’s inside: “Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards” offers a 7-step risk framework for CISOs and business leaders, grounded in real operational realities. It includes threat trees, attack surfaces, and detailed controls—from prompt injection to model poisoning.

💡 Why this stands out: It breaks out of the siloed AI ethics vs. security debate and puts the enterprise lens back on AI risk. Instead of vague warnings, it gives precise guidance on:
– How to govern shadow AI
– When to “shift left, expand right”
– Why resilience needs to be repeated across the lifecycle
– What controls you need beyond baseline hygiene

It’s clear, practical, and refreshingly honest: not every risk is solvable upfront—but you still need to own it.

🛠️ What you can do today:
– Map your AI assets and interfaces
– Add AI-specific risks to your cyber audit scope
– Train teams on new failure modes (e.g. model evasion)
– Start tracking residual risk alongside reward potential

📣 How is your org connecting AI governance with cyber resilience? Is it one team—or still two?

Thanks to the teams at @World Economic Forum and @Oxford GCSCC for a resource that finally connects the dots.

#AIGovernance #CyberSecurity #AIrisks #CISOTalk #AICompliance #RiskManagement

===
Did you like this post? Connect or Follow 🎯 Jakub Szarmach. Want to see all my posts? Ring that 🔔. Sign up for my biweekly newsletter with the latest selection of AI Governance Resources (1,500+ subscribers) 📬.
-
𝗗𝗮𝘆 𝟭𝟮: 𝗟𝗲𝘃𝗲𝗿𝗮𝗴𝗲 𝗔𝗜/𝗚𝗲𝗻𝗔𝗜 𝘁𝗼 𝗳𝗶𝗴𝗵𝘁 𝗮𝗱𝘃𝗲𝗿𝘀𝗮𝗿𝗶𝗲𝘀

One of the most pressing challenges in cybersecurity today is the global talent shortage, with 𝗮𝗽𝗽𝗿𝗼𝘅𝗶𝗺𝗮𝘁𝗲𝗹𝘆 𝟯.𝟱 𝗺𝗶𝗹𝗹𝗶𝗼𝗻 𝘂𝗻𝗳𝗶𝗹𝗹𝗲𝗱 𝗽𝗼𝘀𝗶𝘁𝗶𝗼𝗻𝘀 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝗲𝗱 𝗯𝘆 𝟮𝟬𝟮𝟱. This gap poses substantial risks, as unfilled roles lead to increased vulnerabilities, cyberattacks, data breaches, and operational disruptions. While there are learning paths like 𝗩𝗶𝘀𝗮’𝘀 𝗣𝗮𝘆𝗺𝗲𝗻𝘁𝘀 𝗖𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗰𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗽𝗿𝗼𝗴𝗿𝗮𝗺 to help aspiring cyber professionals upskill and build careers, Generative AI (GenAI) and Agentic AI offer a scalable solution by augmenting existing teams. Together, they can handle repetitive tasks, automate workflows, enhance incident triaging, and automate code fixes and vulnerability management, enabling smaller teams to scale and maintain robust security postures. Additionally, they enhance cybersecurity efforts by improving defenses while keeping humans in the loop to make critical, informed decisions.

Here are a few concepts about GenAI in Cybersecurity that I’m particularly excited about:

1. Reducing Toil and Improving Team Efficiency
GenAI can significantly reduce repetitive tasks, enabling teams to focus on strategic priorities:
• GRC: Automates risk assessments, compliance checks, and audit-ready reporting.
• DevSecOps: Integrates AI-driven threat modeling and vulnerability scanning into CI/CD pipelines.
• IAM: Streamlines user access reviews, provisioning, and anomaly detection.

2. Extreme Shift Left
GenAI can rapidly embed “Secure-by-Design” into development processes by:
• Detecting vulnerabilities during coding and providing actionable fixes.
• Automating security testing, including fuzzing and penetration testing.

3. Proactive Threat Hunting and Detection Engineering
GenAI can enhance threat hunting by:
• Analyzing logs and sensor data to detect anomalies.
• Correlating data to identify potential threats.
• Predicting and detecting attack vectors to arm the sensors proactively.

4. Enabling SOC Automation
Security Operations Centers (SOCs) can benefit from GenAI by:
• Automating false positive filtering and alert triaging.
• Speeding up analysis and resolution with AI-powered insights.
• Allowing analysts to concentrate on high-value incidents and strategic decision-making.

𝟱. Enhancing Training and Awareness
• Delivering tailored training simulations for developers and business users.
• Generating phishing campaigns to educate employees on recognizing threats.

In 2025, I am excited about the transformative opportunities that lie ahead. Our focus remains steadfast on innovation and resilience, particularly in leveraging the power of Gen/Agentic AI to enhance user experience, advance our defenses and further strengthen the posture of the payment ecosystem.

#VISA #Cybersecurity #PaymentSecurity #12DaysofCybersecurity #AgenticAI
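To illustrate the "false positive filtering and alert triaging" idea from item 4, here is a hypothetical sketch; the fields, thresholds, and historical rates are invented for illustration, not from any SOC product:

```python
def triage(alerts, fp_history, fp_rate_cutoff=0.9):
    """Split alerts into (suppressed, queue sorted by severity).

    `alerts` is a list of dicts with "type" and "severity" (0-10);
    `fp_history` maps alert type -> historical false-positive rate.
    Types that almost always resolve as false positives are suppressed
    so analysts see the highest-value incidents first.
    """
    suppressed, queue = [], []
    for a in alerts:
        if fp_history.get(a["type"], 0.0) >= fp_rate_cutoff:
            suppressed.append(a)  # auto-filter known noise
        else:
            queue.append(a)
    queue.sort(key=lambda a: a["severity"], reverse=True)
    return suppressed, queue

alerts = [
    {"type": "port-scan", "severity": 3},
    {"type": "impossible-travel", "severity": 8},
    {"type": "dlp-keyword", "severity": 5},
]
fp_history = {"port-scan": 0.97, "dlp-keyword": 0.4}

suppressed, queue = triage(alerts, fp_history)
print(len(suppressed), [a["type"] for a in queue])
```

Even this toy version shows why humans stay in the loop: suppressed alerts are retained, not deleted, so the cutoff and the historical rates can themselves be audited and tuned.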