Integrating AI Into Existing Cybersecurity Frameworks

Explore top LinkedIn content from expert professionals.

Summary

Integrating AI into existing cybersecurity frameworks means adapting traditional security structures to protect, manage, and monitor AI systems as they become part of organizational operations. This approach recognizes that AI brings new risks—from unpredictable decision-making to expanded attack surfaces—and requires dedicated strategies to address them.

  • Update risk assessments: Include AI-specific considerations like model integrity, training data, and prompt security when reviewing your organization's cyber risks.
  • Establish oversight: Set clear roles, approval processes, and audit trails for AI-driven actions to ensure accountability and transparency.
  • Maintain continuous monitoring: Track AI performance, detect anomalies, and address model drift by extending monitoring beyond traditional system boundaries.
Summarized by AI based on LinkedIn member posts
Image Image Image
  • View profile for Frank Roppelt

    Chief Information Security Officer (CISO)

    2,751 followers

    Today, NIST released the initial preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (Cyber AI Profile), a community profile built on NIST CSF 2.0 to help organizations manage cybersecurity risk in an AI-driven world. A key section of this draft is Section 2.1, which introduces three Focus Areas that explain how AI and cybersecurity intersect in practice: 1. Securing AI System Components (Secure) AI systems introduce new assets that must be secured; models, training data, prompts, agents, pipelines, and deployment environments. This focus area emphasizes treating AI components as first-class cybersecurity assets, integrating them into governance, risk assessments, protection controls, and monitoring processes. It reinforces that AI risk should not be siloed from enterprise cybersecurity risk management. 2. Conducting AI-Enabled Cyber Defense (Defend) AI is not just something to protect, it is also a powerful defensive capability. This area focuses on using AI to enhance detection, analytics, automation, and response across security operations. At the same time, it recognizes the risks of over-reliance on automation, model integrity concerns, and the need for human oversight when AI supports security decision-making. 3. Thwarting AI-Enabled Cyber Attacks (Thwart) Adversaries are increasingly using AI to scale phishing, evade detection, and automate attacks. This focus area addresses how organizations must anticipate and counter AI-enabled threats by building resilience, improving detection of AI-driven attack patterns, and preparing for a rapidly evolving threat landscape where AI is weaponized. Why This Matters Together, Secure, Defend, and Thwart provide a practical structure for aligning AI initiatives with existing cybersecurity programs. By mapping AI-specific considerations to CSF 2.0 outcomes (Govern, Identify, Protect, Detect, Respond, Recover), the Cyber AI Profile helps organizations integrate AI security into familiar risk management practices. This is a preliminary draft, and NIST is seeking public feedback through January 30, 2026. If your organization is building, deploying, or defending with AI, now is the time to review and contribute. 🔗 https://lnkd.in/e-ETZXH8

  • View profile for Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy

    32,836 followers

    The Cybersecurity and Infrastructure Security Agency (CISA), together with other organizations, published "Principles for the Secure Integration of Artificial Intelligence in Operational Technology (OT)," providing a comprehensive framework for critical infrastructure operators evaluating or deploying AI within industrial environments. This guidance outlines four key principles to leverage the benefits of AI in OT systems while reducing risk: 1. Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle.  2. Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration 3. Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance.  4. Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans. The guidance recommends addressing AI-related risks in OT environments by: • Conducting a rigorous pre-deployment assessment. • Applying AI-aware threat modeling that includes adversarial attacks, model manipulation, data poisoning, and exploitation of AI-enabled features. • Strengthening data governance by protecting training and operational data, controlling access, validating data quality, and preventing exposure of sensitive engineering information. • Testing AI systems in non-production environments using hardware-in-the-loop setups, realistic scenarios, and safety-critical edge cases before deployment. • Implementing continuous monitoring of AI performance, outputs, anomalies, and model drift, with the ability to trace decisions and audit system behavior. • Maintaining human oversight through defined operator roles, escalation paths, and controls to verify AI outputs and override automated actions when needed. • Establishing safe-failure and fallback mechanisms that allow systems to revert to manual control or conventional automation during errors, abnormal behavior, or cyber incidents. • Integrating AI into existing cybersecurity and functional safety processes, ensuring alignment with risk assessments, change management, and incident response procedures. • Requiring vendor transparency on embedded AI components, data usage, model behavior, update cycles, cybersecurity protections, and conditions for disabling AI capabilities. • Implementing lifecycle management practices such as periodic risk reviews, model re-evaluation, patching, retraining, and re-testing as systems evolve or operating environments change.

  • View profile for Okan YILDIZ

    Global Cybersecurity Leader | Innovating for Secure Digital Futures | Trusted Advisor in Cyber Resilience

    83,231 followers

    🚨🧠 LLM Tools in Cybersecurity: The real risk isn’t the model — it’s the workflow. We’re moving into a new era of AI-powered security tooling. These systems don’t just answer questions anymore. They can: → plan investigations → chain actions → call APIs → trigger scans → modify configs → interact with real environments That’s not a chatbot. That’s an operator. What’s actually changing 👇 This isn’t just “AI in security.” It’s a shift in how work gets executed. ⚠️ Capability Compression Recon + analysis + scripting + reporting → now lives in a single interface. ➤ Defense: Treat AI workflows like privileged tooling. RBAC, monitoring, and controls should match admin-level access. ⚠️ Prompt → Action Bridge A prompt can now trigger real-world actions (tickets, scans, infra changes). ➤ Defense: • Approval gates for high-risk actions • Strict allowlists • Separate “analysis mode” vs “execution mode” ⚠️ Data Exposure Risk Sensitive logs, credentials, or internal diagrams can leak through prompts. ➤ Defense: • Default redaction • Data classification enforcement • Use controlled/self-hosted environments when needed ⚠️ Lack of Reproducibility AI gives answers… but can you explain how? ➤ Defense: • Full audit logging (prompts, tool calls, outputs) • Versioning • Change control for AI-driven actions ⚠️ Model & Tool Drift Same input → different output over time. ➤ Defense: • Version pinning • Evaluation datasets • Regression testing for workflows ⚠️ Dual-Use Risk Powerful assistants can be misused — intentionally or not. ➤ Defense: • Strong identity controls • Policy enforcement • Rate limiting • Environment isolation Practical rule 👇 Use AI for: ✅ summarizing findings ✅ triaging alerts ✅ mapping to frameworks (MITRE / OWASP) ✅ report generation ✅ checklist creation Be careful when: ⚠️ executing commands ⚠️ changing infrastructure ⚠️ accessing sensitive systems ⚠️ making compliance-impacting decisions Final thought If you deployed an AI security assistant today, could you answer: • Who used it? • What data was processed? • What actions were triggered? • What actually changed? If not — you don’t have an AI problem. You have a governance problem. 💬 Curious: Are you treating AI tools as helpers, or as operators with risk? #CyberSecurity #AISecurity #LLMSecurity #SecurityEngineering #DevSecOps #ThreatModeling #ZeroTrust #SecOps #Governance #AI #Infosec

    • +8
  • View profile for Nathaniel Alagbe CISA CISM CISSP CRISC CFE AAIA FCA

    IT Audit & GRC Leader | AI & Cloud Security | Cybersecurity | Transforming Risk into Boardroom Intelligence

    22,045 followers

    Dear AI and Cybersecurity Auditors, AI changes how risk enters your environment and expands your attack surface. Traditional cybersecurity controls no longer cover model behavior, training data, prompts, agents, and AI-driven decisions. This draft extends NIST CSF 2.0 into AI systems. It treats models, data, prompts, agents, and AI decisions as real cyber assets. It also addresses how attackers already use AI to scale speed, deception, and impact. Here is why this framework matters for security, risk, and audit leaders. 📌 AI expands the attack surface beyond infrastructure into training data, models, prompts, agents, and third-party AI services 📌 Governance shifts from IT ownership to enterprise accountability with clear risk ownership, oversight, and decision authority 📌 Traditional controls still apply, but AI requires added focus on model integrity, data provenance, output reliability, and human oversight 📌 The framework maps AI risk directly to CSF functions so teams avoid parallel AI security programs 📌 Defensive teams use AI to reduce alert fatigue, improve detection accuracy, and support faster incident response 📌 Adversaries already use AI for phishing, malware generation, social engineering, and automated attack orchestration 📌 Continuous monitoring extends beyond systems into model drift, hallucinations, and unexpected behavior 📌 Risk tolerance must account for AI failure modes, not only system outages or data loss 📌 Audit and assurance teams gain a structured way to test AI controls across Secure, Defend, and Thwart focus areas 📌 The profile supports assessment, control design, and executive reporting without adding unnecessary complexity AI security fails when teams treat AI as software. NIST IR 8596 reframes AI as a risk domain inside cybersecurity. If your organization builds, buys, or relies on AI, this profile gives you a practical path to govern, secure, and defend it with intent. #NIST #Cybersecurity #AIGovernance #AIRisk #AIControls #ITAudit #CyberRisk #AISecurity #GRC #CSF #CyberVerge ♻️ Share this with your team or repost so more professionals. 👉Follow Nathaniel Alagbe for more.

  • View profile for Razi R.

    ↳ Driving AI Innovation Across Security, Cloud & Trust | Senior PM @ Microsoft | O’Reilly Author | Industry Advisor

    13,612 followers

    AI is moving very quickly into every corner of enterprise systems, but most organizations still rely on controls designed before this shift. That gap creates uncertainty: how do you adapt traditional security and privacy frameworks to systems that generate, plan, or act in ways we cannot always predict? NIST just released Control Overlays for Securing AI Systems to begin answering that question. What it is about? The document proposes overlays, a way to extend the established SP 800-53 control catalog to AI contexts. Instead of building a new framework from scratch, NIST shows how existing controls can be tailored to cover AI-specific risks. Where overlays apply The concept draft includes overlays for generative models, predictive analytics, copilots and assistants, multi agent or autonomous systems, and for the AI development lifecycle itself. Each overlay explains how baseline controls like logging, access, testing, and assurance must shift when applied to AI. Practical insights NIST highlights that AI is not exempt from foundational security. For example, multi agent systems require controls for chaining actions and external tool use, copilots raise new privacy and memory isolation issues, and generative models demand rigorous testing against adversarial inputs. Importantly, red teaming and adversarial testing are treated as control requirements rather than optional practices. Who should take note • Security engineers integrating AI models into enterprise platforms • Product teams deploying copilots or autonomous agents with API and data access • CISOs and compliance officers mapping AI into existing governance structures • Risk management professionals who need to show regulators how AI risks are addressed Why it matters This approach gives security and compliance teams a path to integrate AI risks into the structures they already use, reducing the risk of treating AI as an ungoverned add on. It also helps avoid duplication by embedding AI security into the broader enterprise control environment.

  • View profile for Tommy Flynn

    💼 Cybersecurity Leader | AI & InfoSec Advocate | Cybersecurity Threat Intelligence | GRC | Lean Six Sigma Green Belt (NAVSEA) | Active Clearance | All views and opinions are my own.

    2,168 followers

    🔐 AI Governance Is No Longer Optional — It Must Be Integrated Into Cybersecurity Training & GRC Now As AI systems become embedded across enterprise security, threat detection, identity workflows, and automation pipelines, the risk surface is expanding faster than traditional controls can keep up. Effective AI governance must now be treated as a first-class component of cybersecurity programs—embedded directly into training, operational security, and GRC frameworks. Here’s how forward-leaning security teams are doing it: 🔎 1. Establish an AI Governance Framework Use structured governance models that mirror established security frameworks: AI risk classification: Identify AI systems, data flows, decision impact, and safety-critical components. Model lifecycle controls: Apply versioning, approval gates, drift monitoring, and performance validation. Security & privacy baselines: Enforce threat modeling, data minimization, PII controls, and red-team evaluations against prompt injection and model exploitation. 🛡 2. Integrate AI Threat Modeling Into Training Extend existing secure engineering and AppSec training to include: AI/ML-specific threat scenarios: Model poisoning, adversarial inputs, jailbreaks, training-data leakage. Secure prompt engineering: Guardrails, context restriction, least-privilege prompts, and API-level access management. Model behavior validation: Teach staff how to evaluate hallucination risk, output integrity, and system response boundaries. Supply chain considerations: Validate datasets, model sources, vendor controls, and licensing compliance. 📘 3. Embed AI Governance Into GRC Processes Treat AI systems like any other technology subject to governance, but with enhanced oversight: Policy Mapping: Align AI use with ISO 42001, NIST AI RMF, and existing enterprise security policies. AI Risk Register Entries: Document model usage, data categories, risk ratings, and compensating controls. Continuous Monitoring: Measure model drift, decision error rates, anomalous outputs, and access patterns. Control Families: Integrate AI-specific controls into your existing GRC stack—access control, data classification, audit logging, third-party risk, and model deployment workflows. 🧩 4. Build AI Governance Into Incident Response AI incidents require new playbooks: Model-driven incident categories: Output manipulation, model degradation, training data exposure, unauthorized fine-tuning. Forensic Support: Log prompts, context injection attempts, and model inference metadata. Rollback Mechanisms: Maintain approved model versions, data lineage tracking, and automated reversion paths. #Cybersecurity #AIGovernance #GRC #CyberRiskManagement #AIsecurity #InformationSecurity #SecurityEngineering #NISTAI #ISO42001 #ThreatModeling #CyberTraining #CISO #RiskAndCompliance #AIMaturity

  • View profile for Dr Joshua Scarpino

    Cybersecurity & AI Governance Executive | Founder & CEO, Assessed Intelligence | D.Sc. Cybersecurity, JM Business Law | NIST AI Safety Consortium | ForHumanity Fellow | Non-Profit Chairman | Girl Dad | Veteran

    4,656 followers

    I was reading through the OWASP Top 10 for Agentic Applications last week and have seen quite a few posts on this release. I keep coming back to the same conclusion when reading posts: AI does not introduce entirely new categories of risk, but it does require a material expansion of foundational security practices and a shifted approach to cover them adequately. In ASI02, for example, the mitigations around tool misuse, privilege abuse, and unintended execution highlight the focus on autonomy, identity, authorization, and runtime decision-making. These are not issues that can be addressed by AI governance in isolation or by traditional security controls applied after the fact. They exist precisely because agentic systems blur the boundary between decision-making, data access, and execution. Controls like least privilege, policy enforcement points, just-in-time credentials, action-level approvals, and continuous monitoring are no longer “security layers” around AI. They are critical to defining and constraining the agency itself. Without expanding and adapting these foundational practices, organizations will have exposure and, in this specific context, potential major risks inherent in agentic deployments. Separate governance models create gaps by design, siloed efforts, and ultimately risk. Effective risk management requires a single, integrated framework where AI and security controls are integrated, teams work together to address risk and exposure, and foundational expectations are established. How have you addressed the evolving AI risk within your organization? Is your organization simply preventing use? Do you have separate teams/individuals responsible for this, or have you implemented a foundational, unified approach to support organizational adoption? For reference: https://lnkd.in/e8_ykenb #cyberecurity #responsibleai #ariseframework

  • View profile for Jason Stanley

    Head of AI Research Deployment | Agent security, system-level evaluations, trustworthy AI | ServiceNow

    8,077 followers

    NIST just released a draft Cybersecurity Framework Profile for AI and it’s open for public comment. Here are my initial thoughts on how to make it stronger. I'll expand on this soon with a longer-form blog post. Quick primer: a 'profile' is NIST’s way of tailoring its Cybersecurity Framework to a specific domain. This helps for coordination. It maps AI risk into the same structure many security and audit teams already use (govern / identify / protect / detect / respond / recover). That shared language helps builders and buyers talk about expectations and evidence without inventing a new framework from scratch. Initial thoughts on how to improve: 1) Be technology-agnostic, but not pattern-agnostic. Avoiding vendor specifics is wise, but avoiding patterns risks too much fog. Risk concentrates in recurring 'vehicle classes' of AI systems: RAG, tool-calling systems, browser-connected assistants, code-execution helpers. The draft would be more actionable if it named a small set of classes and went one level deeper on the control concepts that reliably matter: action authorization, context isolation, non-human identity/keys, and supply-chain checks for connector/protocol layers (e.g., MCP-style servers). Other mature standards do this: payment card security standard (PCI DSS); industrial control security standard (IEC 62443); NIST’s zero-trust architecture guidance (SP 800-207); OWASP web app verification standard (ASVS). 2) Move from testing-occurred controls to a stronger focus on rigor Many compliance programs (e.g., SOC 2, ISO 27001) are great at confirming a process exists, but they rarely say what good testing looks like. For AI, rigor can’t be only more attacks or clearer attack-success metrics. Attacks usually arrive while the system is mid-task, under load, or on off-distribution inputs -- state really matters. A goalie defends the same shot with different success depending on chaos, fatigue, and pressure; agents and AI systems are no different. The draft should encourage evaluations that combine adversarial attempts with realistic operating context (long-horizon tasks, noisy inputs, live connectors, recovery logic), plus clear acceptance criteria and auditable evidence (test artifacts, failures, fixes). 3) Make procurement easier A concise buyer packet would help: model/system cards, a software bill of materials plus an 'AI bill' for data/models/connectors, what evals were run, what the evals covered, incident-response commitments, action-level logging / traceability. This is an important step toward shared expectations. Now’s the moment to comment so it evolves into something both builders and buyers can use. Link to the draft in the comments. #aisecurity #cybersecurity #trustworthai

  • View profile for Jatin Arora

    Managing Director - Cyber Strategy & GRC at Arcova formerly MorganFranklin Cyber

    7,329 followers

    In the landscape of AI, robust governance, risk, and security frameworks are essential to manage various risks. However, a silent yet potent threat looms: Prompt Injection. Prompt Injection exploits the design of large language models (LLMs), which treat instructions and data within the same context window. Natural language sanitization is nearly impossible, highlighting the need for architectural defenses. If these defenses are not implemented correctly, they pose significant threats to an organization's reputation, compliance, and bottom line. For instance, a chatbot designed to handle client queries 24/7 could be manipulated into revealing company secrets, generating offensive content, or connecting with internal systems. To address these challenges, a Defense-in-Depth approach is crucial for implementing AI use cases: 1. Zero-Trust for AI: Assume every prompt is hostile and establish mechanisms to validate all inputs. 2. Prompt Firewalls: Implement pattern recognition for both incoming prompts and outgoing responses. 3. Architectural Separation: Ensure no LLM has direct access to databases and APIs. It should communicate with your data without direct interaction, with an intermediate layer that includes all necessary security controls. 4. AI Bodyguards: Leverage specialized security AI models to screen prompts and responses for malicious intent. 5. Continuous Stress Testing: Engage "red teams" to actively attempt to breach your AI's defenses, identifying weaknesses before real attackers do. The future of AI is promising, but only if it is secure. Consider how you are fortifying your AI adoption. #riskmanagement #AIGovernance #cybersecurity

  • View profile for Dave Schroeder, PhD

    🇺🇸 Strategist, Cryptologist, Cyber Warfare Officer, Space Cadre, Intelligence Professional. Personal account. Opinions = my own. Sharing ≠ agreement/endorsement.

    26,162 followers

    Principles for the Secure Integration of Artificial Intelligence in Operational Technology Since the public release of ChatGPT in November 2022, artificial intelligence (AI) has been integrated into many facets of human society. For critical infrastructure owners and operators, AI can potentially be used to increase efficiency and productivity, enhance decision-making, save costs, and improve customer experience. Despite the many benefits, integrating AI into operational technology (OT) environments that manage essential public services also introduces significant risks—such as OT process models drifting over time or safety-process bypasses—that owners and operators must carefully manage to ensure the availability and reliability of critical infrastructure. This guidance—co-authored by the Cybersecurity and Infrastructure Security Agency (CISA) and Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) in collaboration with the National Security Agency’s Artificial Intelligence Security Center (NSA AISC), the Federal Bureau of Investigation (FBI), the Canadian Centre for Cyber Security (Cyber Centre), the German Federal Office for Information Security (BSI), the Netherlands National Cyber Security Centre (NCSC-NL), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK), hereafter referred to as the “authoring agencies”—provides critical infrastructure owners and operators with practical information for integrating AI into OT environments. This guidance outlines four key principles critical infrastructure owners and operators can follow to leverage the benefits of AI in OT systems while reducing risk: 1. Understand AI. Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle. 2. Consider AI Use in the OT Domain. Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration. 3. Establish AI Governance and Assurance Frameworks. Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance. 4. Embed Safety and Security Practices Into AI and AI-Enabled OT Systems. Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans. The authoring agencies encourage critical infrastructure owners and operators to review this guidance and action the principles so they can safely and securely integrate AI into OT systems. https://lnkd.in/gVtgEWMM

Explore categories