AI Governance Practices

Explore top LinkedIn content from expert professionals.

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    241,729 followers

Data governance is one of the most misunderstood topics in enterprise, because most people explain it from the inside out: policies, councils, standards, stewardship. But the business does not buy any of that. The business buys outcomes:

    → trustworthy KPIs
    → vendor and partner data you can actually use
    → faster financial close
    → fewer reporting escalations
    → smoother M&A integration
    → AI you can deploy without creating risk debt

    Most AI programs fail for boring reasons: nobody owns the data, quality is unknown, access is messy, accountability is missing.

    So let's simplify it. Data governance is four things:

    → ownership
    → quality
    → access
    → accountability

    And it becomes very practical when you think in 4 layers:

    1. Data Products (what the business consumes)
    → a named dataset with an owner and SLA
    → clear definitions + metric logic
    → documented inputs/outputs and intended use
    → discoverable in a catalog
    → versioned so changes don’t break reporting

    2. Data Management (how products stay reliable)
    → quality rules + monitoring (freshness, completeness, accuracy)
    → lineage (where it came from, where it’s used)
    → master/reference data alignment
    → metadata management (business + technical)
    → access controls and retention rules

    3. Data Governance (who decides, who is accountable)
    → data ownership model (domain owners, stewards)
    → decision rights: who can change KPI definitions, thresholds, and sources
    → issue management: triage, escalation paths, resolution SLAs
    → policy enforcement: what’s mandatory vs optional
    → risk and compliance alignment (auditability, approvals)

    4. Data Operating Model (how you scale across the enterprise)
    → domain-based setup (data mesh or not, but clear domains)
    → operating cadence: weekly issue review, monthly KPI governance, quarterly standards
    → stewardship at scale (roles, capacity, incentives)
    → cross-domain decision-making for shared metrics
    → enablement: templates, playbooks, tooling support

    If you want to start fast: pick the 10 metrics that run the business, assign an owner, define decision rights + escalation, then build the data products around them. See the sketch below for what this can look like in practice.

    If you want to stay ahead as AI reshapes work and business, you will get a lot of value from my free newsletter: https://lnkd.in/dbf74Y9E
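    As a rough illustration of layers 1 and 2, here is a minimal Python sketch of a data product contract with a named owner, an SLA, and a simple freshness check. Every name, role, and threshold in it is a hypothetical example, not something taken from the post.

    ```python
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical illustration: a "data product" contract capturing the four
    # governance basics - ownership, quality, access, accountability.
    @dataclass
    class DataProduct:
        name: str
        owner: str                 # ownership: a named, accountable owner
        version: str               # versioned so changes don't break reporting
        allowed_roles: set[str]    # access: who may consume this product
        freshness_sla: timedelta   # quality SLA: max age of the latest load
        escalation_contact: str    # accountability: where issues get triaged

    def check_freshness(product: DataProduct, last_loaded_at: datetime) -> bool:
        """Quality monitoring (layer 2): flag the product if it breaches its SLA."""
        age = datetime.now(timezone.utc) - last_loaded_at
        if age > product.freshness_sla:
            print(f"[ESCALATE to {product.escalation_contact}] "
                  f"{product.name} v{product.version} is stale by "
                  f"{age - product.freshness_sla}")
            return False
        return True

    # Example: one of the "10 metrics that run the business" (names invented).
    revenue_kpi = DataProduct(
        name="monthly_recurring_revenue",
        owner="finance-data-domain",
        version="2.1.0",
        allowed_roles={"finance_analyst", "cfo_office"},
        freshness_sla=timedelta(hours=24),
        escalation_contact="data-governance-forum",
    )
    check_freshness(revenue_kpi,
                    last_loaded_at=datetime.now(timezone.utc) - timedelta(hours=30))
    ```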

  • View profile for Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,668 followers

I’m so happy to see this! Yesterday, the ISO published a new standard, ISO/IEC 42001:2023, for AI management systems. My suspicion is that it will become as important to the AI world as ISO/IEC 27001, which arguably became the most important standard for information security management systems.

    The standard provides a comprehensive framework for establishing, implementing, maintaining, and improving an artificial intelligence management system within organisations. It aims to ensure responsible AI development, deployment, and use, addressing ethical implications, data quality, and risk management. This set of guidelines is designed to integrate AI management with organisational processes, focusing on risk management and offering detailed implementation controls.

    Key aspects of the standard include performance measurement, emphasising both quantitative and qualitative outcomes, and the importance of AI systems’ effectiveness in achieving intended results. It mandates conformity to requirements and systematic audits to assess AI systems. The standard also highlights the need for thorough assessment of AI's impact on society and individuals, stressing data quality to meet organisational needs.

    Organisations are required to document controls for AI systems and rationalise their decisions, underscoring the role of governance in ensuring performance and conformance. The standard calls for adapting management systems to include AI-specific considerations like ethical use, transparency, and accountability. It also requires continuous performance evaluation and improvement, ensuring AI systems' benefits and safety.

    ISO/IEC 42001:2023 aligns closely with the EU AI Act. The AI Act classifies AI systems into prohibited and high-risk categories, each with distinct compliance obligations. ISO/IEC 42001:2023's focus on ethical AI management, risk management, data quality, and transparency aligns with these categories, providing a pathway for meeting the AI Act’s requirements.

    The AI Act's prohibitions include specific AI systems like biometric categorisation and untargeted scraping for facial recognition. The standard may help guide organisations in identifying and discontinuing such applications. For high-risk AI systems, the AI Act mandates comprehensive risk management, registration, data governance, and transparency, which the ISO/IEC 42001:2023 framework could support. It could assist providers of high-risk AI systems in establishing risk management frameworks and maintaining operational logs, ensuring non-discriminatory, rights-respecting systems.

    ISO/IEC 42001:2023 may also aid users of high-risk AI systems in fulfilling obligations like human oversight and cybersecurity. It could potentially assist in managing foundation models and General Purpose AI (GPAI), as necessary under the AI Act. This new standard offers a comprehensive approach to managing AI systems, aiding organisations in developing AI that respects fundamental rights and ethical standards.

  • View profile for Allan Lerberg Jørgensen

    Head of the OECD Centre for Responsible Business Conduct

    7,391 followers

Today the OECD - OCDE launched its new Due Diligence Guidance for Responsible AI - the most comprehensive government-backed AI risk management framework available.

    AI has the potential to transform society for the better, enhance productivity, and solve complex challenges. But for these benefits to materialise, AI needs to be trustworthy. Whether your company is investing in, developing, or using AI, this guidance provides you with an authoritative, internationally agreed framework to:

    ➡️ Implement and demonstrate due diligence relevant to your company's position in the AI value chain
    ➡️ Support safe and responsible AI innovation, investment and uptake
    ➡️ Navigate and simplify compliance with domestic and industry AI risk management frameworks

    The new guidance is backed by all the OECD’s member countries, the EU, and 17 partner governments. It is based on and fully consistent with the OECD Guidelines for Multinational Enterprises and the OECD AI Principles.

    You can find the new guidance here: https://brnw.ch/21x074j

    And do read the accompanying blog post by the OECD's Barbara Bijelic and Rashad Abelson: "The OECD’s new responsible AI guidance: A compass for businesses in a complex terrain" on OECD.AI.

    #OECDAI #IndiaAIImpactSummit2026 #ResponsibleAI

  • View profile for Victoria Beckman

    Associate General Counsel - Cybersecurity & Privacy

    32,833 followers

The National Institute of Standards and Technology (NIST) has released a draft of its “Cybersecurity Framework Profile for Artificial Intelligence” (open for public comment until Jan 30, 2026) to help organizations think about how to strategically adopt AI while addressing emerging cybersecurity risks that stem from AI’s rapid advance.

    Building on the #NIST Cybersecurity Framework 2.0, the Cyber AI Profile translates well-established risk management concepts into AI-specific cybersecurity considerations, offering a practical reference point as organizations integrate AI into critical systems and confront AI-enabled threats.

    The Cyber AI Profile centers on three focus areas:

    • Securing AI systems: identifying cybersecurity challenges when integrating AI into organizational ecosystems and infrastructure.
    • Conducting AI-enabled cyber defense: identifying opportunities to use AI to enhance cybersecurity, and understanding challenges when leveraging AI to support defensive operations.
    • Thwarting AI-enabled cyberattacks: building resilience to protect against new AI-enabled threats.

    The Profile complements existing NIST frameworks (CSF, AI RMF, RMF) by prioritizing AI-specific cybersecurity outcomes rather than creating a standalone regime.

  • View profile for Willem Koenders

    Global Leader in Data Strategy

    16,491 followers

Over the past 10+ years, I’ve had the opportunity to author or contribute to over 100 #datagovernance strategies and frameworks across all kinds of industries and organizations. Every one of them had its own challenges, but I started to notice something: there’s actually a consistent way to approach #data governance that seems to work as a starting point, no matter the region or the sector. I’ve put that into a single framework I now reuse and adapt again and again.

    Why does it matter? Getting this framework in place early is one of the most important things you can do. It helps people understand what data governance is (and what it isn’t), sets clear expectations, and makes it way easier to drive adoption across teams. A well-structured framework provides a simple, repeatable visual that you can use over and over again to explain data governance and how you plan to implement it across the organization. You’ll find the visual attached.

    I broke it down into five core components:

    🔹 #Strategy – This is the foundation. It defines why data governance matters in your org and what you’re trying to achieve. Without it, governance will be or become reactive and fragmented.

    🔹 #Capability areas – These are the core disciplines like policies & standards, data quality, metadata, architecture, and more. They serve as the building blocks of governance, making sure that all the essential topics are covered in a clear and structured way.

    🔹 #Implementation – This one is a bit unique because most high-level frameworks leave it out. It’s where things actually come to life. It’s about defining who’s doing what (roles) and where they’re doing it (domains), so governance is actually embedded in the business, not just talked about. This is where your key levers of adoption sit.

    🔹 #Technology enablement – The tools and platforms that bring governance to life. From catalogs to stewardship platforms, these help you scale governance across teams, systems, and geographies.

    🔹 #Governance of governance – Sounds meta, but it’s essential. This is how you make sure the rest of the framework is actually covered and tracked, with the right coordination, forums, metrics, and accountability to keep things moving and keep each other honest.

    In the coming weeks, I’ll go a bit deeper into one or two of these. For the full article ➡️ https://lnkd.in/ek5Yue_H

  • View profile for Amanda Bickerstaff

    Educator | AI for Education Founder | Keynote | Researcher | LinkedIn Top Voice in Education

    89,911 followers

Common Sense Media recently released a comprehensive risk assessment of AI teacher assistants/lesson-planning tools. Their findings reveal that while these tools promise increased productivity and creative support, they're also creating "invisible influencers" that could fundamentally undermine educational quality. Unlike GenAI foundation model chatbots, these tools are specifically designed for instructional planning and classroom use, and are rapidly being adopted across districts.

    Key concerns from their report:

    • "Invisible influencers" in student learning: AI-generated content directly shapes what students learn through potentially biased perspectives and historical inaccuracies that teachers may miss; evidence also shows these tools suggest different approaches and responses based on student race/gender.
    • The "outsourced thinking" problem: tools make it dangerously easy to push unreviewed AI instructional content straight to classrooms, while novice teachers lack the experience to spot subtle errors and biases.
    • High-stakes outputs: IEP and behavior plan generators create official-looking documents that could impact student educational trajectories, even though these plans should be human-generated (and in the case of IEP goals are mandated to be human-generated).
    • Undermining high-quality instructional materials: without proper integration, these tools fragment learning and can undermine coherent, research-backed curricula.

    Recommendations from the report:

    • Experienced educator oversight required for all AI-generated educational content
    • Clear district policies and guidelines for AI teacher assistant implementation
    • Integration with existing high-quality curricula rather than replacement of established materials
    • Robust teacher training on identifying bias and evaluating AI outputs
    • Careful oversight of real-time AI feedback tools that interact directly with students

    We'd also recommend foundational AI literacy for teachers before they begin using GenAI teacher assistants, so that they are aware of the potential limitations.

    While AI teacher assistants aren't inherently problematic, they require the same careful implementation and oversight we'd expect for any tool that directly impacts student learning. The potential for enhanced productivity is real, but so are the risks to educational equity and quality. This report underscores the urgent need for GenAI EdTech tool makers to provide evidence of how their tools mitigate these issues, along with evidence-based policies and professional development to help educators navigate AI tools responsibly. All of which underlines how important AI literacy is for the 2025-2026 school year.

    Link in the comments to check out the full report. Also check out our "5 Questions to Ask GenAI EdTech Providers" resource in the comments if you are planning to implement any of these tools in your school or district.

    #AIinEducation #ailiteracy #Education #K12 AI for Education

  • View profile for Bertalan Meskó, MD, PhD

    The Medical Futurist, Author of Your Map to the Future, Global Keynote Speaker, and Futurist Researcher

    366,385 followers

BREAKING! The FDA just released this draft guidance, titled "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations," which aims to provide industry and FDA staff with a Total Product Life Cycle (TPLC) approach for developing, validating, and maintaining AI-enabled medical devices.

    The guidance is important even in its draft stage in providing more detailed, AI-specific instructions on what regulators expect in marketing submissions, and how developers can control AI bias.

    What’s new in it?

    1) It requests clear explanations of how and why AI is used within the device.
    2) It requires sponsors to provide adequate instructions, warnings, and limitations so that users understand the model’s outputs and scope (e.g., whether further tests or clinical judgment are needed).
    3) It encourages sponsors to follow standard risk-management procedures, and stresses that misunderstanding or incorrect interpretation of the AI’s output is a major risk factor.
    4) It recommends analyzing performance across subgroups to detect potential AI bias (e.g., different performance in underrepresented demographics).
    5) It recommends robust testing (e.g., sensitivity, specificity, AUC, PPV/NPV) on datasets that match the intended clinical conditions.
    6) It recognizes that AI performance may drift (e.g., as clinical practice changes); sponsors are therefore advised to maintain ongoing monitoring, identify performance deterioration, and enact timely mitigations.
    7) It discusses AI-specific security threats (e.g., data poisoning, model inversion/stealing, adversarial inputs) and encourages sponsors to adopt threat modeling and testing (fuzz testing, penetration testing).
    8) It proposes public-facing FDA summaries (e.g., 510(k) Summaries, De Novo decision summaries) to foster user trust and better understanding of the model’s capabilities and limits.
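    Points 4 and 5 above amount to computing standard classification metrics per subgroup. Below is a minimal Python sketch of what such an analysis could look like with scikit-learn; the column names, subgroup labels, and 0.5 decision threshold are illustrative assumptions, and the random data stands in for a real validation set.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Hypothetical evaluation data: true labels, model scores, and a subgroup
    # column (e.g., site or demographic). All names here are invented.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "y_true": rng.integers(0, 2, 1000),
        "y_score": rng.random(1000),
        "subgroup": rng.choice(["site_A", "site_B", "site_C"], 1000),
    })
    THRESHOLD = 0.5  # assumed operating point

    for name, g in df.groupby("subgroup"):
        y_pred = (g["y_score"] >= THRESHOLD).astype(int)
        tn, fp, fn, tp = confusion_matrix(g["y_true"], y_pred, labels=[0, 1]).ravel()
        sensitivity = tp / (tp + fn)   # true positive rate
        specificity = tn / (tn + fp)   # true negative rate
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")  # positive predictive value
        npv = tn / (tn + fn) if (tn + fn) else float("nan")  # negative predictive value
        auc = roc_auc_score(g["y_true"], g["y_score"])
        print(f"{name}: sens={sensitivity:.2f} spec={specificity:.2f} "
              f"ppv={ppv:.2f} npv={npv:.2f} auc={auc:.2f}")
    ```

    Large gaps between subgroups on any of these metrics are the kind of signal the guidance asks sponsors to investigate and mitigate.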

  • View profile for Ian Romero

    Ultra runner, family man, COO. I help businesses grow through better systems, stronger teams, and smarter use of technology

    2,694 followers

Claude.ai just announced that their Microsoft 365 connector is now available on EVERY plan, including free and personal accounts. That means ANY of your end users with a free Claude account can now connect it directly to your company's Microsoft 365 environment and start pulling in emails, files, spreadsheets, whatever they have access to.

    That should make you uncomfortable. Because unless your tenant requires admin approval for third-party app connections, any employee can enable this on their own. No ticket, no approval, no one in leadership even knows it happened. And now sensitive client data is sitting inside a platform you didn't evaluate, didn't approve, and don't control. A public AI model is potentially learning from your sensitive data, and almost certainly storing it.

    This isn't a Claude problem. Every major AI platform is racing to build connectors into your business tools, and every one of them is a potential data exposure event if you're not ready.

    Here's what I'd recommend doing as soon as possible:

    - Lock down third-party app permissions. Require admin approval for all app connections in your Microsoft 365 tenant. If you're not sure whether this is on, assume it isn't.
    - Audit your environment. Do you know where your sensitive data lives and who can access it? Most companies find out the hard way that employees are over-permissioned, and AI makes that exponentially more dangerous because it makes finding and extracting data faster than ever.
    - Communicate and educate. Most employees aren't being reckless; they just don't know this is a problem. Send a simple message this week: don't connect any AI tools to company systems without approval. Then start building a real AI use policy, even a one-pager.
    - Review your client agreements. If you handle sensitive client data, your contracts probably don't address AI processing yet. Close that gap before a client asks about it.

    This isn't about being anti-AI. Every new AI capability is a new governance question, and most businesses aren't asking it fast enough. At the same time, it's imperative that companies start preparing for AI integration, because it is inevitable for those that want to move forward with technology in a meaningful way.

    Have questions? Shoot me a message. Client or not, I'm happy to chat more if I can help!
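    For the first recommendation, one way to check and enforce admin approval programmatically is Microsoft Graph's authorization policy endpoint, which controls whether ordinary users can consent to third-party apps. A minimal sketch follows, assuming you already hold an admin access token with the Policy.ReadWrite.Authorization permission; the token placeholder, and the choice to clear all user-consent grant policies outright, are assumptions to adapt to your tenant.

    ```python
    import requests

    # Placeholder: acquire via your admin identity (e.g., an MSAL client
    # credentials flow) with the Policy.ReadWrite.Authorization permission.
    ACCESS_TOKEN = "<admin-access-token>"
    HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}",
               "Content-Type": "application/json"}
    URL = "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

    # 1) Check whether ordinary users can currently consent to third-party apps.
    policy = requests.get(URL, headers=HEADERS).json()
    grant_policies = (policy.get("defaultUserRolePermissions", {})
                            .get("permissionGrantPoliciesAssigned", []))
    print("User-consent grant policies:", grant_policies)  # non-empty => users can consent

    # 2) Disable user consent entirely, so every app connection (including AI
    #    connectors) goes through admin approval / the admin consent workflow.
    if grant_policies:
        resp = requests.patch(URL, headers=HEADERS, json={
            "defaultUserRolePermissions": {"permissionGrantPoliciesAssigned": []}
        })
        resp.raise_for_status()  # expect 204 No Content on success
        print("User consent disabled; app connections now require admin approval.")
    ```

    The same setting is reachable in the Entra admin center under Enterprise applications → Consent and permissions, if you prefer clicking to scripting.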

  • View profile for Barbara C.

    Board & Executive Advisor | Strategic transformation, growth and governance | AI, Cloud, IoT | C-level Executive | ex-Amazon Web Services, Orange

    14,966 followers

Europe just defined how AI must be secured.

    On 15 Jan, the European Telecommunications Standards Institute (ETSI) published a standard, EN 304 223, defining baseline cybersecurity requirements for AI models and systems.

    ➡️ A common set of AI cybersecurity controls, usable across jurisdictions, vendors, supply chains.

    Why this matters now
    Traditional cybersecurity was built for software & networks. AI changes the attack surface:
    ▫️ training data can be poisoned
    ▫️ models can be manipulated or obfuscated
    ▫️ prompts can be indirectly injected
    ▫️ behaviour can drift in invisible ways
    ➡️ EN 304 223 explicitly names these risks, treating them as security failures.

    How this takes effect
    EN 304 223 is already being pulled into procurement processes, security questionnaires, internal audits, vendor due diligence, insurance reviews. With the EU AI Act, high-risk AI systems will need to demonstrate compliance through conformity assessment, either via internal control with robust technical documentation or through assessment by a notified body.
    ➡️ EN 304 223 is the operational “how” that law and auditors will rely on.

    The real breakthrough: lifecycle security
    The standard defines 13 principles and 72 trackable requirements, organised across 5 phases of the AI system lifecycle:
    1️⃣ secure design
    2️⃣ secure development
    3️⃣ secure deployment
    4️⃣ secure maintenance
    5️⃣ secure end of life
    ➡️ Retraining a model = redeploying a system from a security standpoint. AI security becomes a continuous operational discipline.

    Accountability made operational
    EN 304 223 assigns accountability across 3 technical roles:
    ✔️ developers
    ✔️ system operators
    ✔️ data custodians
    ➡️ AI risk lives between teams. This standard makes ownership explicit.

    The target: production AI
    EN 304 223 applies to deep neural networks and GenAI models already embedded in products, services, and operational decisions. Academic or research environments are excluded.
    ➡️ This standard is about AI that is live, scaled, and consequential, particularly in finance, healthcare, and critical infrastructure.

    What “compliance” means
    Complying with legal, audit, procurement, and insurance expectations using EN 304 223 as evidence: mapping controls across the lifecycle and ownership across roles.

    What boards and executives should do now
    1️⃣ Mandate an AI inventory: what AI is live, where, doing what, using which data pipelines, supplied by whom.
    2️⃣ Assign named accountability across the lifecycle: align to the standard’s role logic per system.
    3️⃣ Require an AI security evidence pack per high-impact system, mapped across its lifecycle.
    4️⃣ Decide your assurance route early: for high-risk systems, plan for internal control vs. notified body assessment.

    The bigger signal
    The EU is turning AI security into auditable infrastructure. Trustworthy AI is becoming a standard of execution. For companies operating globally, proof of AI security is becoming the baseline.

    #AI #GenAI #AIGovernance #AISecurity #Boardroom
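    As a loose illustration of board actions 1 and 3, here is a minimal Python sketch of an AI inventory record that maps one production system to the standard's five lifecycle phases and three technical roles. All field names and example values are invented for illustration; they are not taken from EN 304 223.

    ```python
    from dataclasses import dataclass, field

    # The five lifecycle phases named in the post (labels paraphrased).
    LIFECYCLE_PHASES = ["secure_design", "secure_development", "secure_deployment",
                        "secure_maintenance", "secure_end_of_life"]

    @dataclass
    class AISystemRecord:
        name: str
        business_use: str
        data_pipelines: list[str]
        supplier: str
        # The standard's three technical roles, each with a named owner:
        developer: str
        system_operator: str
        data_custodian: str
        # Evidence pack: lifecycle phase -> list of evidence artifacts
        evidence: dict[str, list[str]] = field(default_factory=dict)

        def coverage_gaps(self) -> list[str]:
            """Phases with no evidence yet - what an audit would flag first."""
            return [p for p in LIFECYCLE_PHASES if not self.evidence.get(p)]

    # Hypothetical inventory entry for one high-impact system.
    fraud_model = AISystemRecord(
        name="card-fraud-scoring",
        business_use="real-time transaction fraud decisions",
        data_pipelines=["payments_stream", "customer_profile_db"],
        supplier="in-house",
        developer="ml-platform-team",
        system_operator="payments-ops",
        data_custodian="data-office",
        evidence={"secure_design": ["threat_model_v3.pdf"],
                  "secure_deployment": ["pen_test_2025Q4.pdf"]},
    )
    print("Evidence gaps:", fraud_model.coverage_gaps())
    ```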

  • View profile for Oliver Patel, AIGP, CIPP/E, MSc
    Oliver Patel, AIGP, CIPP/E, MSc Oliver Patel, AIGP, CIPP/E, MSc is an Influencer

    Head of Enterprise AI Governance @ AstraZeneca | Trained thousands of professionals on AI governance, AI literacy & the EU AI Act.

    49,269 followers

Global AI Law Snapshot 🇪🇺🇨🇳🇺🇸

    As the global AI race heats up, take stock of the 3 main players. This snapshot focuses on laws which a) apply across the whole jurisdiction and b) apply to companies developing & using AI.

    ↪️ If you want my full comparative breakdown, including a pdf pack of high-res charts covering all 13 themes, comment 'AI' and I will ping it to you soon!

    Comprehensive AI law
    🇪🇺 ✅ AI Act applies across EU
    🇨🇳 ❌ National AI law in development
    🇺🇸 ❌ No comprehensive federal AI law

    Narrow AI laws
    🇪🇺 ✅ Digital Services Act, Product Liability Directive etc.
    🇨🇳 ✅ Deep Synthesis Regulations, Generative AI Services Measures etc.
    🇺🇸 ✅ National AI Initiative Act, Removing Barriers to American AI Leadership etc.

    Regional or local laws
    🇪🇺 ❌ AI Act creates harmonised legal regime
    🇨🇳 ✅ Regional laws in Shenzhen & Shanghai
    🇺🇸 ✅ AI laws in California, Colorado, Utah etc.

    Technical standards
    🇪🇺 ❌ CEN/CENELEC technical standards in development
    🇨🇳 ✅ TC260 published standard on generative AI security
    🇺🇸 ✅ NIST AI Risk Management Framework

    Promoting AI innovation
    🇪🇺 ✅ AI Act regulatory sandboxes & SME support
    🇨🇳 ✅ Strategy to be the global AI leader by 2030
    🇺🇸 ✅ New Executive Order strongly prioritises AI innovation

    Trade and/or export controls
    🇪🇺 ✅ Restrictions on export of dual-use technology
    🇨🇳 ✅ Updated export control regulations restrict AI-related exports
    🇺🇸 ✅ Restrictions on exports of advanced chips & model weights

    Prohibited AI
    🇪🇺 ✅ AI practices prohibited (e.g., emotion recognition in the workplace)
    🇨🇳 ✅ Prohibitions on which AI systems can be used in public-facing applications
    🇺🇸 ❌ Although various AI uses would be illegal, there are no explicit prohibitions

    High-risk AI
    🇪🇺 ✅ Various AI systems classified as high-risk, including AI used in recruitment
    🇨🇳 ✅ Generative AI systems for public use considered high-risk
    🇺🇸 ❌ No specific high-risk AI systems in U.S. federal law

    AI system approval
    🇪🇺 ✅ 3rd-party conformity assessment required for certain high-risk AI systems
    🇨🇳 ✅ Government approval required before public release of LLMs
    🇺🇸 ✅ FDA approval required for AI medical devices

    Development requirements
    🇪🇺 ✅ Extensive requirements for high-risk AI system development
    🇨🇳 ✅ Detailed requirements for development of public-facing generative AI
    🇺🇸 ❌ No explicit AI development requirements in U.S. federal law

    Transparency & disclosure
    🇪🇺 ✅ Extensive requirements in AI Act
    🇨🇳 ✅ Content labelling required for deepfakes
    🇺🇸 ✅ FTC enforces against unfair & deceptive AI use

    Public registration of AI
    🇪🇺 ✅ Public database for high-risk AI systems
    🇨🇳 ✅ Central algorithm registry for certain AI systems
    🇺🇸 ❌ No general requirements to register AI systems

    AI literacy requirements
    🇪🇺 ✅ AI Act requires organisations to implement AI literacy
    🇨🇳 ❌ No corporate AI literacy requirements, but schools must teach AI
    🇺🇸 ❌ No corporate AI literacy requirements
