MASSIVE AI REGULATION NEWS!!! The European AI Office has published the first draft of its General-Purpose AI Code of Practice, marking a major step in AI governance. This draft forms part of the EU’s strategy to create a comprehensive framework for artificial intelligence, guiding providers on compliance, accountability, and societal benefit. Following consultation with nearly 1,000 stakeholders, the final version is due in May 2025.

Article 55 of the AI Act sets out the obligations for providers of general-purpose AI models with systemic risk, including standardised model evaluations, risk assessments, serious incident tracking, and cybersecurity measures. Until harmonised standards are issued, providers can rely on codes of practice (defined in Article 56) to demonstrate compliance with these obligations. Article 56 enables the AI Office to facilitate Union-level codes of practice covering these obligations, developed collaboratively with relevant stakeholders. These codes must be detailed, regularly monitored, and adaptable to technological change, ultimately ensuring a high standard of compliance across the EU.

The draft focuses on four core objectives aligned with the EU AI Act. First, it offers clear compliance pathways by detailing how providers can document and validate adherence to the Act, particularly for advanced general-purpose AI models. Second, it fosters transparency across the AI value chain, ensuring downstream developers understand model functionalities and limitations. Copyright compliance is another critical area, with provisions to safeguard creators’ rights while balancing innovation. Finally, the Code establishes a framework for continuous monitoring of models with systemic risks, from development to deployment.

Providers of general-purpose AI models bear unique responsibilities under the Code. These include maintaining comprehensive technical documentation, implementing acceptable use policies to prevent misuse, and complying with EU copyright law, including the Text and Data Mining exception. Proportionate compliance measures are introduced for small and medium enterprises to support innovation while ensuring accountability.

Providers of models with systemic risk must assess and mitigate those risks through measures tailored to each model’s risk profile, including rigorous testing, safety reports, and incident response protocols. Governance structures extend accountability to executive level, ensuring organisational oversight of AI risks. Providers must also implement safeguards to protect proprietary assets and manage systemic risks effectively. The Code mandates continuous evidence collection and lifecycle-based risk assessments, covering all stages of development and deployment. Public transparency is emphasised, with providers required to publish safety frameworks and compliance information, including text and data mining practices. Standardised documentation templates aim to ease compliance, particularly for SMEs.
EU AI Initiatives
-
Yesterday, the European Commission released two proposals that will materially affect how HR and TA teams use AI and manage people data: the Digital Omnibus Regulation and the AI Act Simplification Amendment.

1. High-Risk AI Timeline Adjustments
The fixed August 2026 enforcement date for high-risk AI no longer applies. Obligations will now begin once the Commission confirms supporting tools (standards, guidance) are available, followed by a six-month transition for HR-related high-risk systems. A new final deadline requires compliance no later than December 2027. This creates a more realistic adoption window for HR technology and recruitment AI.

2. Key GDPR Changes for HR
The Digital Omnibus updates the GDPR to support modern people analytics and AI use:
• Clearer definition of personal data, reducing uncertainty when using aggregated or pseudonymised data.
• Permission for residual special-category data in AI training under strict safeguards.
• Confirmed allowance for biometric verification when controlled by the employee.
• Harmonised DPIA requirements across the EU.
• Data breach reporting extended to 96 hours, with a unified EU reporting portal.

3. Streamlined Data and AI Governance
Several data laws are consolidated into a clearer Data Act, simplifying vendor oversight and data portability. The AI Act amendment also introduces more practical obligations, expanded simplifications for SMEs and small mid-caps, stronger EU-level oversight, and support for using sensitive data to detect or correct bias in hiring and workforce systems.

What This Means for HR and TA: The proposals provide clearer rules, reduced administrative burden, a more achievable timeline for high-risk AI, and better support for fair and compliant AI in recruitment and workforce management.

Both the Digital Omnibus and the AI Act amendment are Commission proposals and are not yet law. They now enter the EU’s ordinary legislative procedure, where the European Parliament and the Council will review, amend, and negotiate the texts before jointly adopting them. Once approved and published in the Official Journal, each Regulation will enter into force and begin applying on the dates specified in the final legislation. If you’d like a tailored breakdown for your organisation or HR tech stack, feel free to get in touch.
-
2 August 2025: The EU AI Act is now live.

Today, the EU AI Act officially begins to apply to general-purpose AI (GPAI) models, including LLMs, multimodal AI, and other foundational technologies.

What changes today? If you’re building, deploying, or integrating foundation models (proprietary and open-source) in the EU, or using vendors who do, you’re now responsible for:
🔹 Transparency: documenting capabilities, intended use cases, known limitations, and risks, and making this information publicly available
🔹 Model governance: detailing the data governance, evaluation methods, robustness, and safety measures of the model
🔹 Copyright compliance: demonstrating lawful use of training data and enabling content owners to opt out of future training

Who’s aligned and who isn’t? As of today:
🔹 Signed the EU AI Code of Practice: Microsoft, Google, OpenAI, Anthropic, Mistral
🔹 Partial signer: xAI (safety chapter only)
🔹 Not signed: Meta, Apple, Baidu, Alibaba, Tencent
🔹 Signed the AI Pact (but not yet the Code): SAP, Salesforce, IBM, Samsung, Lenovo, Telefónica

Over 200 companies have signed the AI Pact, committing to voluntarily implement AI governance measures ahead of full enforcement in 2026.

What if your AI vendor didn’t sign? Not signing the Pact or Code increases risk. The EU AI Office has been clear: “Non-signatories will face more inspections, less regulatory guidance, and a higher burden of proof once the Act is enforced.”

If your vendor (or internal model team) has not signed, boards and CxOs should:
1️⃣ Request a compliance roadmap
▫️ Have they mapped their obligations under the Act?
▫️ Can they provide transparency documentation, risk mitigation plans, and copyright compliance evidence?
2️⃣ Assess third-party model exposure
▫️ What foundation models are integrated into your stack?
▫️ Do you use APIs from non-signatory vendors?
3️⃣ Prepare AI compliance audits
▫️ Begin documentation and model-level governance
4️⃣ Integrate AI into your enterprise risk framework
▫️ Assign ownership for AI compliance at C-level
▫️ Ensure traceability of AI systems across Legal and Procurement

What to do now:
✔️ Vendors: sign the Code of Practice or publish your alignment measures
✔️ Users: demand AI assurance documentation from all providers
✔️ Boards: treat AI compliance like data privacy: measurable and monitored quarterly
✔️ Risk/Compliance leaders: use this enforcement date to trigger vendor reviews

The EU AI Act is live, inspections are next, and there’s no firewall between a vendor’s silence and a board’s liability. You don’t need to be first. But you need to be prepared.

#EUAIAct #AIGovernance #AI #Boardroom #Stratedge
-
"This white paper offers a comprehensive overview of how to responsibly govern AI systems, with particular emphasis on compliance with the EU Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework for AI. It also outlines the evolving risk landscape that organizations must navigate as they scale their use of AI. These risks include:
▪ Ethical, social, and environmental risks – such as algorithmic bias, lack of transparency, insufficient human oversight, and the growing environmental footprint of generative AI systems.
▪ Operational risks – including unpredictable model behavior, hallucinations, data quality issues, and ineffective integration into business processes.
▪ Reputational risks – resulting from stakeholder distrust due to errors, discrimination, or mismanaged AI deployment.
▪ Security and privacy risks – encompassing cyber threats, data breaches, and unintended information disclosure.

To mitigate these risks and ensure AI is used responsibly, in this white paper we propose a set of governance recommendations, including:
▪ Ensuring transparency through clear communication about AI systems’ purpose, capabilities, and limitations.
▪ Promoting AI literacy via targeted training and well-defined responsibilities across functions.
▪ Strengthening security and resilience by implementing monitoring processes, incident response protocols, and robust technical safeguards.
▪ Maintaining meaningful human oversight, particularly for high-impact decisions.
▪ Appointing an AI Champion to lead responsible deployment, oversee risk assessments, and foster a safe environment for experimentation.

Lastly, this white paper acknowledges the key implementation challenges facing organizations: overcoming internal resistance, balancing innovation with regulatory compliance, managing technical complexity (such as explainability and auditability), and navigating a rapidly evolving and often fragmented regulatory landscape."

Agata Szeliga, Anna Tujakowska, and Sylwia Macura-Targosz, Sołtysiński Kawecki & Szlęzak
-
Usually I would review an #AI reading on Sundays. But today I’d be remiss not to celebrate the occasion: Art. 4 #AILiteracy became applicable as one of the first requirements of the EU #AIAct. But what does AI literacy entail? The goal of this post is to demystify the requirement and address frequently asked questions in a non-legalese way:

➡️ What does Art. 4 AI Act require?
✅ Art. 4 AI Act requires that providers and deployers of AI systems take reasonable measures to ensure their employees and other persons handling AI systems on their behalf have a sufficient level of AI literacy.

➡️ How does the AI Act define AI literacy?
✅ Art. 3(56) AI Act defines AI literacy as the ability to make an informed deployment of AI systems and to gain awareness about their opportunities, risks, and possible harms.

➡️ Do I have to implement Art. 4 AI Act?
✅ If your organization develops or uses AI in the EU, it has to implement Art. 4 AI Act.

➡️ Can AI literacy be implemented in a standardized way?
✅ Unfortunately not. Art. 4 AI Act is context-specific. In other words, when implementing AI literacy, you have to take into consideration the training and technical background of your employees, the context in which your AI systems will be used, and the perspective of persons who will be affected by your AI systems.

➡️ What fines are associated with non-compliance?
✅ There are no direct fines associated with non-compliance with Art. 4 AI Act.

➡️ Why should I bother implementing Art. 4 AI Act in that case?
✅ First, because it's a legal requirement regardless. Second, because you don't want to be in the awkward position where your AI systems cause harm and it then turns out you did not train your employees in accordance with the law. Third, the likelihood that your AI systems cause harm is lower if your employees know how to use them effectively and ethically.

➡️ I have not yet started implementing Art. 4 AI Act. Am I in trouble?
✅ Not yet. In an AI Pact workshop on AI literacy in December 2024, the European AI Office suggested enforcement of AI literacy would not begin before August 2025.

➡️ How should I approach implementing Art. 4 AI Act?
✅ There is no one-size-fits-all solution to AI literacy. However, to get started, you may consider a three-step approach:
1️⃣ Ensure a baseline level of AI literacy among employees.
2️⃣ Create a mapping of AI systems, including your role (provider or deployer) and the level of risk (high-risk or not).
3️⃣ Invest in further role- and context-specific training depending on 2️⃣.

➡️ I still have questions about AI literacy. What can I do?
✅ You're welcome to register for one of my upcoming webinars here (https://lnkd.in/deY_rMUp). The AI Office will also be running another webinar on AI literacy, to which you can navigate here (https://lnkd.in/du3AiMqe).

💡 Bottom line: Don't panic. But do get started. And: We at ada are happy to help (https://lnkd.in/dRYy5dsF). Happy #AILiteracy Day!
-
The G7 Toolkit for Artificial Intelligence in the Public Sector, prepared by OECD.AI and UNESCO, provides a structured framework for guiding governments in the responsible use of AI and aims to balance the opportunities & risks of AI across public services.
✅ a resource for public officials seeking to leverage AI while balancing risks. It emphasizes ethical, human-centric development w/appropriate governance frameworks, transparency, & public trust.
✅ promotes collaborative/flexible strategies to ensure AI's positive societal impact.
✅ will influence policy decisions as governments aim to make public sectors more efficient, responsive, & accountable through AI.

Key Insights/Recommendations:

𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 & 𝐍𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐞𝐬
➡️ importance of national AI strategies that integrate infrastructure, data governance, & ethical guidelines.
➡️ different G7 countries adopt diverse governance structures—some opt for decentralized governance; others have a single leading institution coordinating AI efforts.

𝐁𝐞𝐧𝐞𝐟𝐢𝐭𝐬 & 𝐂𝐡𝐚𝐥𝐥𝐞𝐧𝐠𝐞𝐬
➡️ AI can enhance public services, policymaking efficiency, & transparency, but governments must address concerns around security, privacy, bias, & misuse.
➡️ AI usage in areas like healthcare, welfare, & administrative efficiency demonstrates its potential; ethical risks like discrimination or lack of transparency remain a challenge.

𝐄𝐭𝐡𝐢𝐜𝐚𝐥 𝐆𝐮𝐢𝐝𝐞𝐥𝐢𝐧𝐞𝐬 & 𝐅𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤𝐬
➡️ focus on human-centric AI development while ensuring fairness, transparency, & privacy.
➡️ some members have adopted additional frameworks like algorithmic transparency standards & impact assessments to govern AI's role in decision-making.

𝐏𝐮𝐛𝐥𝐢𝐜 𝐒𝐞𝐜𝐭𝐨𝐫 𝐈𝐦𝐩𝐥𝐞𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧
➡️ provides a phased roadmap for developing AI solutions—from framing the problem, prototyping, & piloting solutions to scaling up and monitoring their outcomes.
➡️ engagement + stakeholder input is critical throughout this journey to ensure user needs are met & trust is built.

𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬 𝐨𝐟 𝐀𝐈 𝐢𝐧 𝐔𝐬𝐞
➡️ use cases include AI tools in policy drafting, public service automation, & fraud prevention. The UK's Algorithmic Transparency Recording Standard (ATRS) and Canada's AI impact assessments serve as examples of operational frameworks.

𝐃𝐚𝐭𝐚 & 𝐈𝐧𝐟𝐫𝐚𝐬𝐭𝐫𝐮𝐜𝐭𝐮𝐫𝐞
➡️ encourages G7 members to open up government datasets & ensure interoperability.
➡️ countries are investing in technical infrastructure to support digital transformation, such as shared data centers and cloud platforms.

𝐅𝐮𝐭𝐮𝐫𝐞 𝐎𝐮𝐭𝐥𝐨𝐨𝐤 & 𝐈𝐧𝐭𝐞𝐫𝐧𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐂𝐨𝐥𝐥𝐚𝐛𝐨𝐫𝐚𝐭𝐢𝐨𝐧
➡️ importance of collaboration across G7 members & international bodies like the EU and the Global Partnership on Artificial Intelligence (GPAI) to advance responsible AI.
➡️ governments are encouraged to adopt incremental approaches, using pilot projects & regulatory sandboxes to mitigate risks & scale successful initiatives gradually.
-
The European Commission has published its guidelines on the AI Act's rules for general-purpose AI models. This is a key piece of administrative guidance, setting out the legal interpretation of provisions for which the Commission will be the sole enforcer. Some elements had already been previewed to the European AI Board; others are new.

The compute threshold for what qualifies as a GPAI model was increased from 10²² to 10²³ floating-point operations (FLOP), which for the Commission reflects the amount of compute typically required to train models with 1 billion parameters. In addition, the model must be able to produce text, audio, images or videos. For downstream modifications, the threshold was set at one third of the computational resources used to train the original model. The Commission clarified that, if this threshold is passed, the modification will be treated as placing a new model on the market, and the downstream modifier will have to comply with the AI Act immediately, since the grandfathering clause will not apply.

The guidelines also explain how the open source exemption will be applied, including a series of practices that would not be deemed acceptable, such as placing restrictions on usage, requiring additional licensing, or restraining the public availability of parameters, including the model's weights.

Finally, there is an important clarification on the grandfathering clause for models already on the EU market, which have until August 2027 to comply with the AI Act. In these cases, providers are not expected to “conduct retraining or unlearning of models, where it is not possible to do this for actions performed in the past, where some of the information about the training data is not available, or where its retrieval would cause the provider disproportionate burden.” My full write-up for MLex.
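As a rough illustration of the two quantitative thresholds described above, here is a minimal sketch. The figures (10²³ FLOP and the one-third rule) come from the post; the function and variable names are hypothetical, and this is not a substitute for reading the guidelines themselves.

```python
# Illustrative sketch of the two compute thresholds summarised above.
# Figures follow the post; names and structure are hypothetical.

GPAI_THRESHOLD_FLOP = 1e23       # training compute above which a model is
                                 # presumed to qualify as a GPAI model
DOWNSTREAM_FRACTION = 1.0 / 3.0  # share of the original training compute above
                                 # which a downstream modification is treated
                                 # as placing a new model on the market

def is_presumed_gpai(training_flop: float) -> bool:
    """Does the training compute cross the GPAI presumption threshold?"""
    return training_flop >= GPAI_THRESHOLD_FLOP

def modification_creates_new_model(original_flop: float,
                                   modification_flop: float) -> bool:
    """Does a modification use more than one third of the compute
    spent training the original model?"""
    return modification_flop > DOWNSTREAM_FRACTION * original_flop

# Example: a model trained with 5e23 FLOP, then fine-tuned with 2e23 FLOP.
print(is_presumed_gpai(5e23))                      # True
print(modification_creates_new_model(5e23, 2e23))  # True (2e23 > ~1.67e23)
```

Note that crossing the FLOP threshold only creates a presumption; the model must also be capable of producing text, audio, images or videos, which a real assessment would have to check separately.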
-
The EU just said "no brakes" on AI regulation. Despite heavy pushback from tech giants like Apple, Meta, and Airbus, the EU pressed forward last week with its General-Purpose AI Code of Practice.

Here's what's coming:
→ General-purpose AI systems (think GPT, Gemini, Claude) need to comply by August 2, 2025.
→ High-risk systems (biometrics, hiring tools, critical infrastructure) must meet regulations by 2026.
→ Legacy and embedded tech systems will have to comply by 2027.

If you’re a Chief Data Officer, here’s what should be on your radar:
1. Data Governance & Risk Assessment: Clearly map your data flows, perform thorough risk assessments similar to those required under GDPR, and carefully document your decisions for audits.
2. Data Quality & Bias Mitigation: Ensure your data is high-quality, representative, and transparently sourced. Responsibly manage sensitive data to mitigate biases effectively.
3. Transparency & Accountability: Be ready to trace and explain AI-driven decisions. Maintain detailed logs and collaborate closely with legal and compliance teams to streamline processes.
4. Oversight & Ethical Frameworks: Implement human oversight for critical AI decisions, regularly review and test systems to catch issues early, and actively foster internal AI ethics education.

These new regulations won’t stop at Europe’s borders. Like GDPR, they're likely to set global benchmarks for responsible AI usage. We're entering a phase where embedding governance directly into how organizations innovate, experiment, and deploy data and AI technologies will be essential.
-
On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will impact how AI technologies are developed and used within the E.U., with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. This new regulatory landscape demands careful attention from U.S. companies that operate in the E.U. or work with E.U. partners. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage.

🔍 Comprehensive AI Audit: Begin with a thorough audit of your AI systems to identify those under the AI Act’s jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and ensuring you understand the regulatory requirements that apply.

🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to accurately classify each AI application to determine the necessary compliance measures, particularly for those deemed high-risk, which require more stringent controls.

📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used.

👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts. This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements.

🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success.

#AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation
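To make the four-tier classification described above concrete, here is a minimal sketch of how an AI inventory might be tagged by risk tier. The tier names come from the Act; the example systems, the tier assignments, and the summary of obligations are simplified illustrative assumptions, not legal advice.

```python
# Illustrative sketch of the AI Act's four risk tiers applied to a toy
# AI inventory. Tier names follow the Act; the example systems and the
# obligation summaries are simplified assumptions, not legal advice.

RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

# A toy inventory: each system tagged with the tier a compliance review assigned.
inventory = {
    "social-scoring engine": "unacceptable",  # banned outright under the Act
    "CV-screening tool": "high",              # employment use cases are high-risk
    "customer chatbot": "limited",            # transparency duties toward users
    "spam filter": "minimal",                 # no specific obligations
}

def obligations_for(tier: str) -> str:
    """Map a risk tier to the broad compliance posture it triggers."""
    return {
        "unacceptable": "prohibited - must not be placed on the EU market",
        "high": "strict controls: risk management, testing, logging, human oversight",
        "limited": "transparency obligations toward users",
        "minimal": "no specific obligations (voluntary codes encouraged)",
    }[tier]

for system, tier in inventory.items():
    print(f"{system}: {tier} -> {obligations_for(tier)}")
```

In practice the classification itself is the hard part, since it depends on the system's intended purpose and the Act's annexes rather than on a simple lookup; the point of the sketch is only that each inventoried system should end up with exactly one tier and a documented rationale.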
-
The UK government has just unveiled its response to the “AI Opportunities Action Plan”, and it’s brimming with ambition to position the UK as a global leader in artificial intelligence. 🌍💡 Here’s what caught my attention—and why this matters:

1️⃣ Supercharging AI Infrastructure. Imagine a 20x boost in sovereign compute capacity by 2030. That’s not just numbers; it’s a foundation for groundbreaking innovation. With new supercomputing facilities and “AI Growth Zones” (like the one in Culham), the UK is creating an ecosystem where ideas can thrive and scale faster than ever.

2️⃣ Building an AI-Ready Workforce. AI isn’t just about machines—it’s about people. The government is doubling down on “scholarships, fellowships, and diversity initiatives” to ensure that everyone, regardless of background, has the chance to shape the future of AI. This is a call to action for businesses: invest in your teams and embrace lifelong learning.

3️⃣ Unlocking Data Potential. Data is the fuel of AI, and the UK plans to launch a “National Data Library”, unlocking public sector data for innovation while safeguarding privacy. For startups, researchers, and enterprises, this is a treasure trove of opportunities waiting to be explored.

4️⃣ Safe, Responsible AI Development. With initiatives like the “AI Safety Institute”, the UK is taking a proactive stance on ensuring AI systems are safe, ethical, and aligned with human values. This isn’t just about regulation—it’s about trust. And trust is what will drive adoption at scale.

5️⃣ Scaling AI Adoption Across Sectors. From public services to private enterprise, the government is piloting scalable AI solutions that solve real-world problems. Think smarter healthcare, efficient public services, and stronger supply chains—all powered by AI.

💬 What does this mean? Whether you’re an entrepreneur, researcher, policymaker, or simply curious about AI, this action plan signals massive opportunities ahead. Collaboration will be key—between government, academia, and industry—to turn this vision into reality. 🌟 The UK is setting a high bar for what it means to lead in AI globally. What do you think about this plan?

#ArtificialIntelligence #Innovation #UKLeadership #AIPolicy #DigitalTransformation