EU AI Regulation Impact


  • View profile for Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,677 followers

The Irish Government has just announced plans to introduce the Regulation of Artificial Intelligence Bill in its Spring 2025 legislative programme, a pivotal piece of legislation aimed at giving full effect to the European Union’s Artificial Intelligence Act (EU Regulation 2024/1689). Even though the AI Act as a regulation has direct effect, this move is set to shape the national regulatory framework for AI governance in Ireland and establish national enforcement mechanisms in line with the EU’s approach. At the heart of the bill is the designation of Ireland’s National Competent Authorities: the entities that will be responsible for enforcing compliance with the AI Act. These authorities will oversee risk classification, conduct market surveillance, and impose penalties for violations. Given Ireland’s role as the EU base for major technology firms including Google, Anthropic, Meta, and TikTok, the effectiveness of its enforcement regime will be closely scrutinised across the EU and beyond. The Irish Government’s approach will be particularly significant due to the country’s track record in regulating the digital sector. Ireland’s Data Protection Commission (DPC) has wielded considerable influence over EU-wide enforcement of the GDPR, given the presence of multinational tech firms within the state. The DPC was designated as one of Ireland’s nine fundamental rights authorities under the AI Act in November 2024. The bill will include provisions for penalties, though details remain unspecified. Under the EU AI Act, non-compliance can result in fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher. For Ireland, the challenge will be ensuring its enforcement framework has sufficient resources and expertise to oversee AI systems deployed within its jurisdiction. Tech industry leaders and legal experts will be closely monitoring how Ireland structures its national framework.
The AI Act imposes strict obligations on high-risk AI applications, including those used in healthcare, banking, and recruitment. Companies will be required to maintain transparency, conduct impact assessments, and ensure that their AI systems do not lead to unlawful discrimination or harm. Ireland’s legislative initiative comes at a time of growing regulatory scrutiny over AI’s impact on society, innovation, and human rights. The AI Act represents the world’s most comprehensive attempt to regulate artificial intelligence, at a time when other jurisdictions, such as the USA, are moving in the opposite regulatory direction. The Regulation of Artificial Intelligence Bill is still in its early stages, at the “Heads in Preparation” point. In the Irish legislative process, the Heads of a Bill serve as a blueprint for the eventual legislation. As Ireland moves toward full implementation of the AI Act, the government’s decisions on AI oversight will have significant implications for businesses, consumers, and the broader EU regulatory landscape.
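The “whichever is higher” penalty ceiling quoted above is a simple calculation. A minimal sketch in Python, using only the figures mentioned in the post (EUR 35 million or 7% of global annual turnover); the function name and the example turnover figure are illustrative, not from the Act:

```python
def max_ai_act_fine(global_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for the most serious infringements,
    per the figures quoted above: EUR 35 million or 7% of global annual
    turnover, whichever is higher. (Function name is illustrative.)"""
    flat_cap = 35_000_000
    turnover_cap = global_turnover_eur * 7 / 100  # 7% of turnover
    return max(flat_cap, turnover_cap)

# For a firm with EUR 1 billion global turnover, the 7% figure
# (EUR 70 million) exceeds the flat EUR 35 million cap.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```

For smaller firms the flat EUR 35 million cap dominates; the percentage-based cap only bites once global turnover exceeds EUR 500 million.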

  • View profile for Andrew Ng

    DeepLearning.AI, AI Fund and AI Aspire

    2,463,045 followers

    Separate reports by the publicity firm Edelman and Pew Research (links in orig text, below) show that Americans, and more broadly large parts of Europe and the western world, do not trust AI and are not excited about it. Despite the AI community’s optimism about the tremendous benefits AI will bring, we should take this seriously and not dismiss it. The public’s concerns about AI can be a significant drag on progress, and we can do a lot to address them. According to Edelman’s survey, in the U.S., 49% of people reject the growing use of AI, and 17% embrace it. In China, 10% reject it and 54% embrace it. Pew’s data also shows many other nations much more enthusiastic than the U.S. about AI adoption. Positive sentiment toward AI is a huge national advantage. On the other hand, widespread distrust of AI means: - Individuals will be slow to adopt it. For example, Edelman’s data shows that, in the U.S., those who rarely use AI cite Trust (70%) more than lack of Motivation and Access (55%) or Intimidation by the technology (12%) as an issue. - Valuable projects that need societal support will be stymied. For example, local protests in Indiana brought down Google’s plan to build a data center there. Hampering construction of data centers will hurt AI’s growth. Communities do have concerns about data centers beyond the general dislike of AI; I will address this in a later letter. - Populist anger against AI raises the risk that laws will be passed that hamper AI development. To be clear, all of us working in AI should look carefully at both the benefits and harmful effects of AI (such as deepfakes polluting social media and biased or inaccurate AI outputs misleading users), speak truthfully about both benefits and harms, and work to ameliorate problems even as we work to grow the benefits. But hype about AI’s danger has done real damage to trust in our field. 
Much of this hype has come from leading AI companies that aim to make their technology seem extraordinarily powerful by, say, comparing it to nuclear weapons. Unfortunately, a significant fraction of the public has taken this seriously and thinks AI could bring about the end of the world. The AI community has to stop self-inflicting these wounds and work to win back society’s trust. Where do we go from here? First, to win people’s trust, we have a lot of work ahead to make sure AI broadly benefits everyone. “Higher productivity” is often viewed by general audiences as a codeword for “my boss will make more money,” or worse, layoffs. As amazing as ChatGPT is, we still have a lot of work to do to build applications that make an even bigger positive impact on people’s lives. I believe providing training to people will be a key piece of the puzzle. DeepLearning.AI will continue to lead the charge on AI training, but we will need more than this. [Truncated for length. Full text, with links: https://lnkd.in/gUgMDMGS ]

  • View profile for Montgomery Singman

    Managing Partner @ Radiance Strategic Solutions | xSony, xElectronic Arts, xCapcom, xAtari

    27,570 followers

On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will impact how AI technologies are developed and used within the E.U., with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. For U.S. companies that operate in the E.U. or work with E.U. partners, this new regulatory landscape demands careful attention. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage. 🔍 Comprehensive AI Audit: Begin by thoroughly auditing your AI systems to identify those under the AI Act’s jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and ensuring you understand the regulatory requirements that apply. 🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to accurately classify each AI application to determine the necessary compliance measures, particularly for those deemed high-risk, which require more stringent controls. 📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used. 👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts.
This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements. 🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success. #AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation 
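The audit-then-classify workflow described above can be sketched as a first-pass triage helper for an AI system inventory. This is an assumption-laden illustration, not legal advice: the four tier names come from the AI Act, but the example use-case mapping and the conservative default-to-high-risk rule are hypothetical choices for the sketch.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (testing, transparency, oversight)"
    LIMITED = "transparency duties"
    MINIMAL = "no additional obligations"

# Hypothetical example mapping for illustration only; real classification
# must follow the Act's annexes and a proper legal review.
EXAMPLE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """First-pass triage for an audit inventory. Unknown use cases
    default to HIGH pending legal review (a conservative assumption)."""
    return EXAMPLE_TIERS.get(use_case.strip().lower(), RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier until reviewed mirrors the post's advice: it is the compliance measures for high-risk systems that are most stringent, so erring upward keeps the audit safe-side.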

  • View profile for Valerio Velardo

    AI Music + Audio Consultant | Founder @ Transparent Audio | Creator of The Sound of AI | Professor of Gen AI Music @ MTG

    16,776 followers

I’ve seen commentators on LinkedIn thrilled by Trump’s decision to cut the executive order on safe AI on his first day in office. Most of the praise I saw came from Europeans who lauded the move as a boost for AI innovation, freed from the shackles of regulation. I get it: excessive regulation stifles innovation. Yes, the EU has over-regulated AI. Having spent more than a decade in the AI startup trenches, I know this first hand. But the world isn’t black and white. We aren’t faced with a binary choice between choking innovation with rules and having no rules at all. There’s a scale of grays to explore. We need to be careful what we wish for. A total lack of regulation in AI leads to serious consequences for individuals, businesses, and society. Here’s a possible nightmare scenario. A bad actor—say, an oligarch or a nation-state—launches a propaganda campaign to manipulate public opinion. They use video/audio deepfakes and AI bots to amplify their message. Without a legal shield like the AI Act requiring generative AI providers to verify content origins and authenticity, misinformation spreads unchecked, eroding public trust and ultimately threatening democracy. Instead of scrapping regulation altogether, we should improve what we have. I suggest we strike a balance: enabling innovation while keeping the harmful effects of AI—the ones we don’t want to see in our society—in check. What do you folks think? #AI #AIEthics #Trump #Innovation #AIAct #EU #AIRegulation

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,127 followers

This report provides the first comprehensive analysis of how the EU AI Act regulates AI agents, increasingly autonomous AI systems that can directly impact real-world environments. Our three primary findings are: 1. The AI Act imposes requirements on the general-purpose AI (GPAI) models underlying AI agents (Ch. V) and the agent systems themselves (Ch. III). We assume most agents rely on GPAI models with systemic risk (GPAISR). Accordingly, the applicability of various AI Act provisions depends on (a) whether agents proliferate systemic risks under Ch. V (Art. 55), and (b) whether they can be classified as high-risk systems under Ch. III. We find that (a) generally holds, requiring providers of GPAISRs to assess and mitigate systemic risks from AI agents. However, it is less clear whether AI agents will in all cases qualify as (b) high-risk AI systems, as this depends on the agent's specific use case. When built on GPAI models, AI agents should be considered high-risk GPAI systems, unless the GPAI model provider deliberately excluded high-risk uses from the intended purposes for which the model may be used. 2. Managing agent risks effectively requires governance along the entire value chain. The governance of AI agents illustrates the “many hands” problem, where accountability is obscured due to the unclear allocation of responsibility across a multi-stakeholder value chain. We show how requirements must be distributed along the value chain, accounting for the various asymmetries between actors, such as the superior resources and expertise of model providers and the context-specific information available to downstream system providers and deployers. In general, model providers must build the fundamental infrastructure, system providers must adapt these tools to their specific contexts, and deployers must adhere to and apply these rules during operation. 3.
The AI Act governs AI agents through four primary pillars: risk assessment, transparency tools, technical deployment controls, and human oversight. We derive these complementary pillars by conducting an integrative review of the AI governance literature and mapping the results onto the EU AI Act. Underlying these pillars, we identify 10 sub-measures for which we note specific requirements along the value chain, presenting an interdependent view of the obligations on GPAISR providers, system providers, and system deployers. By Amin Oueslati and Robin Staes-Polet at The Future Society. Read: https://lnkd.in/e6865zWq

  • View profile for Martyn Redstone

    Head of Responsible AI & Industry Engagement @ Warden AI | Ethical AI • AI Bias Audit • AI Policy • Workforce AI Literacy | UK • Europe • Middle East • Asia • ANZ • USA

    21,407 followers

    Yesterday, the European Commission released two proposals that will materially affect how HR and TA teams use AI and manage people data: The Digital Omnibus Regulation and the AI Act Simplification Amendment. 1. High-Risk AI Timeline Adjustments The fixed August 2026 enforcement date for high-risk AI no longer applies. Obligations will now begin once the Commission confirms supporting tools (standards, guidance) are available, followed by a six-month transition for HR-related high-risk systems. A new final deadline requires compliance no later than December 2027. This creates a more realistic adoption window for HR technology and recruitment AI. 2. Key GDPR Changes for HR The Digital Omnibus updates GDPR to support modern people analytics and AI use: • Clearer definition of personal data, reducing uncertainty when using aggregated or pseudonymised data. • Permission for residual special-category data in AI training under strict safeguards. • Confirmed allowance for biometric verification when controlled by the employee. • Harmonised DPIA requirements across the EU. • Data breach reporting extended to 96 hours, with a unified EU reporting portal. 3. Streamlined Data and AI Governance Several data laws are consolidated into a clearer Data Act, simplifying vendor oversight and data portability. The AI Act amendment also introduces more practical obligations, expanded simplifications for SMEs and small mid-caps, stronger EU-level oversight, and support for using sensitive data to detect or correct bias in hiring and workforce systems. What This Means for HR and TA: The proposals provide clearer rules, reduced administrative burden, a more achievable timeline for high-risk AI, and better support for fair and compliant AI in recruitment and workforce management. Both the Digital Omnibus and the AI Act amendment are Commission proposals and are not yet law. 
They now enter the EU’s Ordinary Legislative Procedure, where the European Parliament and the Council will review, amend and negotiate the texts before jointly adopting them. Once approved and published in the Official Journal, each Regulation will enter into force and begin applying on the dates specified in the final legislation. If you’d like a tailored breakdown for your organisation or HR tech stack, feel free to get in touch.

  • View profile for Barb Hyman

    Founder & CEO Sapia.ai. Building a fairer world through ethical AI

    22,635 followers

Big news out of the EU today and it matters for anyone building or buying AI in HR. Two major shifts: a simplification to GDPR, and a timing shake-up for the EU AI Act. 1. GDPR is finally catching up with AI reality. The proposal now says organisations may rely on “legitimate interests” as the legal basis for processing personal data for AI-related purposes, as long as they still meet all GDPR safeguards. Translated? This gives employers a far clearer pathway to use candidate data to train and improve AI models without jumping through unnecessary hoops. For anyone building data-driven, fair-by-design systems (like us at Sapia.ai), this is overdue alignment between regulation and how responsible AI actually works. 2. The EU AI Act deadline is shifting because the standards aren’t ready. The high-risk system obligations were meant to land in August 2026. The EU has now acknowledged that the relevant ISO/IEC standards are nowhere near on schedule. So the plan is changing: ➡️ Once the standards are finally published, organisations will get six months to comply ➡️ The extension is capped at December 2027 Not surprising. Building standards for safe, measurable AI is complex, and rushing it would only create chaos. 3. The AI reforms are being fast-tracked. Interestingly, the AI updates were submitted as a standalone package, separate from the broader digital omnibus. Meaning: 🔹 AI Act changes will move faster 🔹 GDPR simplification will take longer 🔹 Organisations will have to track two timelines, not one This is Brussels acknowledging that AI governance can’t wait for slow digital reform cycles. What’s next? Both packages now enter trilogue negotiations between the European Parliament and the Council. Expect months, not weeks, of back and forth before anything is final. For now: watch this space.

  • View profile for Stefan Oelrich

    President Pharmaceuticals @ Bayer AG | Member of the Board of Management

    31,802 followers

With a new pharmaceutical legislation in the making, Europe finds itself at a crossroads. This is a critical time for the innovation-driven pharmaceutical industry in Europe. Europe’s R&D investments have significantly fallen behind other regions in the world, most notably the United States and China. In today’s era of breakthrough innovation, the question is not if medical progress will happen, but rather where it will happen, given that the global competition for cutting-edge science and new investments is fierce. Since 2014, only 56% of new drug innovations have been approved in the EU, compared to 73% in the US. This means that a quarter of the new medicines approved in the US are not approved in the EU and are thus not available to European patients. With a new European Parliament now elected and an EU legislative framework for medicines currently undergoing its biggest revision in decades, we are at a crossroads. In one direction lies a continuation - or indeed a worsening - of this trend; in the other, a future-proof EU legislation which values, incentivizes, and rewards #innovation, benefiting #patients and ensuring the long-term #competitiveness of the European pharmaceutical industry. The revision of the EU legislative framework for medicines is an effort that we, as the pharmaceutical industry, fully support. Initiated to increase patient access to medicines and foster an environment conducive to R&D in Europe, the legislative proposal - reducing Regulatory Data Protection from eight to six years for example - unfortunately fell short of addressing the needs for a thriving innovation-based pharmaceutical industry in and for Europe. The amendments that we have seen more recently by the (former) European Parliament are an improvement, but more is needed and there is much we can learn from others.
Several governments around the world have made the life sciences a strategic priority, which results in venture capital funding for biotech, fast clearances to start clinical development, quicker approvals, and a market that is willing to pay for innovation. In Europe, by contrast, rather than considering innovation in the life sciences an investment, we often view it solely as a cost. With a new 5-year term for the European Parliament and the Commission ahead of us, the new cohort of decision makers have the opportunity (and responsibility) to (re)set the direction and shape the future of research, development, and manufacturing for decades to come. And with the right legislation and ecosystem in place, we believe that the potential of medical innovation is limitless. Together with Lars Fruergaard Jørgensen and David Loew, I remain committed to working with the new European Parliament, the EU Member States, and other stakeholders involved to change the trajectory of Europe for the better and strive for a more competitive, healthier, and stronger Europe.

  • View profile for Matt Brittin CBE

    Gap year student, part time athlete. Tech for good. Ex-President of Google EMEA.

    63,080 followers

As Mario Draghi’s report released today demonstrates, the EU is falling behind global rivals because of limited innovation. Since 2019, the EU has created over 100 pieces of digital regulation. Whether you’re a technology startup or a small retailer, regulatory complexity is a minefield. Developing, launching or just using technology is harder in Europe than elsewhere in the world. Of course, “anything goes” is not an option and rules are required - but the EU is holding itself back at a time when it could be thriving. Our research with Public First shows that generative AI alone could add €1.2 trillion to the European economy. Much of Google’s innovation is led from Europe. We work with talented European entrepreneurs, businesses and innovators every day and see first-hand the benefits that the single market could yield for them. But a new approach is needed if Europe is not to miss the moment. Here’s what needs to change: 1️⃣ Shift from regulatory growth to economic growth: Europe doesn’t just create a huge number of regulations related to digital society - the regulations it creates are often conflicting, untested and inconsistently implemented. The explosion of rules makes it almost impossible for Europe to create and nurture the next tech unicorns. Draghi is right that the EU now needs to focus on enabling innovation: promoting the use of digital technologies to innovate and drive through breakthrough advances. 2️⃣ Invest in R&D: To compete in AI, the EU needs to prioritise research and development, working with the private sector to incentivise it and make funding more accessible. The EU currently lags behind the US, Israel, South Korea, Japan, the UK and China on R&D investment. Without the right incentives to develop and roll out new technology, Europe is stifling its talent. 3️⃣ Build the right infrastructure: AI breakthroughs are only possible with the right computing technologies and data centres - plus the renewable energy to run them.
So the EU needs to allocate more funding towards financing such infrastructure, as well as incentivising and enabling the private sector to do the same. 4️⃣ Prioritise skills & education: People will need support to seize the benefits of AI in their work and life. A revitalised European Skills Agenda should put skills and education at the centre, while AI should be added to school curriculums. Google wants to help Europe seize the benefits of innovation. Over the last decade, we’ve worked hand in hand with Governments to build new technology responsibly; train over 13 million Europeans in digital skills; and support over €179 billion in economic activity across the EU. As a European, I’m proud of this work, but I know there’s much more to do. Read Draghi’s report here: https://lnkd.in/epBxtymw

  • View profile for Michał Choiński

    AI Research and Voice | Driving meaningful Change | IT Lead | Digital and Agile Transformation | Speaker | Trainer | DevOps ambassador

    11,925 followers

AI regulation isn’t settling, it’s reacting. And the reaction? Fragmented, global, and driven by public tension. Europe: The landmark AI Act is already under review. Why? Industry pushback. Now, the EU is signalling it may ease compliance and reduce red tape. United States: The proposed “AI Diffusion Rule” was pulled just before rollout. The focus has shifted from enforcement to diplomacy. China: Governance is tightening. The details remain unclear, but the intent is unmistakable: more control. It might seem like regulation is shaped only by politics, policy, and industry pressure. But now add the ethical and public concern layer. You don’t need expert analysis. Just read the headlines: →The New York Times is suing OpenAI over training data and copyright boundaries. →A GDPR complaint accuses ChatGPT of generating false, defamatory information. →A U.S. federal judge ordered OpenAI to preserve all ChatGPT outputs, marking a legal shift in how AI content is treated. Three regions. Three agendas. But one emerging pattern: → Public tension surfaces first, whether political, economic, or ethical. → Legal systems scramble to respond. → Governance becomes the tool to contain the risk. So what does this mean for leaders building with AI? If your strategy skips ethical alignment, regulation will catch you off guard. Ethics builds trust. And to navigate today’s grey areas and stay ready for shifting governance, you need to build with adaptability, documentation, and decision traceability in mind. Ethics is the why. Governance is the how. And both are becoming non-negotiable. 👇 How are you preparing for this dual front, ethical accountability and regulatory complexity? Sources in comments
