Challenges of AI Adoption

Explore top LinkedIn content from expert professionals.

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    241,791 followers

    If you want to build an AI strategy for your company, you first need to build a solid data infrastructure and enforce strict data hygiene. Getting your house in order is the foundation for delivering on any AI ambition.

    The MIT Technology Review — based on insights from 205 C-level executives and data leaders — lays it out clearly: most companies do not face an AI problem. They face challenges in data quality, infrastructure, and risk management. As a result, many firms are still stuck in pilots, not production. Changing that requires strong data foundations, scalable architectures, trusted partners, and a shift in how companies think about creating real value with AI. Pilots are easy, but scaling AI across the enterprise is hard.

    Here are the key takeaways: ⬇️

    1. 95% of companies are using AI — but 76% are stuck at just 1–3 use cases:
    ➜ The gap between ambition and execution is huge. Scaling AI across the full business will define competitive advantage over the next 24 months.

    2. Data quality and liquidity are the real bottlenecks:
    ➜ Without curated, accessible, and trusted data, no AI strategy can succeed — no matter how powerful the models are.

    3. Governance, security, and privacy are slowing AI deployment — and that is a good thing:
    ➜ 98% of executives say they would rather be safe than first. Trust, not speed, will win in the next AI wave.

    4. Specialized, business-specific AI use cases will drive the most value:
    ➜ Generic generative AI (chatbots, text generation) is table stakes. True differentiation will come from custom, domain-specific applications.

    5. Legacy systems are a major drag on AI ambitions:
    ➜ Firms sitting on fragmented, outdated infrastructure are finding that retrofitting AI into legacy systems is often more costly than building new foundations.

    6. Cost realities are hitting hard:
    ➜ From GPUs to energy bills, AI is not cheap — and mid-sized companies face the biggest barriers. Smart firms are building realistic ROI models that go beyond hype.

    Building a future-ready AI enterprise isn’t about chasing the next model release. It’s about solving the hard problems — data, infrastructure, governance, and ROI — today.

  • View profile for Lenny Rachitsky
    Lenny Rachitsky is an Influencer

    Deeply researched no-nonsense product, growth, and career advice

    359,206 followers

    My biggest takeaways from Chip Huyen:

    1. Most AI product problems aren’t AI problems. When companies think they have an AI performance issue, it’s usually a user experience problem, an organizational communication gap, or a data quality issue. One company thought their AI lead scoring system was broken, but the real issue was that the marketing team wasn’t asking the right questions to get useful data.

    2. Your best performers benefit most from AI tools. In a controlled experiment, the highest-performing engineers got the biggest productivity boost from AI coding assistants, not the lowest performers. Senior engineers who already knew how to solve problems used AI to work even faster, while low performers often just copied and pasted code they didn’t understand.

    3. How you prepare your data matters more than which database you choose. Companies see their biggest AI performance gains from better organizing and preparing their information—breaking content into right-sized chunks, adding summaries, converting content into question-and-answer format—rather than agonizing over which technical infrastructure to use.

    4. The biggest improvements to your AI product come from talking to users and understanding their feedback, not from adopting the latest models or staying glued to AI news. Many companies waste time debating which technology to use, when the real wins come from better user experience and data preparation.

    5. Fine-tuning should be your last resort. Before investing in fine-tuning a model, try simpler solutions first: improve your prompts, add basic post-processing scripts, or fix your data pipeline. One company caught 90% of its model’s mistakes with a simple script (see the sketch below). Fine-tuning creates ongoing maintenance headaches and should only be used when everything else has been maxed out.

    6. You don’t need to be perfect to win. Many successful companies choose “good enough” over perfect when implementing AI systems. They calculate whether investing two engineers to improve accuracy from 80% to 85% is better than using those same engineers to launch an entirely new feature. Often, the new feature provides more value.

    7. AI productivity is nearly impossible to measure. Companies invest heavily in AI coding tools but can’t clearly prove they work. When forced to choose between expensive AI subscriptions for their team or hiring one additional person, many managers choose the person, not necessarily because AI doesn’t help, but because headcount feels more tangible.

    8. Many people don’t know what to build despite having powerful tools. Even with AI tools that can build almost anything, many employees face an “idea crisis”: they simply don’t know what to create. The best approach: spend a week noticing what frustrates you in your daily work, then build small tools to solve those specific pain points.
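    To make point 5 concrete, here is a minimal sketch of the kind of post-processing guard Huyen describes: a few lines of validation that catch malformed model output before anyone reaches for fine-tuning. The JSON schema (a response with a `score` field between 0 and 1) and the function name are invented for illustration, not taken from the talk.

```python
import json
import re

def validate_output(raw: str) -> dict | None:
    """Hypothetical post-processing guard for an LLM that should emit JSON.

    Catches common failure modes (markdown fences, trailing prose, bad
    field types) so they get fixed in the pipeline, not blamed on the model.
    """
    # Strip markdown code fences the model sometimes wraps around JSON.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    try:
        data = json.loads(cleaned)
    except json.JSONDecodeError:
        return None  # route to a retry or fallback instead of fine-tuning
    # Enforce the expected (invented) schema: a score in [0, 1].
    score = data.get("score")
    if not isinstance(score, (int, float)) or not 0 <= score <= 1:
        return None
    return data

# A fenced but otherwise valid response passes; free-form prose is rejected.
print(validate_output('```json\n{"score": 0.82}\n```'))   # {'score': 0.82}
print(validate_output("Sure! The lead looks promising."))  # None
```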

  • View profile for Yamini Rangan
    Yamini Rangan is an Influencer
    170,429 followers

    Last week, a customer said something that stopped me in my tracks: “Our data is what makes us unique. If we share it with an AI model, it may play against us.”

    This customer recognizes the transformative power of AI. They understand that their data holds the key to unlocking that potential. But they also see risks alongside the opportunities—and those risks can’t be ignored. The truth is, technology is advancing faster than many businesses feel ready to adopt it. Bridging that gap between innovation and trust will be critical for unlocking AI’s full potential.

    So, how do we do that? It comes down to understanding, acknowledging, and addressing the barriers to AI adoption facing SMBs today:

    1. Inflated expectations
    Companies are promised that AI will revolutionize their business. But when they adopt new AI tools, the reality falls short. Many use cases feel novel, not necessary. And that leads to low repeat usage and high skepticism. For scaling companies with limited resources and big ambitions, AI needs to deliver real value – not just hype.

    2. Complex setups
    Many AI solutions are too complex, requiring armies of consultants to build and train custom tools. That might be OK if you’re a large enterprise, but for everyone else it’s a barrier to getting started, let alone driving adoption. SMBs need AI that works out of the box and integrates seamlessly into the flow of work – from the start.

    3. Data privacy concerns
    Remember the quote I shared earlier? SMBs worry their proprietary data could be exposed and even used against them by competitors. Sharing data with AI tools feels too risky (especially tools that rely on third-party platforms). And that’s a barrier to usage. AI adoption starts with trust, and SMBs need absolute confidence that their data is secure – no exceptions.

    If 2024 was the year when SMBs saw AI’s potential from afar, 2025 will be the year when they unlock that potential for themselves. That starts by tackling barriers to AI adoption with products that provide immediate value, not inflated hype. Products that offer simplicity, not complexity (or consultants!). Products with security that’s rigorous, not risky. That’s what we’re building at HubSpot, and I’m excited to see what scaling companies do with the full potential of AI at their fingertips this year!

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    719,465 followers

    The real challenge in AI today isn’t just building an agent—it’s scaling it reliably in production. An AI agent that works in a demo often breaks when handling large, real-world workloads. Why? Because scaling requires a layered architecture with multiple interdependent components. Here’s a breakdown of the 8 essential building blocks for scalable AI agents:

    1. Agentic Frameworks
    Frameworks like LangGraph (scalable task graphs), CrewAI (role-based agents), and AutoGen (multi-agent workflows) provide the backbone for orchestrating complex tasks. ADK and LlamaIndex help stitch together knowledge and actions.

    2. Tool Integration
    Agents don’t operate in isolation. They must plug into the real world:
    • Third-party APIs for search, code, databases.
    • OpenAI functions and tool calling for structured execution (see the sketch below).
    • MCP (Model Context Protocol) for chaining tools consistently.

    3. Memory Systems
    Memory is what turns a chatbot into an evolving agent:
    • Short-term memory: Zep, MemGPT.
    • Long-term memory: vector DBs (Pinecone, Weaviate), Letta.
    • Hybrid memory: combined recall + contextual reasoning.
    This ensures agents “remember” past interactions while scaling across sessions.

    4. Reasoning Frameworks
    Raw LLM outputs aren’t enough. Reasoning structures enable planning and self-correction:
    • ReAct (reason + act)
    • Reflexion (self-feedback)
    • Plan-and-Solve / Tree of Thoughts
    These frameworks help agents adapt to dynamic tasks instead of producing static responses.

    5. Knowledge Base
    Scalable agents need a grounding knowledge system:
    • Vector DBs: Pinecone, Weaviate.
    • Knowledge graphs: Neo4j.
    • Hybrid search models that blend semantic retrieval with structured reasoning.

    6. Execution Engine
    This is the “operations layer” of an agent:
    • Task control, retries, async ops.
    • Latency optimization and parallel execution.
    • Scaling and monitoring with platforms like Helicone.

    7. Monitoring & Governance
    No enterprise system is complete without observability:
    • Langfuse and Helicone for token tracking, error monitoring, and usage analytics.
    • Permissions, filters, and compliance to meet enterprise-grade requirements.

    8. Deployment & Interfaces
    Agents must meet users where they work:
    • Interfaces: chat UI, Slack, dashboards.
    • Cloud-native deployment: Docker + Kubernetes for resilience and scalability.

    Takeaway: Scaling AI agents is not about picking the “best LLM.” It’s about assembling the right stack of frameworks, memory, governance, and deployment pipelines—each acting as a building block in a larger system. As enterprises adopt agentic AI, the winners will be those who build with scalability in mind from day one.

    Question for you: When you think about scaling AI agents in your org, which area feels like the hardest gap—Memory Systems, Governance, or Execution Engines?
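    As a concrete illustration of block 2, here is a minimal, hypothetical tool-calling loop using the OpenAI Python SDK. The search_docs tool, its stub implementation, and the model name are placeholders for whatever APIs and vector store your stack actually uses; this sketches the pattern only, with none of the retries, async execution, or monitoring the later blocks add.

```python
import json
from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()

# One hypothetical tool; real agents register many (search, code, databases).
tools = [{
    "type": "function",
    "function": {
        "name": "search_docs",
        "description": "Search the internal knowledge base.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def search_docs(query: str) -> str:
    # Stub: in production this would hit a vector DB (Pinecone, Weaviate, ...).
    return f"[top results for: {query}]"

messages = [{"role": "user", "content": "What does our refund policy say?"}]

# Core agent loop: let the model request tools until it gives a final answer.
while True:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your stack runs
        messages=messages,
        tools=tools,
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:       # no tool requested -> final answer
        print(msg.content)
        break
    messages.append(msg)         # keep the tool request in the conversation
    for call in msg.tool_calls:  # run each requested tool, return its result
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": search_docs(**args),
        })
```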

  • View profile for Graham Walker, MD
    Graham Walker, MD is an Influencer

    Healthcare AI — MDCalc & Offcall Founder — ER Doctor @ TPMG (views are my own, not employers’)

    67,101 followers

    Sitting in Epic’s massive UGM auditorium, the 100+ new AI features didn’t feel exciting. They felt overwhelming. Because it’s clearer than ever: AI is on an exponential curve, while humans and healthcare orgs are stuck on a flat line, barely nudging the slope. The gap isn’t a technical one — it’s change management. And until someone closes it, AI will keep sprinting ahead.

    I’ve rolled out tech to physicians for a decade. The hardest part is never the software; it’s the change management. Especially in healthcare, where you can’t just close the office for an "AI inservice." Doctors are already sprinting — 100 patients a week, fires everywhere — and just when they finally get comfortable with one workflow, someone moves the button they’re supposed to click. The most common complaint I hear? "Hey, you moved my cheese!"

    Which brings me to the paradox doctors live every day:

    👩⚕️ There’s no time for doctors to train or learn — because that means lost revenue. Everyone wants max efficiency out of us, but also zero errors.

    📱 Tech companies brag about “hallucination-free copilots” but won’t take responsibility when they’re wrong. The fine print: the clinician is always liable.

    👨⚕️ Doctors are left carrying the load: supposed to instantly learn, perfectly apply, and reconcile both demands — while still doing the actual job.

    And if you think AI will just replace doctors? All you’ve done is shove the change management onto patients. Good luck with that.

    Need proof this isn’t just doctors? LinkedIn News says 41% of professionals report AI’s pace is taking a toll on their well-being — and more than half say learning AI feels like a second job.

    The ultimate winners here are those who can educate and do change management the best.

  • View profile for Christopher Penn
    Christopher Penn is an Influencer

    Co-Founder & Chief Data Scientist at TrustInsights.ai, AI Expert, AI Keynote Speaker

    47,166 followers

    AI "detectors" are a joke. Here's a screenshot of an AI detector (self-proclaimed "the most advanced AI detector on the market") saying that 97% of this document was generated by AI. 97%. That's an incredibly confident assessment. It's also completely wrong. The text? That's the US Declaration of Independence, written 246 years before ChatGPT launched.

    Now, why did this happen? Two reasons.

    First, AI detectors use a relatively small number of metrics, like perplexity and burstiness, to assess documents. Documents that have little variation in vocabulary and relatively similar line lengths will get flagged, and the Declaration of Independence meets both criteria. (A rough sketch of these metrics follows below.)

    Second, AI detectors also use AI, typically smaller, less costly models. Those models are trained on the same data as their bigger cousins. And that means they've seen documents like the Declaration of Independence as valid training data... which they then probably look for. It's the AI equivalent of sneaking a peek at the answers on the exam: they've seen this data before and they know it goes into AI models.

    The key takeaway is this: AI detectors are worthless. Show this example when someone loudly proclaims that they've found AI-generated anything. If you're a parent challenging a school's use of these garbage tools, use this example to contest the school's incorrect assessment.

    Are there giveaways that something's been generated by AI? Yes, but fewer and fewer every day as models advance. What's the solution if we want to know whether a piece of content was generated by AI? The onus is on the creator to show the lineage and provenance of the content - the content equivalent of a DOP certification.

    #AI #GenerativeAI #GenAI #ChatGPT #ArtificialIntelligence #LargeLanguageModels #MachineLearning #IntelligenceRevolution
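    For intuition on the first point, here is a rough sketch of the two metrics Penn names, computed with GPT-2 via the Hugging Face transformers library. Real detectors are more elaborate, and "burstiness" here is one common informal reading (variance of per-sentence perplexity), but the idea is the same: predictable text with little sentence-to-sentence variation gets flagged, and a text the model has effectively memorized scores as extremely predictable.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; lower = more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def burstiness(sentences: list[str]) -> float:
    """Variance of per-sentence perplexity; human prose tends to vary more."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((s - mean) ** 2 for s in scores) / len(scores)

# Memorized training text looks highly 'predictable' to the scoring model --
# one reason the Declaration of Independence trips these detectors.
print(perplexity("We hold these truths to be self-evident, "
                 "that all men are created equal."))
```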

  • View profile for Andrew Ng
    Andrew Ng is an Influencer

    DeepLearning.AI, AI Fund and AI Aspire

    2,463,756 followers

    Separate reports by the publicity firm Edelman and Pew Research (links in the original text, below) show that Americans, and more broadly large parts of Europe and the Western world, do not trust AI and are not excited about it. Despite the AI community’s optimism about the tremendous benefits AI will bring, we should take this seriously and not dismiss it. The public’s concerns about AI can be a significant drag on progress, and we can do a lot to address them.

    According to Edelman’s survey, in the U.S., 49% of people reject the growing use of AI, and 17% embrace it. In China, 10% reject it and 54% embrace it. Pew’s data also shows many other nations much more enthusiastic than the U.S. about AI adoption. Positive sentiment toward AI is a huge national advantage. On the other hand, widespread distrust of AI means:

    - Individuals will be slow to adopt it. For example, Edelman’s data shows that, in the U.S., those who rarely use AI cite trust (70%) more than lack of motivation and access (55%) or intimidation by the technology (12%) as an issue.

    - Valuable projects that need societal support will be stymied. For example, local protests in Indiana brought down Google’s plan to build a data center there. Hampering construction of data centers will hurt AI’s growth. Communities do have concerns about data centers beyond the general dislike of AI; I will address this in a later letter.

    - Populist anger against AI raises the risk that laws will be passed that hamper AI development.

    To be clear, all of us working in AI should look carefully at both the benefits and harmful effects of AI (such as deepfakes polluting social media and biased or inaccurate AI outputs misleading users), speak truthfully about both benefits and harms, and work to ameliorate problems even as we work to grow the benefits. But hype about AI’s danger has done real damage to trust in our field. Much of this hype has come from leading AI companies that aim to make their technology seem extraordinarily powerful by, say, comparing it to nuclear weapons. Unfortunately, a significant fraction of the public has taken this seriously and thinks AI could bring about the end of the world. The AI community has to stop self-inflicting these wounds and work to win back society’s trust.

    Where do we go from here? First, to win people’s trust, we have a lot of work ahead to make sure AI broadly benefits everyone. “Higher productivity” is often viewed by general audiences as a codeword for “my boss will make more money,” or worse, layoffs. As amazing as ChatGPT is, we still have a lot of work to do to build applications that make an even bigger positive impact on people’s lives. I believe providing training to people will be a key piece of the puzzle. DeepLearning.AI will continue to lead the charge on AI training, but we will need more than this.

    [Truncated for length. Full text, with links: https://lnkd.in/gUgMDMGS]

  • View profile for Tom Chavez
    Tom Chavez is an Influencer

    Co-Founder, super{set}

    18,655 followers

    We shut down Kapstan. There’s been chatter, so let me say it plainly: we worked hard with a superb, committed team and built a strong product. But it didn’t work. Entrepreneurship is packed with highlight reels. What’s missing are the postmortems. So here’s mine.

    Kapstan was designed to solve a problem we felt again and again at super{set}: infrastructure chaos, complexity, and redundancy. Every time we launched a new company, we pretended we’d never stood up a cloud-based company, rebuilt clusters from scratch, and chased the same five DevOps experts to help us build them. If we were feeling this pain, others had to be, too. So we built what we wished we had: an opinionated infra orchestrator that worked out of the box. Simpler deployments. A path to simplicity, speed, and cost reduction in cloud ops.

    The problem was real. The product was solid. And we still failed.

    The outcome doesn’t reflect a problem with product architecture or customer need. What we underestimated was the psychological architecture inside the companies we were selling to. The economic buyer — often a CEO — didn’t understand infrastructure well enough to question the status quo, and didn’t want to look impetuous by shaking it up. The user — typically a DevOps lead — saw our solution as encroaching on their patch. It wasn’t, but it *felt* like it was. When someone’s job identity is tied to building and tuning the system themselves, plug-and-play orchestration can feel like a power grab. So even with clear ROI and a best-in-class product, the most common reaction was: “We’ve got it handled.” And the places where we were able to convince the buyer that they didn’t have it handled? Sales cycles that turned into long, soul-sucking, pride-swallowing sieges.

    The surge in AI made a hard thing even harder. Without a path to AI-ifying our offering beyond a few product marketing flourishes, it was clear that future investors chasing AI were unlikely to catch what we were pitching.

    We assumed real pain would lead to real adoption. But adoption isn’t just logical. It’s political. It’s emotional. You can be right about the problem and still lose. You can build a better mousetrap and still get left outside the gate.

    So what do you do? You learn, and you lace up again tomorrow.

  • View profile for Sol Rashidi, MBA
    Sol Rashidi, MBA is an Influencer
    112,246 followers

    Before you say yes to an AI project, ask these questions.

    I was recently advising a company that had paid nearly a million dollars to a major consulting firm for an AI strategy. They came back with 12 use cases. Beautiful deck. Impressive ROI projections. The executives were excited. Then they brought me in to help with execution.

    After running each use case through my Complexity vs. Criticality framework, I had to deliver some hard news: "Nine of these twelve? You can't even deploy them."

    They were stunned. "Why not?"

    "Because you don't have the infrastructure. The data isn't accessible. The people you'd need are already stretched across five other projects. You could POC all of these beautifully, but you'd never push them into production."

    This happens all the time. Companies pay for strategy. They get a beautiful roadmap. And then they discover that no one asked the hard questions about whether it was actually executable. So before you greenlight your next AI initiative, ask:

    1. Do we have the infrastructure to support this in production, not just in a sandbox?
    2. Is the data accessible, clean, and governed? Or are we going to spend six months just getting permissions from eight different application owners?
    3. Are the right people available? Or are we putting this on the same overworked team that's already behind on three other priorities?
    4. Can we realistically deploy this? Not just demo it. Deploy it.

    If you can't answer yes to all four, you're signing up for Perpetual POC Purgatory. I always say: strategy without proper execution is just hallucination. Anyone can dream. Anyone can put impressive numbers on a slide. The hard part is following through. Ask the hard questions upfront. Your future self will thank you.

    What questions do you ask before starting an AI project? 👇

    #AI #AIStrategy #Leadership #DataStrategy #Execution #ProjectManagement

  • View profile for Dr. Barry Scannell
    Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of the Board of Irish Museum of Modern Art | PhD in AI & Copyright

    59,686 followers

    In a MAJOR ruling for European copyright law, the Munich Regional Court has sided with Germany’s music rights society GEMA against OpenAI, finding that the company’s ChatGPT model unlawfully used copyrighted song lyrics in its training and responses. The decision, issued this morning, marks the first major European court judgment holding an AI company liable for using protected works without a licence.

    I got into AI through being Director of Legal Affairs and Regulatory Compliance in IMRO, the Irish counterpart of GEMA - and I know the people in GEMA - so this is very interesting to me.

    The case centred on GEMA’s allegation that OpenAI trained ChatGPT on its repertoire of German song lyrics, allowing the chatbot to reproduce works by artists such as Helene Fischer and Herbert Grönemeyer. The court agreed, concluding that the model’s ability to reproduce lyrics word for word demonstrated that the works had been used in training. It ruled that OpenAI is liable for copyright infringement and prohibited ChatGPT from reproducing lyrics from GEMA-represented artists unless a licence is obtained.

    The court also held that the European Union’s Text and Data Mining exceptions cannot shield generative AI systems that “memorise” and reproduce copyrighted material. This reasoning undermines one of the primary legal defences AI developers have relied upon in Europe. While damages will be determined in a separate proceeding, the court’s finding of liability alone sets a powerful precedent. OpenAI has announced plans to appeal.

    The 42nd Civil Chamber of the Munich Regional Court had indicated its position in September, when it observed that the model’s outputs could not be explained without training on copyrighted material. The final judgment confirmed that assessment.

    For the wider AI sector, the ruling suggests that AI companies operating in the European Union may need explicit licences for any copyrighted content used in model training, or risk litigation. The decision also has regulatory implications: it aligns with growing momentum within the EU to enforce transparency and rights-holder protections under the AI Act and the Copyright in the Digital Single Market Directive.

    The GEMA v OpenAI ruling diverges sharply from Bartz v Anthropic in the United States. In Bartz, Judge Alsup found that AI training on copyrighted material could qualify as fair use, meaning no licence is required when the use is deemed transformative and non-substitutive. He viewed training as an analytical process that teaches the model general patterns rather than reproducing expression. The Munich court took the opposite view, holding that using protected works in AI training without permission constitutes reproduction requiring a licence. This illustrates the growing divide between the U.S. model, where fair use can exempt AI developers from licensing duties, and the European approach, which treats copyright as an enforceable economic right demanding prior authorisation.
