AI In Professional Roles

Explore top LinkedIn content from expert professionals.

  • View profile for Ethan Mollick
    Ethan Mollick is an Influencer
    388,980 followers

    In our new paper we ran an experiment at Procter and Gamble with 776 experienced professionals solving real business problems. We found that individuals randomly assigned to use AI did as well as a team of two without AI. And AI-augmented teams produced more exceptional solutions. The teams using AI were happier as well.

    Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work and commercial people with AI had more technical solutions. The standard model of "AI as productivity tool" may be too limiting. Today’s AI can function as a kind of teammate, offering better performance, expertise sharing, and even positive emotional experiences.

    This was a massive team effort with work led by Fabrizio Dell'Acqua, Charles Ayoubi, and Karim Lakhani along with Hila Lifshitz, Raffaella Sadun, Lilach M., me and our partners at P&G: Yi Han, Jeff Goldman, Hari Nair and Stewart Taub.

    Substack about the work here: https://lnkd.in/ehJr8CxM
    Paper: https://lnkd.in/e-ZGZmW9

  • View profile for Bertalan Meskó, MD, PhD
    Bertalan Meskó, MD, PhD is an Influencer

    The Medical Futurist, Author of Your Map to the Future, Global Keynote Speaker, and Futurist Researcher

    366,450 followers

    BREAKING! The FDA just released this draft guidance, titled Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, that aims to provide industry and FDA staff with a Total Product Life Cycle (TPLC) approach for developing, validating, and maintaining AI-enabled medical devices. Even in draft form, the guidance is important because it gives more detailed, AI-specific instructions on what regulators expect in marketing submissions and on how developers can control AI bias.

    What’s new in it?
    1) It requests clear explanations of how and why AI is used within the device.
    2) It requires sponsors to provide adequate instructions, warnings, and limitations so that users understand the model’s outputs and scope (e.g., whether further tests or clinical judgment are needed).
    3) It encourages sponsors to follow standard risk-management procedures and stresses that misunderstanding or incorrect interpretation of the AI’s output is a major risk factor.
    4) It recommends analyzing performance across subgroups to detect potential AI bias (e.g., different performance in underrepresented demographics).
    5) It recommends robust testing (e.g., sensitivity, specificity, AUC, PPV/NPV) on datasets that match the intended clinical conditions.
    6) It recognizes that AI performance may drift (e.g., as clinical practice changes), so sponsors are advised to maintain ongoing monitoring, identify performance deterioration, and enact timely mitigations.
    7) It discusses AI-specific security threats (e.g., data poisoning, model inversion/stealing, adversarial inputs) and encourages sponsors to adopt threat modeling and testing (fuzz testing, penetration testing).
    8) It proposes that public-facing FDA summaries (e.g., 510(k) Summaries, De Novo decision summaries) describe the AI to foster user trust and better understanding of the model’s capabilities and limits.
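
    Items 4 and 5 above call for subgroup analysis using standard diagnostic metrics. As a rough illustration only (not code from the FDA guidance, and with hypothetical column names "subgroup", "y_true", "y_pred"), the kind of per-subgroup check a sponsor might run could look like this:

    ```python
    import pandas as pd

    def subgroup_metrics(df: pd.DataFrame) -> pd.DataFrame:
        """Compute sensitivity, specificity, PPV and NPV for each subgroup so
        performance gaps across demographics become visible. Column names are
        hypothetical, not prescribed by the guidance."""
        rows = []
        for name, g in df.groupby("subgroup"):
            tp = int(((g["y_true"] == 1) & (g["y_pred"] == 1)).sum())
            tn = int(((g["y_true"] == 0) & (g["y_pred"] == 0)).sum())
            fp = int(((g["y_true"] == 0) & (g["y_pred"] == 1)).sum())
            fn = int(((g["y_true"] == 1) & (g["y_pred"] == 0)).sum())
            rows.append({
                "subgroup": name,
                "n": len(g),
                "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
                "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
                "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
                "npv": tn / (tn + fn) if (tn + fn) else float("nan"),
            })
        return pd.DataFrame(rows)
    ```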

  • View profile for Tala Fakhouri

    Vice President, Regulatory Consulting: AI & Digital Policy, Real-World Research

    5,806 followers

    📢 FDA Issues First Draft Guidance for AI in Drug Development

    US FDA has released its first draft guidance addressing the use of artificial intelligence (AI) to support regulatory decision-making for drug and biological product safety, effectiveness, and quality. This guidance marks a critical step in providing a risk-based framework for assessing the credibility of AI models in specific contexts of use (COU). It underscores the FDA’s commitment to fostering innovation while maintaining the highest standards of safety, effectiveness, and regulatory rigor.

    💡 Key Highlights:
    (1) A Risk-Based Framework: Provides a framework for assessing AI model credibility that starts with defining the question, decision, or concern being addressed by the AI model, the COU, and AI model risk. Assessing AI model risk is important because the credibility assessment activities used to establish the credibility of AI model outputs should be commensurate with AI model risk and tailored to the specific COU.
    (2) Early Engagement: Encourages sponsors and other interested parties (e.g., tech and biotech companies and AI tool developers) to engage with the FDA early in the development process to help address the appropriateness of AI use and for timely identification of challenges that may be associated with AI use in specific COUs.
    (3) Experience-Driven Framework: Builds on the FDA’s substantial experience with reviewing over 500 regulatory submissions with AI components since 2016.
    (4) External Party Input: Informed by feedback from public workshops, industry, and academic experts, and by the over 800 comments received from over 65 organizations on the 2023 AI in drug development discussion paper.

    📰 The FDA is seeking public comments on this guidance within 90 days. Sponsors and other interested parties are highly encouraged to provide feedback. Read the full draft guidance and submit your comments: https://lnkd.in/ef99X8ZF

    🙏 Special thanks to OMP’s Marsha S., Janice Maniwang, PharmD, MBA, RAC, Mike Mayrosh, Phil Budashewitz, and Cecilia Almeida, and to many other colleagues across CDER, CBER, CDRH, CVM, OCE, and OII for their critical technical input and for ensuring alignment with CDRH guidances on AI, where appropriate.

    #AI #DrugDevelopment #FDA #Innovation #ArtificialIntelligence

  • View profile for Clare Kitching

    Transform your AI & data ambition into action | xQuantumBlack, xMcKinsey | Global top 100 Innovators in Data & Analytics | AI & data strategy, governance and capability building

    65,318 followers

    46% of employees are using AI tools their company never approved. And 32% are not telling anyone.

    That’s the reality of AI adoption in most organisations right now. Personal AI tools are cheap or free and people are experimenting. Unfortunately, governance hasn’t caught up.

    The instinctive response is to write a policy, maybe set up a committee. But a policy without enforcement is just a suggestion. And most organisations are still at the suggestion stage. Some surveys suggest only 25% of organisations have a fully implemented AI governance program.

    One way to think about it is that AI guardrails work at four levels. And you need to consider all four:

    1️⃣ Governance
    Set risk appetite and accountability. 55% of organisations now have an AI oversight body. But only 27% have formally added AI to their board committee charters.

    2️⃣ Operating model
    This is where strategy turns into behaviour. If a third of your team is hiding their AI use, the policy either doesn’t exist or nobody believes in it.

    3️⃣ Process
    This is where safeguards live inside workflows with human reviews, approvals and monitoring. Most organisations underestimate this layer. Yet it is where many real risks appear. This layer is the difference between hoping things go well and actually controlling outcomes.

    4️⃣ System
    Technical safeguards built directly into the tools. Most enterprise platforms now offer these out of the box. These controls run every time AI is used.

    Organisations are usually great at layers 1 & 2, but you can’t stop there. Start planning how to build all four layers. Because governance sets the rules. But systems and processes enforce them.

    Think of it like road safety: Signs help. But guardrails are what stop the crash.

    Where do you think most organisations are underinvesting right now?

    ♻️ Repost to help someone develop their AI governance.
    🔔 Follow Clare Kitching for insights on unlocking value with data & AI.
    💎 Get more from me with my free newsletter here: https://lnkd.in/ghBtk6jR
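
    To make layers 3 and 4 concrete, here is a minimal, hypothetical sketch (not from the post or any specific platform) of a system-level check that runs on every AI output, followed by a process-level human approval gate; the function names and blocked-terms list are illustrative assumptions only:

    ```python
    # Toy illustration of two guardrail layers; names and terms are hypothetical.
    BLOCKED_TERMS = {"ssn", "passport number", "internal use only"}

    def system_layer_check(draft: str) -> bool:
        """Layer 4: technical safeguard that runs on every AI output."""
        lowered = draft.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def process_layer_gate(draft: str, approver: str) -> dict:
        """Layer 3: workflow safeguard routing the draft to a named human reviewer."""
        return {"draft": draft, "status": "pending_review", "approver": approver}

    def publish_ai_output(draft: str, approver: str) -> dict:
        if not system_layer_check(draft):
            return {"status": "blocked_by_system_guardrail"}
        return process_layer_gate(draft, approver)
    ```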

  • View profile for Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    68,161 followers

    "The report outlines four key regulatory approaches to AI governance—industry self-governance, soft law, regulatory sandboxes, and hard law—each offering distinct advantages and challenges: 1. Industry Self-Governance • Strengths: Can directly impact AI practices if integrated into business models and company cultures. • Limitations: Non-binding; not appropriate for sectoral use-cases with particularly high risks – e.g. financial sector or healthcare; risk of ‘ethics-washing’. 2. Soft Law • Strengths: Soft law includes nonbinding international agreements, national AI principles, and technical standards, providing adaptable frameworks that promote responsible innovation. Early governance efforts by intergovernmental bodies have set important precedents. • Limitations: While soft law encourages innovation, it focuses on high-level principles rather than binding rights and responsibilities. 3. Hard Law • Strengths: Binding legal frameworks provide clear, enforceable guidelines that ensure AI stakeholders comply with established standards and regulations. • Limitations: Given the rapid pace of AI development, hard laws risk becoming outdated and can be extremely resource-intensive to implement. 4. Regulatory Sandboxes • Strengths: These controlled environments allow for real-world experimentation with AI technologies, supporting innovation and providing valuable insights without exposing the public to unchecked risks. • Limitations: Sandboxes can be resource-intensive and have limited scalability, making them less feasible for wide-scale governance across diverse sectors." Read/download: https://lnkd.in/etwyUaUK

  • View profile for Jason M. Lemkin
    Jason M. Lemkin is an Influencer

    SaaStr AI 2026 is May 12-14 in SF Bay!! See You There!!

    306,511 followers

    We sent 4,495 AI SDR emails in 2 weeks and achieved the #1 response rate on our platform. But here's what nobody tells you about making AI SDRs actually work...

    The Metrics:
    ✅ 4,495 personalized messages sent in 14 days
    ✅ Highest response rate on our entire platform
    ✅ $700,000 of pipeline opportunities opened
    ✅ Meetings booked daily (literally got one this morning)
    ✅ Outperformed all our historical human SDR averages — mostly
    ✅ Better results than some of our human AEs

    The Reality Check First
    We had unfair advantages. SaaStr has been around since 2012, we've sold $100,000,000 in sponsorships, and people know our brand. We targeted our existing database—website visitors, past attendees, lapsed accounts—not cold lists. We spent 2 weeks doing basically nothing else: 90 minutes every morning, 1 hour every evening training our AI, plus real-time responses throughout the day.

    👉 What Actually Works:

    1️⃣ Your AI has to add real value, not just volume
    There's no way we could send 4,495 good emails ourselves manually in two weeks. The key is each one has to be at the level we would write ourselves.
    Bad: "Hey [NAME], saw you visited our website"
    Good: "Congrats on your new VP role at Oracle. Since you attended SaaStr London last year, thought you'd want to know about our 2025 VC track with speakers from a16z and Sequoia..."

    2️⃣ Your data is messier than you think
    We trained our AI on 20+ million words of SaaStr content, but still found:
    - Opportunities never logged in Salesforce
    - Missing context from AEs who never used the system
    - Customer relationships that existed nowhere in our CRM
    We literally spend time every day finding things that were missing and manually adding them to the AI's knowledge base.

    3️⃣ Human-in-the-loop isn't optional
    When prospects respond to your AI, YOU have to respond instantly at the same quality level. We have it hooked up to Slack—our phones go off at all hours because SaaStr is global. The AI creates an expectation of responsiveness. You better match it or they'll know it was "just an AI email."

    4️⃣ This is additive, not replacement
    We still do personal emails, marketing campaigns, and have human SDRs. Results by campaign type:
    - Website visitors: Hit or miss
    - Cold outbound: Ranked 4th out of 4 campaigns
    - Lapsed renewal accounts: Really good results

    🏋🏽♀️ The Uncomfortable Truth: It's MORE work, not less. You get 10x better output, but it requires S-tier human orchestration. E.g., we're running 30+ personas across different campaigns.

    🔮 Bottom line: AI SDRs work incredibly well, but only with proper training and orchestration. After 60 days of daily improvements, you'll have something you're proud of. But you can't skip the daily 30-45 minute audit process.

    Full breakdown with all our tools and processes at link in comments.
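
    On point 3 (human-in-the-loop via Slack), a minimal sketch of what such a hook-up could look like, assuming a standard Slack incoming webhook; this is not SaaStr's actual integration, and the webhook URL and field names are placeholders:

    ```python
    # Hypothetical sketch: push a prospect's reply to a Slack channel so a
    # human can answer fast. Webhook URL and payload fields are placeholders.
    import requests

    SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

    def notify_team(prospect: str, reply_text: str) -> None:
        message = {
            "text": f":email: {prospect} replied to the AI SDR:\n>{reply_text}\n"
                    "Respond quickly so the thread stays human."
        }
        resp = requests.post(SLACK_WEBHOOK_URL, json=message, timeout=10)
        resp.raise_for_status()
    ```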

  • View profile for James O'Dowd

    Founder & CEO at Patrick Morgan | Talent & Advisory for Professional Services

    107,290 followers

    The pace of change in Accounting and Compliance over the past three months is materially ahead of what many incumbent firms and their investors are willing to publicly acknowledge or, indeed, are even aware of.

    From where we sit, partnering with several AI-native startups in this space, the shift is not theoretical. It is visible in real time. These are technology-first businesses hiring senior Accounting leaders not to manage delivery teams but to commercialize platforms built to absorb large portions of routine accounting and compliance work. The role of the accountant is moving from executor to validator and builder.

    Goldman Sachs deploying Claude to automate Accounting and Compliance work should be read through that lens. This is not a marginal efficiency play. When a systemically important financial institution embeds frontier AI into core finance and control processes, it signals that automation is moving from pilot to production. The direction of travel is structural.

    Accounting is not physics. It is a rules-based discipline applied to structured financial data. Much of that data is already labelled through XBRL and regulatory reporting frameworks, which makes it highly trainable. The question is no longer whether AI can assist but how much of the workflow it can absorb and how quickly the economics reprice.

    Retrofitting AI into a legacy pyramid is fundamentally different from building an AI-native delivery model from day one. The productivity gap is widening.

    The uncomfortable reality is that the sector is heading into margin compression and business model reset. We are already seeing tech-led platforms willing to use commoditized compliance services as a loss leader to win higher value analytics and advisory revenue. If value is still being underwritten on utilization and leverage rather than net contribution after automation, the signal is being missed.

    The firms that respond early will redesign pricing, talent mix and partner economics. The rest will discover that apparent stability was masking erosion.

  • View profile for Ron Abraham, CPA

    Partner at KSDT CPA, Certified Public Accountant, Certified Acceptance Agent, Master in Tax. The road to success is always under construction. Success is not a comfortable procedure.

    34,787 followers

    PwC is Training Junior Accountants to Work Like Managers

    AI is changing the job before it even starts. According to PwC’s AI assurance leader Jenn Kosar, automation is taking over much of the repetitive audit work traditionally assigned to entry-level staff. As a result, new hires are being trained to supervise AI and take on responsibilities that used to be handled by accountants with 3–4 years of experience.

    Key changes in PwC’s approach:
    • Shift in skills focus: Critical thinking, negotiation, and professional skepticism are now core from day one.
    • AI integration: Routine tasks are automated, freeing staff to focus on higher-value work.
    • Revamped training: Career development now includes “assurance for AI” and preparing accountants to guide tech-driven workflows.

    PwC says these changes are designed to prepare staff for the future of accounting where technology handles the volume, and humans handle the judgment.

    This is where our industry is going. The sooner we adapt, the better we come out on the other side.

    #AI #Accounting #PwC #FutureOfWork #Careers

  • New evidence says discourse on how AI will reshape work is getting it wrong. It’s not that some jobs get automated away while others are augmented. Automation and augmentation are playing out in the same roles at the same time. In other words, AI is reshaping work within jobs rather than eliminating them. The “winners vs. losers” frame doesn’t hold.

    Our latest research at The Burning Glass Institute mines millions of job postings before and after the advent of LLMs to track how AI is already reshaping skill demand. The finding is striking: we found a 0.87 correlation between the roles experiencing the greatest automation effects and those experiencing the greatest augmentation effects, meaning the jobs most vulnerable to automation are also those most empowered by AI. Tasks are disappearing and intensifying simultaneously—within the same roles, at the same time. In fact, we find that skills most exposed to AI automation were 16% more likely to see demand decline than baseline skills. Skills most exposed to AI augmentation were 7% more likely to see demand increase.

    Project managers aren’t disappearing, but our analysis shows that spreadsheet-heavy tasks are fading while strategic, judgment-intensive work is growing. Financial analysts aren’t getting replaced, but model-building is automated while interpretation and decision-making matter more. The unit of change isn’t the job. It’s the task mix inside the job.

    Our paper, "Beyond the Binary", offers some of the first empirical evidence from the AI Tracking Hub, a multistakeholder initiative led by the Burning Glass Institute to move the AI–work conversation from forecasts to observation. If jobs aren’t vanishing but transforming from within, the real question isn’t “Which jobs are safe?” It’s whether our institutions—education, training, workforce policy—are built for continuous change rather than one-time transitions.

    You can find the report on https://lnkd.in/ej5FJu2J. I so enjoyed the collaboration with coauthors Benjamin Francis, Shrinidhi Rao, and Gwynn Guilford, and I am grateful as always to Gad Levanon and Stuart Andreason for their work to bring data-driven, empirical understanding to the workforce impacts of AI.

    #AI #artificialintelligence #jobs #economics #work
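
    As a rough illustration of the kind of calculation behind the reported 0.87 figure (not the Institute's actual pipeline, and with assumed column names "role", "automation_exposure", "augmentation_exposure"), correlating role-level automation and augmentation exposure might look like:

    ```python
    import pandas as pd

    def exposure_correlation(df: pd.DataFrame) -> float:
        """Average exposure scores per role, then take the Pearson correlation
        between automation and augmentation exposure across roles."""
        per_role = df.groupby("role")[["automation_exposure", "augmentation_exposure"]].mean()
        return per_role["automation_exposure"].corr(per_role["augmentation_exposure"])
    ```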

  • View profile for Josh Aharonoff, CPA
    Josh Aharonoff, CPA is an Influencer

    Building World-Class Financial Models in Minutes | 450K+ Followers | Model Wiz

    481,682 followers

    Will Accounting Be Replaced? 🤖 💼

    Everyone's asking if AI will replace accountants... Let me settle this once and for all.

    ➡️ WHAT WILL TRANSFORM

    ADVISORY SERVICES are becoming the heart of what we do. Gone are the days when accountants just crunch numbers. Now we guide strategic decisions using real data insights. Companies need advisors who understand both numbers AND business strategy.

    FORENSIC ACCOUNTING gets supercharged with advanced analytics. Finding fraud used to be like searching for a needle in a haystack... With AI-powered anomaly detection, we spot patterns humans would miss. The fraudsters are getting smarter, but so are our tools.

    AUDIT & RISK ASSESSMENT will never go away, but everything about it is changing. Instead of sampling transactions once a year, we're moving to continuous auditing with real-time data. AI review systems flag issues as they happen, not months later when it's too late.

    FINANCIAL ANALYSIS & FORECASTING is where accountants shine brightest. Sure, AI can run calculations, but humans bring context to numbers. Our forecasting is getting enhanced by predictive analytics and scenario modeling that processes variables faster than ever before.

    CLIENT COMMUNICATION is shifting completely. We're moving from transaction processors to trusted advisors.

    ➡️ WHAT WILL BE REPLACED

    Let's be honest... some parts of accounting are tedious and perfect for automation.

    MANUAL DATA ENTRY is already on its way out. AI-driven data capture and OCR tools process invoices and receipts in seconds, without the errors humans make after hours of monotonous work.

    ROUTINE BOOKKEEPING tasks are getting automated through cloud accounting software. Bank feeds, automatic categorization, and machine learning mean the days of manually reconciling every transaction are numbered.

    BASIC TAX PREPARATION for standard situations will be handled by smart platforms. E-filing tools get smarter every tax season. The complex tax strategy work? That's still all us.

    INVOICE MATCHING & RECONCILIATION is perfect for automation. AI bots can match thousands of invoices to purchase orders in minutes, with real-time reconciliation systems keeping everything in sync.

    COMPLIANCE MONITORING no longer needs accountants to manually check every rule. Automated alerts and built-in compliance checks flag issues instantly, letting us focus on solving problems rather than finding them.

    ➡️ THE FUTURE ACCOUNTANT

    The accountants who will thrive aren't fighting against technology... They're embracing it. The future belongs to those who combine technical accounting knowledge with:
    - Strategic thinking
    - Business acumen
    - Technology fluency
    - Communication skills

    ===

    What parts of your accounting job do you think will change the most with AI? Which skills are you developing to stay ahead? Join the discussion in the comments below 👇
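
    For the INVOICE MATCHING & RECONCILIATION point above, a toy sketch of rules-based matching by PO number and amount tolerance; the field names and the 1% tolerance are assumptions for illustration, not a description of any particular tool:

    ```python
    # Toy sketch only: match invoices to purchase orders, flag mismatches for review.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        invoice_id: str
        po_number: str
        amount: float

    @dataclass
    class PurchaseOrder:
        po_number: str
        amount: float

    def match_invoices(invoices, purchase_orders, tolerance=0.01):
        """Return (invoice, po) pairs whose PO numbers match and whose amounts
        agree within a relative tolerance; everything else goes to a human."""
        po_index = {po.po_number: po for po in purchase_orders}
        matched, needs_review = [], []
        for inv in invoices:
            po = po_index.get(inv.po_number)
            if po and abs(inv.amount - po.amount) <= tolerance * po.amount:
                matched.append((inv, po))
            else:
                needs_review.append(inv)
        return matched, needs_review
    ```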
