AI Workflow Enhancement

Explore top LinkedIn content from expert professionals.

  • View profile for Andrew Ng
    Andrew Ng is an Influencer

    DeepLearning.AI, AI Fund and AI Aspire

    2,462,143 followers

    I think AI agentic workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.

    Today, we mostly use LLMs in zero-shot mode, prompting a model to generate final output token by token without revising its work. This is akin to asking someone to compose an essay from start to finish, typing straight through with no backspacing allowed, and expecting a high-quality result. Despite the difficulty, LLMs do amazingly well at this task!

    With an agentic workflow, however, we can ask the LLM to iterate over a document many times. For example, it might take a sequence of steps such as:
    - Plan an outline.
    - Decide what, if any, web searches are needed to gather more information.
    - Write a first draft.
    - Read over the first draft to spot unjustified arguments or extraneous information.
    - Revise the draft taking into account any weaknesses spotted.
    - And so on.

    This iterative process is critical for most human writers to write good text. With AI, such an iterative workflow yields much better results than writing in a single pass.

    Devin’s splashy demo recently received a lot of social media buzz. My team has been closely following the evolution of AI that writes code. We analyzed results from a number of research teams, focusing on an algorithm’s ability to do well on the widely used HumanEval coding benchmark. You can see our findings in the diagram below. GPT-3.5 (zero shot) was 48.1% correct. GPT-4 (zero shot) does better at 67.0%. However, the improvement from GPT-3.5 to GPT-4 is dwarfed by incorporating an iterative agent workflow. Indeed, wrapped in an agent loop, GPT-3.5 achieves up to 95.1%.

    Open source agent tools and the academic literature on agents are proliferating, making this an exciting time but also a confusing one. To help put this work into perspective, I’d like to share a framework for categorizing design patterns for building agents. My team at AI Fund is successfully using these patterns in many applications, and I hope you find them useful:
    - Reflection: The LLM examines its own work to come up with ways to improve it.
    - Tool use: The LLM is given tools such as web search, code execution, or any other function to help it gather information, take action, or process data.
    - Planning: The LLM comes up with, and executes, a multistep plan to achieve a goal (for example, writing an outline for an essay, then doing online research, then writing a draft, and so on).
    - Multi-agent collaboration: More than one AI agent works together, splitting up tasks and discussing and debating ideas, to come up with better solutions than a single agent would.

    I’ll elaborate on these design patterns and offer suggested readings for each next week. [Original text: https://lnkd.in/gSFBby4q ]
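
    As a rough sketch of the iterative workflow described above (my own illustration, not code from the post), the loop below drafts, reflects, and revises. The `call_llm` function is a placeholder for whatever chat-completion client you use, and the prompts and stopping rule are assumptions.

```python
# Minimal sketch of a draft -> reflect -> revise agent loop, under the
# assumption that `call_llm` is wired to some chat-completion provider.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("connect this to your model provider of choice")

def agentic_essay(topic: str, max_rounds: int = 3) -> str:
    outline = call_llm(f"Plan a short outline for an essay on: {topic}")
    draft = call_llm(f"Write a first draft following this outline:\n{outline}")

    for _ in range(max_rounds):
        # Reflection step: the model critiques its own work.
        critique = call_llm(
            "Review the draft below. List unjustified arguments, "
            f"extraneous information, and other weaknesses:\n{draft}"
        )
        if "no issues" in critique.lower():
            break
        # Revision step: the model rewrites the draft using its own critique.
        draft = call_llm(
            "Revise the draft to address this critique.\n"
            f"Critique:\n{critique}\n\nDraft:\n{draft}"
        )
    return draft
```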

  • View profile for Kyle Poyar

    Growth Unhinged | Real-life growth insights, playbooks, and case studies

    107,177 followers

    I asked 195 B2B go-to-market leaders about where they're placing their bets for 2026. The top channel bet: AI discovery aka AEO. Wow, things escalated quickly...

    One company that got a head start on AEO is Webflow, the website building scaleup. Their stats via VP of growth Josh Grant:
    1. 10% of signups now come from AI discovery, growing 4x year-on-year. (This is actual LLM-referred traffic, which likely understates things.)
    2. 91% of LLM referrals come from ChatGPT alone.
    3. ChatGPT traffic converts at 24% (!), 6x higher than Google.
    4. For conversions referred by an LLM, two-in-three convert within 7 days.

    3 tactics from Webflow you can apply in the next 24 hours (& one to avoid):

    Avoid: Add an llms.txt file
    - The Webflow team tried it. They haven’t seen any significant lift.
    - The takeaway: focus on content optimizations instead.

    Tactic 1: Automate content refreshing at scale
    - AI reshuffles answers constantly. Refresh velocity can be the difference between staying on top and missing out.
    - Webflow built an AI-driven workflow with AirOps to 5x their refresh frequency.

    Tactic 2: Turn every webinar into 10 pieces of expert content
    - Webinars can make great source material. Repurposing makes them fresh, structured, and consistently discoverable by both people and AI.
    - Webflow automates this by transcribing webinars (AirOps), using LLMs to identify themes & soundbites, generating assets & adding an editorial review.

    Tactic 3: Automate FAQs and schema content for AI discovery
    - FAQ sections answer long-tail questions and help LLMs get a more granular understanding of the product. ChatGPT can essentially "borrow" your FAQ answers in its responses.
    - Webflow automates this by scraping what people are asking via Reddit & Google (AirOps), generating new FAQs & answers (GPT-5, Claude), pushing updates into the CMS & then tracking any visibility shifts before/after.

    ---
    The full story is out NOW in Growth Unhinged: https://lnkd.in/eXP-gnFN Hope you find it useful 🙏 #aeo #marketing #chatgpt
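
    Tactic 3 hinges on publishing FAQ content as structured data. Here is a minimal Python sketch (my own illustration, not Webflow's actual pipeline) that turns question/answer pairs into schema.org FAQPage JSON-LD ready to embed in a page; the function name and example content are made up.

```python
import json

def faq_jsonld(faqs: list[tuple[str, str]]) -> str:
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    payload = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in faqs
        ],
    }
    return json.dumps(payload, indent=2)

# Example usage with made-up FAQ content:
print(faq_jsonld([
    ("Does the product support custom code?",
     "Yes, you can embed custom HTML, CSS, and JavaScript."),
]))
```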

  • View profile for Jason M. Lemkin
    Jason M. Lemkin is an Influencer

    SaaStr AI 2026 is May 12-14 in SF Bay!! See You There!!

    306,442 followers

    We sent 4,495 AI SDR emails in 2 weeks and achieved the #1 response rate on our platform. But here's what nobody tells you about making AI SDRs actually work...

    The Metrics:
    ✅ 4,495 personalized messages sent in 14 days
    ✅ Highest response rate on our entire platform
    ✅ $700,000 of pipeline opportunities opened
    ✅ Meetings booked daily (literally got one this morning)
    ✅ Outperformed all our historical human SDR averages — mostly
    ✅ Better results than some of our human AEs

    The Reality Check First
    We had unfair advantages. SaaStr has been around since 2012, we've sold $100,000,000 in sponsorships, and people know our brand. We targeted our existing database—website visitors, past attendees, lapsed accounts—not cold lists. We spent 2 weeks doing basically nothing else: 90 minutes every morning, 1 hour every evening training our AI, plus real-time responses throughout the day.

    👉What Actually Works:

    1️⃣ Your AI has to add real value, not just volume
    There's no way we could send 4,495 good emails ourselves manually in two weeks. The key is each one has to be at the level we would write ourselves.
    Bad: "Hey [NAME], saw you visited our website"
    Good: "Congrats on your new VP role at Oracle. Since you attended SaaStr London last year, thought you'd want to know about our 2025 VC track with speakers from a16z and Sequoia..."

    2️⃣ Your data is messier than you think
    We trained our AI on 20+ million words of SaaStr content, but still found:
    - Opportunities never logged in Salesforce
    - Missing context from AEs who never used the system
    - Customer relationships that existed nowhere in our CRM
    We literally spend time every day finding things that were missing and manually adding them to the AI's knowledge base.

    3️⃣ Human-in-the-loop isn't optional
    When prospects respond to your AI, YOU have to respond instantly at the same quality level. We have it hooked up to Slack—our phones go off at all hours because SaaStr is global. The AI creates an expectation of responsiveness. You better match it or they'll know it was "just an AI email."

    4️⃣ This is additive, not replacement
    We still do personal emails, marketing campaigns, and have human SDRs. Results by campaign type:
    - Website visitors: Hit or miss
    - Cold outbound: Ranked 4th out of 4 campaigns
    - Lapsed renewal accounts: Really good results

    🏋🏽♀️ The Uncomfortable Truth: It's MORE work, not less. You get 10x better output, but it requires S-tier human orchestration. E.g., we're running 30+ personas across different campaigns.

    🔮 Bottom line: AI SDRs work incredibly well, but only with proper training and orchestration. After 60 days of daily improvements, you'll have something you're proud of. But you can't skip the daily 30-45 minute audit process.

    Full breakdown with all our tools and processes at link in comments.

  • View profile for Brij kishore Pandey
    Brij kishore Pandey is an Influencer

    AI Architect & Engineer | AI Strategist

    719,381 followers

    When working with multiple LLM providers, managing prompts, and handling complex data flows — structure isn't a luxury, it's a necessity.

    A well-organized architecture enables:
    → Collaboration between ML engineers and developers
    → Rapid experimentation with reproducibility
    → Consistent error handling, rate limiting, and logging
    → Clear separation of configuration (YAML) and logic (code)

    𝗞𝗲𝘆 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 𝗧𝗵𝗮𝘁 𝗗𝗿𝗶𝘃𝗲 𝗦𝘂𝗰𝗰𝗲𝘀𝘀
    It’s not just about folder layout — it’s how components interact and scale together:
    → Centralized configuration using YAML files
    → A dedicated prompt engineering module with templates and few-shot examples
    → Properly sandboxed model clients with standardized interfaces
    → Utilities for caching, observability, and structured logging
    → Modular handlers for managing API calls and workflows

    This setup can save teams countless hours in debugging, onboarding, and scaling real-world GenAI systems — whether you're building RAG pipelines, fine-tuning models, or developing agent-based architectures.

    → What’s your go-to project structure when working with LLMs or Generative AI systems? Let’s share ideas and learn from each other.
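
    As a rough illustration of the configuration/logic split described above (my own sketch, not the author's repository), a YAML file can hold provider settings and prompt templates while a thin loader keeps the application code free of hard-coded values. The file path, keys, and `ModelConfig` class below are assumptions for the example.

```python
# config/models.yaml (illustrative contents, not a real file in any repo):
#   provider: openai
#   model: gpt-4o-mini
#   temperature: 0.2
#   prompts:
#     summarize: "Summarize the following text in 3 bullet points:\n{text}"

from dataclasses import dataclass
import yaml  # pip install pyyaml

@dataclass
class ModelConfig:
    provider: str
    model: str
    temperature: float
    prompts: dict[str, str]

def load_config(path: str = "config/models.yaml") -> ModelConfig:
    """Read settings from YAML so the code holds logic only, no constants."""
    with open(path) as f:
        raw = yaml.safe_load(f)
    return ModelConfig(**raw)

# Usage sketch:
# cfg = load_config()
# prompt = cfg.prompts["summarize"].format(text=document_text)
```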

  • View profile for Alex Barády

    ENDGAME Founder | AI Entrepreneur, Executive Advisor, & Investor | ex-EPAM Co-Head of Europe

    72,346 followers

    AI changed the PM role as we know it. PMs who adapt will be in high demand.

    The PM role has been about process management. But a new group of project managers is emerging. They're using AI to speed up admin work, and turning their focus to strategic leadership.

    Most of what project managers do manually today is exactly what AI tools are getting good at:
    - Status report generation
    - Project planning and scheduling
    - Budget tracking and forecasting
    - Risk monitoring and alerts
    - Team capacity planning

    AI can already automate these PM tasks. The technology will only get better. I recently spoke with several executives. They are already moving basic PM tasks to AI.

    When AI can generate project plans, track budgets, and monitor risks automatically, what happens to the old-school PM role? Simple projects won't need dedicated PMs anymore. AI will handle the basic administrative PM work. We're already seeing this change.

    But there's a big opportunity here too. AI has a major blind spot. It can't figure out the tricky psychology of team dynamics. It can't handle complex stakeholder politics. It can't connect business goals to what motivates the team. AI can't lead and handle people's problems.

    → Simple projects: PM roles mix into other roles
    → Complex enterprises: Strategic PM roles become key

    Most valuable projects are complex and people-focused. Want to stay relevant? Here's what to think about.

    Learn how to handle team dynamics
    ↳ Navigate politics, egos, and conflicting priorities

    Master stakeholder management and communication
    ↳ Make sure everyone agrees on what success looks like

    Study how to turn business goals into team motivation
    ↳ Work with people to get them excited about the project

    Direct AI with human context
    ↳ Give AI the right instructions and priorities to work with

    The PM industry isn't dead. But it is changing.

    ♻️ Share this to help other project managers. Follow me Alex Barady and my company ENDGAME for pragmatic AI strategies and execution.

  • View profile for Greg Coquillo
    Greg Coquillo is an Influencer

    AI Infrastructure Product Leader | Scaling GPU Clusters for Frontier Models | Microsoft Fairwater | Former AWS, Amazon | Startup Investor | Linkedin Top Voice | I build the infrastructure that allows AI to scale

    228,487 followers

    If you’re learning AI automation without a roadmap, you’re guaranteed to get overwhelmed.

    People usually “learn AI automation” by jumping straight into tools… and then wonder why nothing works consistently. Real automation requires structure - thinking, logic, testing, and a gradual build-up of skills.

    This 18-day roadmap breaks down the exact sequence to go from zero → confidently building automations with AI, APIs, tools, and no-code platforms. Here’s the full breakdown, day by day:

    Day 1 - AI Automation Fundamentals: Learn what automation really means, how it differs from AI and agents, and see real examples.
    Day 2 - Automation Thinking: Break work into steps, triggers, and outcomes - the mindset behind every good automation.
    Day 3 - APIs & Webhooks Basics: Understand how apps communicate and how events trigger workflows.
    Day 4 - No-Code Automation Platforms: Explore Zapier, Make, n8n - and how no-code tools actually run workflows.
    Day 5 - Build Your First Automation: Create a simple trigger-action workflow and connect two apps.
    Day 6 - Data Handling: Pass data between steps, map fields, and work with text, numbers, and dates.
    Day 7 - Logic & Error Handling: Add filters, conditional logic, retries, and fallbacks to keep automations reliable.
    Day 8 - AI Model Basics: Learn prompts vs system instructions, tokens, limits, and LLM behavior.
    Day 9 - Using AI Inside Automations: Insert AI steps into workflows and parse structured AI outputs.
    Day 10 - Prompt Design for Automation: Write consistent prompts and reduce hallucinations with JSON outputs.
    Day 11 - Text-Based Task Automation: Automate email replies, summaries, CRM updates, and document tasks.
    Day 12 - Knowledge Automation (RAG Basics): Connect AI to internal documents and fetch accurate answers from real data.
    Day 13 - AI Agents Basics: Understand agent planning, tools, and identify use cases for agents.
    Day 14 - Business Use Case Automation: Automate lead qualification, ticket routing, and internal processes.
    Day 15 - Sales & Marketing Automation: Personalize outreach, repurpose content, and automate follow-ups.
    Day 16 - Operations Automation: Manage approvals, notifications, and repetitive operational tasks.
    Day 17 - Monitoring & Optimization: Track workflow success, cut costs, and improve performance.
    Day 18 - Build & Ship Your System: Design, test, document, and finalize a complete end-to-end automation.

    You don’t master AI automation by learning tools; you master it by learning systems thinking, data flow, and structured execution. Follow this roadmap, and you’ll build automations that are reliable, scalable, and business-ready.
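
    To make Days 7, 9, and 10 above concrete, here is a minimal Python sketch (my own illustration, not part of the roadmap) of an AI step inside a workflow: it asks the model for strict JSON, parses the result, retries on malformed output, and falls back gracefully. `call_llm` and `classify_ticket` are assumed names, not a real library API.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client (OpenAI, Ollama, etc.)."""
    raise NotImplementedError

def classify_ticket(ticket_text: str, max_retries: int = 2) -> dict:
    """AI step: label a support ticket, returning structured JSON."""
    prompt = (
        "Classify the support ticket below. Respond with JSON only, "
        'shaped like {"category": "...", "priority": "low|medium|high"}.\n\n'
        f"Ticket:\n{ticket_text}"
    )
    for _ in range(max_retries + 1):
        raw = call_llm(prompt)
        try:
            result = json.loads(raw)  # parse the structured output
            if isinstance(result, dict) and {"category", "priority"} <= result.keys():
                return result         # well-formed: pass it downstream
        except json.JSONDecodeError:
            pass
        # Retry path (Day 7): tell the model what went wrong and ask again.
        prompt += "\n\nYour last reply was not valid JSON. Return JSON only."
    # Fallback path so the workflow never hangs on a bad model response.
    return {"category": "unknown", "priority": "medium"}
```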

  • View profile for Raj Goodman Anand
    Raj Goodman Anand is an Influencer

    Helping organizations build AI operating systems | Founder, AI-First Mindset®

    23,621 followers

    Last quarter, I worked with the MD of a heavy equipment manufacturer who believed AI would make status reports clearer and give leadership better visibility into project progress. The dashboards improved and the data looked sharper, but profit margins did not, because delays were still being identified too late to prevent cost overruns. By the time problems appeared in reports, the financial impact had already occurred. In 2026, with tighter compliance requirements and thinner operating buffers, that delay between issue and action is no longer affordable.

    What has truly changed is not reporting quality but execution speed. AI systems can now reallocate resources, adjust schedules, and flag bottlenecks immediately instead of waiting for weekly or monthly review cycles. In plant upgrade programs and supplier transitions, I have seen problems addressed at the point of occurrence rather than after escalation. When corrective action happens closer to where the issue starts, delivery risk declines and cycle times shorten, since decisions are triggered by live data rather than by meetings or manual coordination.

    The main weakness I continue to see is governance. Many AI agents operate on fragmented data sources without clear ownership of decision rights, which leads teams to override outputs they do not trust and reintroduce manual controls that slow everything down. The result is a false sense of stability: dashboards remain green while margin pressure builds quietly underneath.

    Two mistakes appear repeatedly. The first is treating AI as an advanced reporting layer; manufacturing projects depend on operational control rather than visibility alone, and insight does not prevent delay unless the system is allowed to act within clearly defined boundaries. The second is deploying AI without defining who owns the decisions it influences; manufacturing plants rely on accountability structures, and when escalation paths are unclear, agents can create conflicting actions that slow adoption and reduce confidence across teams.

    If you are beginning this journey, start by mapping a single workflow where approvals consistently delay progress, such as change requests during shutdown planning. Introduce AI only where decision rules are already stable and measurable, and avoid areas that depend on negotiation or human judgment.

    #AIInProjectManagement #AgenticAI #ExecutiveLeadership #FutureOfWork #OperationalExcellence #DecisionIntelligence #EnterpriseAI #ProjectGovernance #DigitalTransformation #AIForCEOs #BusinessExecution #AIStrategy

  • View profile for Dr. Brindha Jeyaraman

    Founder & CEO, Aethryx | Fractional Leader in Enterprise AI Engineering, Ops & Governance | Doctorate in Temporal Knowledge Graphs | Architecting Production-Grade AI | Ex-Google, MAS, A*STAR | Top 50 Asia Women in Tech

    18,577 followers

    🔍 Technical AI Series by Brindha Jeyaraman
    Part 2: Why GPUs Are Fast (and Why They’re Still Under-Utilised)

    GPUs dominate AI workloads because they’re designed for massive parallelism. Thousands of lightweight cores execute the same instruction across different data elements, which is exactly what neural networks need for matrix math.

    And yet, in real-world training pipelines, it’s common to see 40–60% GPU utilisation. Why? Because GPUs don’t operate in isolation. Here are the usual culprits 👇

    🔹 CPU-Bound Preprocessing
    Data loading, tokenisation, and augmentation often run on CPUs. If this stage is slow, the GPU simply waits.

    🔹 Inefficient Data Loaders
    Single-threaded pipelines, poor shuffling strategies, or Python overhead can starve GPUs of data.

    🔹 CPU–GPU Synchronisation Overhead
    Frequent synchronisation points and blocking calls introduce stalls between kernels.

    🔹 Memory Access Patterns
    Non-contiguous tensors and frequent memory allocation/deallocation reduce effective throughput.

    The result? You pay for expensive accelerators that spend a significant portion of time doing nothing.

    High-performance AI training requires:
    1. Asynchronous data pipelines
    2. Careful CPU–GPU coordination
    3. Stable tensor shapes
    4. Minimised synchronisation points

    Owning AI performance means owning the entire pipeline, not just the model.

    In the next post, I’ll explain why attention, the core of modern LLMs, is fundamentally a memory problem, not just a compute one.

    #GPUComputing #AIInfrastructure #MLOps #SystemsEngineering
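
    As a concrete illustration of the asynchronous-pipeline and CPU–GPU coordination points above (my own sketch, not from the post), a PyTorch training loop typically keeps the GPU fed by running preprocessing in parallel worker processes and overlapping host-to-device copies with compute via pinned memory and non-blocking transfers. The dataset and model here are trivial stand-ins.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in dataset; real pipelines do loading/tokenisation/augmentation here.
dataset = TensorDataset(torch.randn(10_000, 512), torch.randint(0, 10, (10_000,)))

loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,
    num_workers=4,           # preprocessing runs in parallel CPU workers
    pin_memory=True,         # pinned host memory enables async H2D copies
    persistent_workers=True, # avoid re-spawning workers every epoch
    prefetch_factor=2,       # workers keep batches queued ahead of the GPU
)

model = torch.nn.Linear(512, 10).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for features, labels in loader:
    # non_blocking=True lets the copy overlap with ongoing GPU work
    features = features.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)

    optimizer.zero_grad(set_to_none=True)  # avoids extra memory traffic
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```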

  • View profile for SUKIN SHETTY

    AI Architect | AI Product Builder | AI Educator | Creator of Nemp Memory | Building GhostOps | Helping Businesses & Individuals Build Real AI Systems

    8,094 followers

    I built a self-hosted AI architecture that runs without internet, no API cost, no cloud. AI works when the network doesn't.

    This was the toughest project I’ve ever worked on, and I did it to answer one question: Can we talk to AI when the internet is down, and can we trust AI with sensitive data which cannot leave the building? Short answer: Yes.

    Meet Secure AI Lab.

    What it does:
    - Works like ChatGPT, but lives on your computer and runs without internet.
    - Reads your own documents (protocols/policies) to answer with context.
    - Automates tasks (save files, generate PDFs, log entries) locally.
    - Runs fully offline after setup: no cloud, no API keys, no telemetry.

    In the video, I switch Wi-Fi OFF and ask: “What medications are used for cardiac arrest?”
    - OpenWebUI (local chatbot) answers from my local knowledge base.
    - n8n (local workflow) auto-creates a file on my disk with the summary.
    - Every step happens on localhost. Nothing leaves the machine.

    ⚠️ Demo ≠ diagnosis. The medication shown is mock data; this is a clinical support example, not medical advice.

    Why this matters:
    - Emergency Departments (ED) during downtime: keep triage guidance, protocol recall, and order prep running when EHR/internet is down.
    - Hospitals, banks, factories: when privacy and reliability matter, local beats cloud.
    - Cost control: one-time setup vs. indefinite per-token bills.

    How it works (simple flow) inside the Lab:
    - Local Brain – AI model (Ollama) generates answers on device.
    - Your Documents – RAG reads your PDFs (protocols/policies) locally.
    - Local Robot – n8n automations save files, generate PDFs, log to SQLite, print if needed.

    Not just Ollama offline. I built a complete offline system: chat UI + local RAG over my PDFs + automations that create PDFs/logs on disk, with Wi-Fi OFF and no egress. It’s a product, not just a model. I have added a File-Watcher: when OpenWebUI saves a new answer, n8n auto-detects it and creates a PDF/log instantly, still with no internet.

    Stack at a glance:
    - OpenWebUI – local chat UI + RAG
    - Ollama – runs the AI models on device
    - n8n – no-code automations (write files, PDFs, logs)
    - Docker – isolated, reproducible setup
    - RAG – reads your docs; answers with citations
    - SQLite/Files – local logs & artifacts (no cloud)

    This was my toughest build yet. I spent many weeks planning and stitching everything together to prove AI can run fully offline and still be useful in emergencies.
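
    For a feel of the "Local Brain" piece above, here is a minimal Python sketch (my own illustration, not the author's code) that queries a locally running Ollama server over its default HTTP API on localhost:11434. The model name is an assumption, and the call never leaves the machine.

```python
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3.1") -> str:
    """Send a prompt to the local Ollama server; nothing leaves localhost."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model pulled locally):
# print(ask_local_llm("Summarize our cardiac arrest protocol in 3 bullets."))
```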

  • View profile for Shekhar Kirani
    Shekhar Kirani is an Influencer

    Accel in India. Early-stage and growth-stage technology investor.

    39,988 followers

    𝐑𝐞𝐚𝐥 𝐨𝐩𝐩𝐨𝐫𝐭𝐮𝐧𝐢𝐭𝐲 𝐟𝐨𝐫 𝐀𝐈 𝐚𝐮𝐭𝐨𝐦𝐚𝐭𝐢𝐧𝐠 𝐞𝐧𝐭𝐞𝐫𝐩𝐫𝐢𝐬𝐞 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰𝐬

    I have been meeting with many enterprise CXOs and AI advisory firms about AI adoption over the last few months. Almost all of them start the same way:
    1. Map the current workflows.
    2. Identify the manual steps.
    3. Find where people are spending time.
    4. Layer AI on top to automate or accelerate the work.

    This is the default playbook. And it is not wrong. It is the safe, best way to test and show quick results. A great entry point for AI.

    𝐄𝐱𝐚𝐦𝐩𝐥𝐞: 𝐂𝐮𝐬𝐭𝐨𝐦𝐞𝐫 𝐬𝐮𝐩𝐩𝐨𝐫𝐭 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰
    1. Customer calls in.
    2. L1 agent picks up, follows a script.
    3. Cannot resolve. Escalates to L2. L2 reads the notes, asks the customer to repeat the problem, checks the knowledge base. Maybe escalates to L3.
    4. Resolution happens 3 handoffs and 48 hours later.

    Most enterprise AI deployments in customer support follow the same default playbook:
    1. Automating L1 with a voicebot
    2. L2 with AI-assisted responses
    3. Giving L3 a copilot.
    Same tiers, same structure, just faster and cheaper.

    𝐖𝐡𝐲 𝐝𝐨 𝐭𝐡𝐞𝐬𝐞 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰𝐬 𝐞𝐱𝐢𝐬𝐭 𝐢𝐧 𝐭𝐡𝐞 𝐟𝐢𝐫𝐬𝐭 𝐩𝐥𝐚𝐜𝐞?
    Most processes were designed around human limitations — quality, consistency, onboarding, training, error containment.

    𝑩𝒖𝒕 𝒘𝒐𝒓𝒌𝒇𝒍𝒐𝒘𝒔 𝒂𝒓𝒆 𝒏𝒐𝒕 𝒕𝒉𝒆 𝒈𝒐𝒂𝒍. 𝑻𝒉𝒆𝒚 𝒂𝒓𝒆 𝒂 𝒎𝒆𝒂𝒏𝒔 𝒕𝒐 𝒕𝒉𝒆 𝒈𝒐𝒂𝒍.

    The goal was never "route through 3 tiers." If AI can access the full knowledge base, understand context, and maintain quality — why not give the customer or a single agent an AI tool that resolves it directly? Three tiers collapse into one.

    𝐓𝐡𝐞 𝐫𝐞𝐚𝐥 𝐨𝐩𝐩𝐨𝐫𝐭𝐮𝐧𝐢𝐭𝐲 is to return to the original objective and move from multi-step process to single-step outcome as confidence builds. This is also where the biggest opening exists for new AI startups — not workflow automation, but outcome-based automation.

    𝐈𝐌𝐏𝐎𝐑𝐓𝐀𝐍𝐓: Before you automate your current workflows, ask why they exist. The enterprises that will get the biggest AI wins are the ones redesigning toward outcomes — not just making existing steps faster.
