The anatomy of a sales call has changed dramatically. Last week, I shadowed some of HubSpot’s top reps and what struck me was how differently the best sellers work today. They’re using AI at every stage: before, during, and after the call. And the results are real. The brain: before the call. AI does the heavy research — scanning 10Ks, news, emails, and past calls to surface the insights that matter most. Tools like Breeze Assistant can prep a full company overview in seconds. According to our State of Sales Report, 74% of sellers say buyers are showing up to calls more informed than ever before. Salespeople need to be just as ready. The heart: during the call. AI notetakers capture everything: next steps, budget mentions, open questions, so reps can focus on listening, not typing or scribbling notes on the side. Also, AI assistants surface the right case study or testimonial in real time, making every answer sharper and every example more relevant. That means as a sales rep you are more engaged and relevant. The muscle: after the call. AI follows through fast. It drafts personalized follow-up emails in your own voice, outlines next steps, and flags what needs attention. More time with customers and less time writing emails. The result: sellers who prepare better, connect deeper, and close faster. The anatomy of a great sales call used to be manual effort and hustle. Now, it’s human connection powered by intelligence.
AI-Powered Virtual Assistants
Explore top LinkedIn content from expert professionals.
-
-
AI models like ChatGPT and Claude are powerful, but they aren’t perfect. They can sometimes produce inaccurate, biased, or misleading answers due to issues related to data quality, training methods, prompt handling, context management, and system deployment. These problems arise from the complex interaction between model design, user input, and infrastructure. Here are the main factors that explain why incorrect outputs occur: 1. Model Training Limitations AI relies on the data it is trained on. Gaps, outdated information, or insufficient coverage of niche topics lead to shallow reasoning, overfitting to common patterns, and poor handling of rare scenarios. 2. Bias & Hallucination Issues Models can reflect social biases or create “hallucinations,” which are confident but false details. This leads to made-up facts, skewed statistics, or misleading narratives. 3. External Integration & Tooling Issues When AI connects to APIs, tools, or data pipelines, miscommunication, outdated integrations, or parsing errors can result in incorrect outputs or failed workflows. 4. Prompt Engineering Mistakes Ambiguous, vague, or overloaded prompts confuse the model. Without clear, refined instructions, outputs may drift off-task or omit key details. 5. Context Window Constraints AI has a limited memory span. Long inputs can cause it to forget earlier details, compress context poorly, or misinterpret references, resulting in incomplete responses. 6. Lack of Domain Adaptation General-purpose models struggle in specialized fields. Without fine-tuning, they provide generic insights, misuse terminology, or overlook expert-level knowledge. 7. Infrastructure & Deployment Challenges Performance relies on reliable infrastructure. Problems with GPU allocation, latency, scaling, or compliance can lower accuracy and system stability. Wrong outputs don’t mean AI is "broken." They show the challenge of balancing data quality, engineering, context management, and infrastructure. Tackling these issues makes AI systems stronger, more dependable, and ready for businesses. #LLM
-
Hey Salespeople: Here is a collection of current use cases for AI in sales & CS: ** GenAI in Sales ** --> Draft messaging for personalized email outreach --> Generate post-call summaries with action items; draft call follow ups --> Provide real-time, in-call guidance (case studies; objection handling; technical answers; competitive response) --> Auto-populate and clean up CRM --> Generate & update competitive battlecards --> Draft RFP responses --> Draft proposals & contracts --> Accelerate legal review & red-lining (incl. risk identification) --> Research accounts --> Research market trends --> Generate engagement triggers (press releases; job postings; industry news; social listening; etc.) --> Conduct role-play --> Enable continuous, customized learning --> Generate customized sales collateral --> Conduct win-loss analysis --> Automate outbound prospecting -->Automate inbound response --> Run product demos --> Coordinate & schedule meetings --> Handle initial customer inquiries (chatbot; voice-bot / avatar) --> Generate questions for deal reviews --> Draft account plans ** Predictive AI in Sales ** --> Score leads & contacts --> Score /segment accounts (new logo) --> Automate cross-sell & upsell recommendations --> Optimize pricing & discounting --> Surface deal gaps / identify at-risk prospects --> Optimize sales engagement cadences (touch type; frequency) --> Optimize territory building (account assignment) --> Streamline forecasting (incl. opportunity probabilities; stage; close date) --> Analyze AE performance --> Optimize sales process --> Optimize resource allocation (incl. capacity planning) --> Automate lead assignment --> A/B test sales messaging --> Priortize sales activities ** GenAI in CS ** --> Analyze customer sentiment --> Provide customer support (chatbot; voice-bot / avatar; email-bot) --> Draft proactive success messaging --> Update & expand knowledge base (incl. tutorials, guides, FAQs, etc.) --> Provide multilingual support --> Analyze customer feedback to inform product development, support, and success strategies --> Summarize customer meetings; draft follow-ups --> Develop customer training content and orchestrate customized training --> Provide real-time, in-call guidance to CSMs and support agents --> Create, distribute, and analyze customer surveys --> Update CRM with customer insights --> Generate personalized onboarding --> Automate customer success touch-points --> Generate customer QBR presentations --> Summarize lengthy or complex support tickets --> Create customer success plans --> Generate interactive troubleshooting guides --> Automate renewal reminders --> Analyze and action CSAT & NPS ** Predictive AI in CS ** --> Predict churn; score customer health; detect usage anomalies, decision maker turnover, etc. --> Analyze CSM and support agent performance --> Optimize CS and support resource allocation --> Prioritize support tickets --> Automate & optimize support ticket routing --> Monitor SLA compliance
-
AI products like Cursor, Bolt and Replit are shattering growth records not because they're "AI agents". Or because they've got impossibly small teams (although that's cool to see 👀). It's because they've mastered the user experience around AI, somehow balancing pro-like capabilities with B2C-like UI. This is product-led growth on steroids. Yaakov Carno tried the most viral AI products he could get his hands on. Here are the surprising patterns he found: (Don't miss the full breakdown in today's bonus Growth Unhinged: https://lnkd.in/ehk3rUTa) 1. Their AI doesn't feel like a black box. Pro-tips from the best: - Show step-by-step visibility into AI processes - Let users ask, “Why did AI do that?” - Use visual explanations to build trust. 2. Users don’t need better AI—they need better ways to talk to it. Pro-tips from the best: - Offer pre-built prompt templates to guide users. - Provide multiple interaction modes (guided, manual, hybrid). - Let AI suggest better inputs ("enhance prompt") before executing an action. 3. The AI works with you, not just for you. Pro-tips from the best: - Design AI tools to be interactive, not just output-driven. - Provide different modes for different types of collaboration. - Let users refine and iterate on AI results easily. 4. Let users see (& edit) the outcome before it's irreversible. Pro-tips from the best: - Allow users to test AI features before full commitment (many let you use it without even creating an account). - Provide preview or undo options before executing AI changes. - Offer exploratory onboarding experiences to build trust. 5. The AI weaves into your workflow, it doesn't interrupt it. Pro-tips from the best: - Provide simple accept/reject mechanisms for AI suggestions. - Design seamless transitions between AI interactions. - Prioritize the user’s context to avoid workflow disruptions. -- The TL;DR: Having "AI" isn’t the differentiator anymore—great UX is. Pardon the Sunday interruption & hope you enjoyed this post as much as I did 🙏 #ai #genai #ux #plg
-
A few months ago the idea of native checkout inside ChatGPT looked like the next big step in commerce. Not any more. 𝗜𝗻𝘀𝗶𝗱𝗲 𝗖𝗵𝗲𝗰𝗸𝗼𝘂𝘁 𝗶𝗻 𝗖𝗵𝗮𝘁𝗚𝗣𝗧 was launched by OpenAI in September 2025, essentially allowing users to purchase products directly inside the chat interface. • As product discovery increasingly moved to AI assistants, the next logical step was to allow the transaction to happen in the same interface. • The idea was simple: if users already search, compare and decide inside ChatGPT, sending them to another website creates friction. • Instant Checkout was therefore an attempt to turn ChatGPT from a discovery layer into a commerce interface. • The goal for OpenAI was to capture more of the commerce journey by keeping users inside ChatGPT from product search all the way to payment. 𝗪𝗵𝗮𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝗲𝗱 𝗶𝗻 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲: • Adoption remained extremely limited, with only a small number of merchants integrating the native checkout capability. • User behaviour showed that people were comfortable using ChatGPT to discover and compare products, but they were not completing purchases inside the chat interface. Recent reports indicate that OpenAI scaled back its plans for direct checkout inside ChatGPT, moving away from the “buy inside the chat” model and redirecting purchases to merchants’ websites or apps. 𝗜𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗮𝗴𝗲𝗻𝘁𝗶𝗰 𝗰𝗼𝗺𝗺𝗲𝗿𝗰𝗲: • Agentic commerce may emerge in stages rather than all at once. The first role AI assistants are clearly winning is discovery, comparison and decision support. The step where the agent actually executes the payment appears to be much harder to shift because it sits on top of trust, payment credentials, merchant integrations and fulfilment infrastructure. • The interface alone is not enough to move the transaction layer. Even if an AI assistant becomes the place where purchase decisions are made, the payment still tends to happen where the merchant relationship, payment acceptance, logistics and customer support already exist. • This could suggest that the early model for agentic commerce may look more like orchestration than direct execution. AI agents may guide the purchase journey and direct the transaction to existing merchant systems, rather than fully replacing them. • For the ecosystem, this shifts the focus to integration. What will matter most is which platforms, merchants and payment providers make it easiest for AI agents to connect to their systems and complete transactions. Those that become the default connections for AI assistants could end up capturing a significant share of the value in this new model. 𝗪𝗵𝗮𝘁 𝗮𝗿𝗲 𝘁𝗵𝗲 𝗶𝗺𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝘁𝗵𝗮𝘁 𝘆𝗼𝘂 𝘀𝗲𝗲? Opinions: my own, Graphic source: OpenAI 𝐒𝐮𝐛𝐬𝐜𝐫𝐢𝐛𝐞 𝐭𝐨 𝐦𝐲 𝐧𝐞𝐰𝐬𝐥𝐞𝐭𝐭𝐞𝐫: https://lnkd.in/dkqhnxdg
-
From MIT SMR - how 14 companies across a wide range of industries are generating value from generative AI today: McKinsey built Lilli, a platform that helps consultants quickly find and synthesize information from past projects worldwide. The system integrates with over 40 internal sources and even reads PowerPoint slides, leading to 30% time savings and 75% employee adoption within a year. Amazon deploys AI across multiple divisions. Their pharmacy division uses an internal chatbot to help customer service representatives find answers faster. The finance team employs AI for everything from fraud detection to tax work. In their e-commerce business, they personalize product recommendations based on customer preferences and are developing new GenAI tools for vendors. Morgan Stanley empowers their financial advisers with a knowledge assistant trained on over a million internal documents. The system can summarize client video meetings and draft personalized follow-up emails, allowing advisers to focus more on client needs. Sysco, the food distribution giant, uses GenAI to generate menu recommendations for online customers and create personalized scripts for sales calls based on customer data. CarMax revolutionized their car research pages with GenAI, automatically generating content and summarizing thousands of customer reviews. They've since expanded to use AI in marketing design, customer chatbots, and internal tools. Dentsu transformed their creative agency work with GenAI, using it throughout the creative process from proposals to project planning. They can now generate mock-ups and product photos in real-time during client meetings, significantly improving efficiency. John Hancock deployed chatbot assistants to handle routine customer queries, reducing wait times and freeing human agents for complex issues. Major retailers like Starbucks, Domino's, and CVS are implementing GenAI voice interactions for customer service, moving beyond traditional phone menus. Tapestry, parent company of Coach and Kate Spade, uses real-time language modifications to personalize online shopping, mimicking in-store associate interactions. This led to a 3% increase in e-commerce revenue. Software companies are integrating GenAI directly into their products. Lucidchart allows users to create flowcharts through natural language commands. Canva integrated ChatGPT to simplify creation of visual content. Adobe embedded GenAI across their suite for image editing, PDF interaction, and marketing campaign optimization. For more information on these examples and to gain insight into how companies are transforming with GenAI, read the full article here: https://lnkd.in/eWSzaKw4 images: 4 of the 20 I created with Midjourney for this post. #AI #transformation #innovation
-
A few months ago, a colleague screamed at Microsoft Copilot like he was auditioning for Bring Me The Horizon. He typed, “Make this into a presentation.” Copilot spat out something. He yelled, “NO, I SAID PROFESSIONAL!” It revised it. Still wrong. “WHY ARE YOU SO STUPID?” And that, dear reader, is when it hit me. It’s not the AI. It’s you. Or rather, your prompts. So, if you've ever felt like ChatGPT, Copilot, Gemini, or any of those AI Agents are more "artificial" than "intelligent"? Then rethink how you’re talking to them. Here are 10 prompt engineering fundamentals that’ll stop you from sounding like you're yelling into the void. 1. Lead with Intent. Start with a clear command: “You are an expert…,” “Generate a monthly report…,” “Translate this to French…" This orients the model instantly. 2. Scope & Constraints First. Define boundaries up front. Length limits, style guides, data sources, even forbidden terms. 3. Format Your Output. Specify JSON schema, markdown headers, or table columns. Models love explicit structure over free form prose. 4. Provide Minimal, High Quality Examples. Two or three exemplar Q→A pairs beat a paragraph of explanation every time. 5. Isolate Subtasks. Break complex workflows into discrete prompts (chain of thought). One prompt per action: analyze, summarize, critique, then assemble. 6. Anchor with Delimiters. Use triple backticks or XML tags to fence inputs. Cuts hallucinations in half. 7. Inject Domain Signals. Name specific frameworks (“Use SWOT analysis,” “Apply the Eisenhower Matrix,” “Leverage Porter’s Five Forces”) to nudge depth. 8. Iterate Rapidly. Version your prompts like code. A/B test variations, track which phrasing yields the cleanest output. 9. Tune the “Why.” Always ask for reasoning steps. Always. 10. Template & Automate. Build parameterized prompt templates in your repo. Still with me? Good. Bonus tips. 1. Token Economy Awareness. Place critical context in the first 200 tokens. Anything beyond 1,500 risks context drift. 2. Temperature vs. Prompt Depth. Higher temperature amplifies creativity. Only if your prompt is concise. Otherwise you get noise. 3. Use “Chain of Questions.” Instead of one long prompt, fire sequential, linked questions. You’ll maintain context and sharpen focus. 4. Mirror the LLM’s Own Language. Scan model outputs for phrasing patterns and reflect those idioms back in your prompts. 5. Treat Prompts as Living Docs. Embed metrics in comments: note output quality, error rates, hallucination frequency. Keep iterating until ROI justifies the effort. And finally, the bit no one wants to hear. You get better at using AI by using AI. Practice like you’re training a dragon. Eventually, it listens. And when it does, it’s magic. You now know more about prompt engineering than 98% of LinkedIn. Which means you should probably repost this. Just saying. ♻️
-
Gmail’s AI email assistant writes like a committee of lawyers designed it. Pete Koomen’s recent post Horseless Carriages explains why: developers control the AI prompts instead of users. In his post he argues that software developers should expose the prompts and the user should be able to control it. He inspired me to build my own. I want a system that’s fast, accounts for historical context, & runs locally (because I don’t want my emails to be sent to other servers), & accepts guidance from a locally running voice model. Here’s how it works: 1. I press the keyboard shortcut, F2. 2. I dictate key points of the email. 3. The program finds relevant emails to/from the person I’m writing. 4. The AI generates an email text using my tone, checks the grammar, ensures that proper spacing & paragraphs exist, & formats lists for readability. 5. It pastes the result back. Here are two examples : emailing a colleague, Andy (https://lnkd.in/gtjt3BPp), & a hypothetical founder (https://lnkd.in/gDwM4f22). Instead of generics, the system learns from my actual email history. It knows how I write to investors vs colleagues vs founders because it’s seen thousands of examples. The point isn’t that everyone will build their own email system. It’s that these principles will reshape software design. - Voice dictation feels like briefing an assistant, not programming a machine. - The context layer - that database of previous emails - becomes the most valuable component because it enables true personalization. - Local processing, voice control, & personalized training data could transform any application, not just email, because the software learns from my past uses We’re still in the horseless carriage era of AI applications. The breakthrough will come when software adapts to us instead of forcing us to adapt to it. Centered around a command line email client called Neomutt (https://neomutt.org/). The software hits LanceDB, a vector database with embedded emails & finds the ones that are the most relevant from the sender to match the tone. The code is here (https://lnkd.in/gZ-AaAWa).
-
Anthropic just shipped Skills, Microsoft 365 integration, and enterprise search for Claude. After talking to dozens of enterprise companies this year, I think they're solving the right problems. 💰Context tax is killing enterprise AI adoption. Most AI tools require you to manually gather information before asking useful questions. You're copying emails, uploading documents, explaining organizational context. The AI might be smart, but you're doing all the integration work. Claude's Microsoft 365 connector changes this. Direct access to SharePoint, Outlook, Teams, and OneDrive means the AI already knows what your organization knows. Ask about Q3 strategy, and it pulls from the actual discussions, documents, and decisions. They also launched Skills — reusable instruction bundles that work across Claude's web app, API, and command-line tool. Think of these as expertise packages—instructions, scripts, and resources Claude loads on-demand. And lastly, the new Enterprise search is a shared project that searches multiple connected tools simultaneously. One query pulls information from HR docs in SharePoint, email discussions in Outlook, and team guidelines from various sources—then synthesizes it into a single answer. Model providers like Anthropic and OpenAI are realizing that enterprise AI needs to be operational, not just conversational. Less chatbot, more sidekick that accesses your actual systems and takes action.
-
I've built 67+ AI agents in n8n. At first, I thought adding nodes and optimizing connections was what mattered. But I never really trusted them. Every output felt like a gamble. The bottleneck wasn't my architecture. It was my instructions. Avoid my mistakes and: 1. Separate static facts from inputs. Mixing them makes the agent guess context it should already know. → Example: Static = “Store opens at 9 AM.” Dynamic = “Order ID: 48281.” 2. Make the agent call out missing info. Guessing is the #1 source of silent failures. → Example: MISSING_FIELD: customer_email. 3. Force it to plan before acting. Step-planning stabilizes reasoning and reduces randomness. → Example: Plan internally. Output only the final result. 4. Give a fallback for impossible tasks. Without a fallback, the agent hallucinates a solution. → Example: ERROR_REASON: date_format_invalid. 5. Define “If X → Do Y” rules. Deterministic branching kills unpredictability. → Example: If date can’t be parsed → ask for a new one. 6. Allow creativity only where needed. Uncontrolled creativity = guaranteed hallucinations. → Example: Creative only in “Rewrite.” Everything else literal. 7. Limit the agent’s memory. Too much history makes the agent drift off-task. → Example: Use only the last 2 messages to determine intent. 8. Make it restate the task first. Repetition confirms the agent understood the request correctly. → Example: Task summary: extract the invoice number. 9. Validate inputs before generating outputs. Output built on bad inputs = guaranteed bad outputs. → Example: Invalid date: expected YYYY-MM-DD. 10. Require a termination signal. Your workflow needs a clear signal that the task is complete. → Example: End with “TERMINATE.” 11. Test your instructions with ugly inputs. If it only works on “happy path,” it’s not reliable - it’s lucky. → Example: Missing fields, malformed dates, weird formats. 12. Run a 10–20 sample eval before shipping. You can’t improve what you don’t measure. Vibes ≠ validation. → Example: Score each output: accuracy, format, tone, stability. 13. Iterate based on failures, not feelings. One word in your instructions can double your success rate. → Example: 2 outputs broke the format → tighten output rules. This is how you get from 30% to 80% success rate. Better instructions beat complex architecture. What's been your biggest challenge getting agents to behave consistently?