AI is becoming a make-or-break factor for banks. But success will depend not on their ability to offer #AI, but on their competence in integrating it. Let’s take a look.

Banking is forecast to feel the biggest impact from generative AI of any sector or industry as a percentage of revenue, with the additional value estimated between $200 bn and $340 bn annually (source: McKinsey). But why is the impact so powerful? One of the main reasons is that the abrupt surge of gen AI is exponentially increasing the speed at which #banking is being transformed. That is not to say the transformation started with, or because of, AI. On the contrary: over the past 10 to 15 years banking was already in the middle of transforming from a human-based, relationship-first industry into a more automated, technology-driven business, following the #fintech revolution and the ascent of nimbler, more innovative competitors.

But AI now does 2 things:
— It takes the transition to a new level, across 3 dimensions: speed, outcome and impact.
— It turbo-charges one of the biggest challenges in modern FS: the combination of AI and data, which brings under the same roof two inherently opposing forces: mass and customization. In other words, AI seems to offer a credible answer to achieving hyper-personalization.
In a recent report, Deloitte provides realistic examples of how this is done, across both cost efficiency and income growth:

Cost efficiency:
— Workforce acceleration efficiencies across the board: 0–15% of total staff cost
— IT development and maintenance acceleration: 10–20% of IT staff cost
— Improved credit-risk assessment: 10–15% savings in impairment charges
— Improved FinCrime/fraud detection, reducing litigation/redress charges and fraud losses

Income growth:
— Next-generation market analysis / predictive trading algorithms: 5–7% uplift on trading income
— Improved customer retention: 1–2% uplift on fees & commissions
— Improved customer acquisition through hyper-personalised marketing: 5–10% uplift on interest income and fees & commissions
— Tailored loan pricing based on credit-risk assessment: 2–3% increase in net interest income

Despite all the excitement around these estimated benefits, success will not be a walk in the park. It will depend on banks’ ability to integrate AI seamlessly into their day-to-day operations. Going forward, AI will rewrite many of the scenarios and use cases of the banking value chain. That doesn’t necessarily mean they will all be different, but most will certainly be enhanced, with impact spanning both the back-end and the front-end. Given that resources are limited, one of the main challenges will be identifying which ones to focus on. Factors such as #strategy, potential impact and fit with the existing skillset should guide the selection process.

Opinions: my own. Graphic source and use cases: Deloitte
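To make the ranges above concrete, here is a minimal sketch that applies two of the quoted Deloitte percentage ranges to a hypothetical bank P&L. The base figures ($2.0 bn staff cost, $5.0 bn net interest income) are invented for illustration and do not come from the report.

```python
# Illustrative only: applies the percentage ranges quoted above to an
# invented bank P&L. Base figures are hypothetical, not from Deloitte.
def uplift_range(base, low_pct, high_pct):
    """Return the (low, high) absolute impact of a percentage range."""
    return (base * low_pct / 100, base * high_pct / 100)

staff_cost = 2_000  # $m, hypothetical total staff cost
nii = 5_000         # $m, hypothetical net interest income

workforce_low, workforce_high = uplift_range(staff_cost, 0, 15)   # 0-15%
pricing_low, pricing_high = uplift_range(nii, 2, 3)               # 2-3%

print(f"Workforce efficiencies: ${workforce_low:.0f}m to ${workforce_high:.0f}m")
print(f"Loan-pricing uplift:    ${pricing_low:.0f}m to ${pricing_high:.0f}m")
```

Even at the low end of each range, the absolute figures explain why banks treat these estimates as material rather than marginal.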
AI's Impact on Business
-
This week at Fortune Brainstorm Tech, I sat down with leaders actually responsible for implementing AI at scale - Deloitte, Blackstone, Amex, Nike, Salesforce, and more. The headlines on AI adoption are usually surveys or hand-wavy anecdotes. The reality is far messier, far more technical, and - if you dig into the details - full of patterns worth stealing. A few that stood out:

(1) Problem > Platform
AI adoption stalls when it’s framed as “we need more AI.” It works when scoped to a bounded business problem with measurable P&L impact. Deloitte’s CTO admitted their first wave fizzled until they reframed around ROI-tied use cases.
➡️ Anchor every AI proposal in the metric you’ll move - not the model you’ll use.

(2) Fix the Plumbing
Every failed rollout traced back to weak foundations. American Express launched a knowledge assistant that collapsed under messy data - forcing a rebuild of their data layer. Painful, but it created cover to invest in infrastructure that lacked a flashy ROI. Today, thousands of travel counselors across 19 markets use AI daily - possible only because of that reset.
➡️ Treat data foundations as first-class citizens. If you’re still deferring middleware spend, AI will expose that gap brutally.

(3) Centralize Governance, Decentralize Application
Nike’s journey is a case study. Phase 1: centralized team → clean infra, no traction. Phase 2: federated into business-line teams → every project tied to outcomes → traction unlocked. The pattern is consistent: centralize standards, infra, and security; decentralize use-case development. If you only push from the top, you get a fast start but shallow impact. Only bottom-up ownership gives depth.
➡️ You can’t scale AI from a lab. It has to live where the business pain lives.

(4) Humans are Harder than the Tech
Leaders agreed: the “AI story” is really a people story. Fear of job loss slows adoption.
➡️ Frame AI as augmentation, not replacement. Culture change is the real rollout plan.
(5) Board Buy-In: Blessing and Burden
Boards are terrified of being left behind. Upside: funding and prioritization. Downside: unrealistic timelines and a “go faster” drumbeat. Leaders who navigated this best used board energy to unlock investment in cross-functional data/security initiatives.
➡️ Harness board FOMO as cover to fund the unsexy essentials. Don’t let it push you into AI theater.

(6) Success ≠ Moonshot, Failure ≠ Fatal
- Blackstone’s biggest win: micro-apps that save investors 1–2 hours/day. Not glamorous, but high ROI.
- Nike’s biggest miss: an immersive AI Olympic shoe designer - a fun demo, but no scale.
Incremental productivity gains compound. Moonshots inspire headlines, but rarely deliver durable value.
➡️ Bank small wins. They build credibility and capacity for bigger bets.

In enterprise AI, the model is the easy part. The hard part - and the difference between demo and value - is framing the right problem, building the data plumbing, designing the org, and bringing people along.
-
Agentic AI brings the biggest shift for enterprise operations in decades — and most companies are not ready: ⬇️

At IBM, we just released a new report showing how agentic AI is hitting core business functions such as Finance, HR, Procurement, Order-to-Cash, Customer Service, and Sales Support. AI agents will create a completely new operating model within companies: they will execute, escalate, and optimize across global workflows. Here are the six key findings of the report that stood out to me: ⬇️

1. Touchless workflows are no longer a vision — they’re being scaled
➜ By 2027, 85% of execs expect agentic systems to run major parts of operations — 24x7, touchless, and outcome-driven.

2. The human role is shifting — from doing to orchestrating
➜ Employees will no longer “execute the process.” They’ll manage outcomes, monitor agents, and handle complex exceptions. The paper calls this “digital labor management” — and it’s probably becoming a new profession.

3. Agentic AI isn’t a tool — it’s a new operating model
➜ Agents aren’t RPA bots or just deterministic workflows. Agents adapt, self-correct, and collaborate. They make decisions, route exceptions, and personalize interactions — with minimal oversight. In the future, they will be connected in multi-agent workflows and build new ecosystems within companies based on a new operating model.

4. Humans remain critical for context, creativity, and control
➜ AI agents execute — but it’s people who set direction, define ethical boundaries, and bring empathy to decisions. In the next operating model, human oversight is what makes AI work responsibly.

5. Tech alone is not enough
➜ 74% of execs cite skills gaps as their biggest blocker. Governance, data architecture, and identity management for AI agents must now be treated as core enterprise capabilities.

6. Being ready means rebuilding architecture for agent orchestration
➜ Real-time feedback loops, persistent memory, inter-agent coordination, and outcome governance aren’t nice-to-haves. They’re the foundation for scaling enterprise-grade agent systems.

If you’re building an AI-enabled operations function — this is essential reading. Full report and commentary in the comments. Enjoy!
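The “execute, escalate, and let humans handle exceptions” pattern can be sketched in a few lines. This is a toy illustration, not IBM’s design: the Invoice type, the approval threshold, and the rules are all invented to show the routing shape.

```python
# Toy sketch of the agentic "execute or escalate" pattern described above.
# The Invoice type, threshold, and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    po_match: bool  # does the invoice match a purchase order?

def process_invoice(inv: Invoice, auto_approve_limit: float = 10_000):
    """Agent path: handle routine cases touchlessly, escalate exceptions
    to the human orchestrator with a reason attached."""
    if inv.po_match and inv.amount <= auto_approve_limit:
        return ("approved", None)  # touchless path
    reason = "no PO match" if not inv.po_match else "over limit"
    return ("escalated", reason)   # lands in the human exception queue

queue = [
    Invoice("Acme", 4_200.0, True),
    Invoice("Globex", 18_500.0, True),
    Invoice("Initech", 900.0, False),
]
results = [process_invoice(i) for i in queue]
```

The point of the sketch is the split itself: the agent owns the high-volume routine path, while humans manage outcomes and the exceptions that carry real judgment.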
-
I had lunch with a founder last week who pitched me on their "AI for operations" platform. I stopped them 3 slides in.

General-purpose AI isn’t cutting it anymore. DeepSeek’s January breakthrough told us something important: efficiency & performance can coexist a lot earlier than most people thought. Startups are now excelling not by scale but by focus: they’re building vertical AI that deeply understands the messy, high-stakes workflows in sectors like healthcare, finance, and defense. Specialization is the new competitive advantage.

3 patterns I’m tracking across successful vertical AI startups:

First, they pick massive but high-friction, high-value workflows. “AI for sales” or “AI for operations” is too broad. What’s effective is focusing on urgent, complex processes, like:
- ConverzAI streamlining high-volume recruiting for staffing agencies
- Tennr automating messy admin work

Second, they build more than model wrappers. They create proprietary feedback loops and data assets that compound over time. This instrumentation is what turns a one-off tool into a durable, defensible product.

Third, they expand from beachheads of earned trust. They wedge into multi-billion-dollar industries by solving problems in the hardest, least glamorous corners. From there they earn the right to expand and unlock bigger TAM over time.

Choose one gnarly, high-value workflow and go deep. Otherwise you might get stopped three slides in too.
-
The 7 Layers of the LLM Stack — A Complete Map for Building with AI

When most people think of Large Language Models (LLMs), they picture just the model (like GPT, LLaMA, or Claude). But in reality, an entire stack of 7 interconnected layers is what makes enterprise-grade AI systems possible. Here’s how the stack unfolds:

🔴 Layer 1 – Data Sources & Acquisition
Everything begins with data pipelines. Web scraping, APIs, enterprise systems, logs, documents, IoT sensors — this is the raw material. Without diverse, high-quality data, everything above it crumbles.

🔵 Layer 2 – Data Preprocessing & Management
Raw data is rarely usable. This layer handles cleaning, normalization, chunking, embeddings, governance, and secure storage. Think of it as turning unstructured chaos into structured knowledge.

🟡 Layer 3 – Model Selection & Training
This is where the AI “brain” is formed:
- Choosing foundation models (GPT-4, LLaMA, etc.)
- Fine-tuning with LoRA/QLoRA
- Adding safety layers, distillation, and multimodal prep
- RLHF/RLAIF for alignment
It’s where raw capability is transformed into fit-for-purpose intelligence.

🟣 Layer 4 – Orchestration & Pipelines
Models don’t live in isolation. They need agents, memory, planning, guardrails, and workflows (LangChain, CrewAI, Airflow). This layer ensures your AI can interact with tools, APIs, and other agents in a safe, repeatable, and scalable way.

🟠 Layer 5 – Inference & Execution
The “runtime engine.” It covers real-time/batch inference, caching, rate limiting, multimodal support, determinism controls, and safety filters. This is what keeps systems both fast and reliable.

🔵 Layer 6 – Integration Layer
How does AI connect with the rest of the business? Through APIs, SDKs, connectors (Slack, Salesforce, Jira), identity/auth, billing, and event buses. This is what makes AI plug-and-play across enterprise ecosystems.
🔴 Layer 7 – Application Layer
Finally, the visible part: copilots, chatbots, RAG apps, workflow automation, forecasting, and domain-specific agents (healthcare, legal, support). This is where end users experience the value.

The key insight: LLMs are not standalone magic. They’re part of a layered architecture where each layer adds stability, trust, and scalability. Skip a layer, and your AI solution risks collapsing under real-world demands.

For builders, leaders, and enterprises, knowing where you sit in this stack clarifies: what to build yourself vs. what to integrate, where to invest for differentiation, and how to future-proof as the ecosystem evolves.
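Two of the layers above can be made concrete in a few lines. This toy sketch stands in for Layer 2 (chunking and embeddings) and Layer 5 (inference-time retrieval): the “embedding” is just a bag-of-words set instead of a real embedding model, and the chunk size and sample text are invented for the example.

```python
# Toy sketch of Layer 2 (chunking + embeddings) and Layer 5 (retrieval).
# A word-set stands in for a real embedding model; everything is invented
# for illustration, not a production pipeline.
def chunk(text: str, size: int = 8) -> list[str]:
    """Layer 2: split raw text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> set[str]:
    """Stand-in embedding: the set of lowercased words."""
    return set(text.lower().split())

def retrieve(query: str, chunks: list[str]) -> str:
    """Layer 5 sketch: return the chunk with the most word overlap."""
    q = embed(query)
    return max(chunks, key=lambda c: len(q & embed(c)))

doc = ("The integration layer connects AI to Slack and Salesforce. "
       "The application layer is where end users see copilots and RAG apps.")
chunks = chunk(doc)
best = retrieve("where do end users see copilots", chunks)
```

Swap the word-set for a real embedding model and the `max` for a vector index, and this is the skeleton that Layers 2 and 5 industrialize — which is exactly why skipping either layer shows up as retrieval failures in the application layer.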
-
BlackRock Researchers Develop AI Agent System for Stock Picks

Instead of relying on one frontier model, BlackRock built three AI “agents” that mimic different analyst roles:
• Fundamental Agent — parses 10-Ks and earnings reports
• Sentiment Agent — reviews news and analyst ratings
• Valuation Agent — studies prices, volatility, and volumes

Each agent analyzes a stock independently, then enters a round-robin debate. Disagreements are argued until the agents reach consensus on whether to BUY or SELL — a process designed to mimic an investment committee. The system runs on Microsoft’s AutoGen framework using GPT-4o, with custom tools for each agent: document parsing for 10-Ks, news summarization, and volatility calculators.

The agents’ recommendations change based on risk-tolerance settings. The same volatile stock might get a SELL from a risk-averse agent but a BUY from a risk-neutral one analyzing identical data.

Tested on 15 tech stocks over four months in 2024, the system outperformed both single agents and the benchmark in risk-neutral portfolios on a risk-adjusted basis (Sharpe ratios). In risk-averse portfolios, all approaches lagged the benchmark — since volatile tech names were excluded — but the multi-agent system showed smaller drawdowns than single agents.

The authors argue this setup improves analytical rigor and helps mitigate behavioral biases like overconfidence. While limited in scope and not a full portfolio optimizer, the study suggests specialized, debating agents may prove more reliable than general models for quantitative finance.

Takeaway: LLMs can be a Swiss Army knife in daily life, but for mathematical analysis, specialized agents may be the sharper tool.

Paper: AlphaAgents: Large Language Model based Multi-Agents for Equity Portfolio Constructions. Links below for more info.

Tianjiao (Tina) Z., Jingrao Lyu, Stokes Jones, Harrison Garber, Stefano Pasquali, Dhagash Mehta, Ph.D.
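The debate-to-consensus loop has a simple structural shape. This sketch is not the paper’s implementation — its agents are LLM-backed via AutoGen — here the agents are stubs with hand-written views, and only the round-robin consensus mechanism is illustrated.

```python
# Structural sketch of a round-robin debate loop, NOT the AlphaAgents
# implementation: agents are stubbed with fixed views, and dissenters
# simply concede to the majority one per round.
def debate(votes: dict[str, str], max_rounds: int = 3) -> str:
    """Run rounds until the vote is unanimous; each round, one
    dissenting agent flips to the majority view."""
    for _ in range(max_rounds):
        if len(set(votes.values())) == 1:
            break  # consensus reached
        majority = max(set(votes.values()), key=list(votes.values()).count)
        for agent, vote in votes.items():
            if vote != majority:       # first dissenter concedes this round
                votes[agent] = majority
                break
    return next(iter(votes.values()))

views = {"fundamental": "BUY", "sentiment": "SELL", "valuation": "BUY"}
decision = debate(views)  # the sentiment agent concedes; consensus is BUY
```

In the real system the “concession” step is an argued exchange between LLM agents rather than a mechanical flip, but the committee-style loop — independent views first, structured debate second, a single consensus out — is the same.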
-
I did my PhD on AI and copyright - and I said this would happen.

On 13 February, the Munich District Court dismissed a copyright claim over three logos generated by artificial intelligence, holding that the plaintiff’s prompts, however detailed and iterative, did not make him the author of the resulting images. The reasoning was grounded in the harmonised EU concept of a “work” as developed by the Court of Justice: copyright protects original intellectual creations that reflect the personality of their human author through free and creative choices. Giving instructions to an AI, the court found, is closer to commissioning a designer than to creating a work.

The decision was unsurprising. Across the EU, copyright law is deeply anthropocentric. French law protects “works of the mind.” Italian law requires the “creative character” of the author. The CJEU’s originality standard demands a human intellectual creation, reflecting the author’s personality through “free and creative choices”. The US has reached a similar position: the Copyright Office and federal courts hold that prompting alone cannot ground a claim to authorship - AI-generated works are not copyrightable.

Ireland, however, occupies an unusual position. Section 21(f) of the Copyright and Related Rights Act 2000 provides that, in the case of a computer-generated work, the author is “the person by whom the arrangements necessary for the creation of the work are undertaken.” Ironically for a piece of copyright legislation, we copied this provision from the UK Copyright, Designs and Patents Act 1988, drafted well before generative AI existed. While AI was mentioned in the House of Lords when the Act was passed, the technology anticipated bore no resemblance to the AI now producing text, images, and code at scale. The provision has never been tested in court.
Yet it remains on the Irish statute book, creating a framework under which AI-generated outputs could attract copyright protection even where no human creative choice shaped the expressive content. That sits uncomfortably alongside the CJEU’s harmonised originality standard, which, as the Munich court confirmed, requires human creative influence to be objectively identifiable in the final output. The UK is no longer bound by EU copyright harmonisation - Ireland is - and we no longer have the legislative weight of the UK behind us. Whether section 21(f) can be reconciled with the CJEU’s originality jurisprudence is a question policymakers should address before a court is forced to.

In 2024, the AI Advisory Council published its paper on the impact of AI on the creative sector, which I chaired. That paper recommended the Government reconsider this provision in light of Ireland’s EU obligations. The Munich ruling underlines that recommendation. As Ireland prepares to assume the EU Presidency later this year, it has both the opportunity and the credibility to lead on AI copyright reform. Our own regime would be a good place to start.
-
AI is going to provide yet another turboboost to SMB tech.

A long time ago, in the days before SaaS, the idea of starting a software company that sells to SMBs was almost unfundable by VCs. There were notable outliers like Intuit. But in general, the consensus was that the cost of acquiring and servicing SMB clients didn’t pencil out and the complexity wouldn’t scale. As such, many of the vendors that sold to SMBs were relatively small and regional.

SaaS unlocked a goldmine, with massive success stories like Shopify, HubSpot, Toast, Klaviyo and ServiceTitan, along with hundreds more. Vendors could cost-effectively acquire customers through inside sales and Product-Led Growth (PLG) channels. And they could service them through digital #CustomerSuccess methodologies. Meanwhile, clients no longer had to host and manage the software themselves - an arduous task for a small business.

As much of a boost as SaaS gave to SMB tech, AI is going to take it many steps further. The “last mile” of delivering value in SaaS still depends on the end customer’s ability to leverage the software. Customer Success Managers and digital adoption can help, but ultimately, success often hinges on the skillset of the SMB. This is problematic, because most small businesses are lean and mission-oriented. Running software is often low on their priority list.

But what if they don’t have to run the software? With AI agents, vendors can move, as VC Sarah Tavel famously coined, “from selling software to selling work”:
* Marketing platforms can give SMBs what they want - leads - and handle the rest automatically (e.g., campaign design, optimization, etc.)
* Recruiting tools can move from Applicant Tracking Systems to sourcing and screening hires for small businesses.
* Service products can go from tracking customer inquiries to handling inbound calls and emails automatically.
In the process, these firms can expand their Average Selling Prices (ASPs) and therefore their Total Addressable Market (TAM) radically, since they are delivering far more value to the end customer. If you’re in SMB tech, the good times are just getting started.
-
I recently had the opportunity to join Bain & Company's Winning with AI podcast to discuss how agentic AI is reshaping enterprise software and, more broadly, the economics of value creation. Over the past 25 years at Vista Equity Partners, we have navigated several structural shifts in technology, from on-premise to cloud to SaaS. Each required not only technical change but business model evolution. Agentic AI represents another such inflection point, with the potential to materially expand productivity, total addressable markets and long-term enterprise value. Today, across our portfolio of more than 90 enterprise software companies, we are focused on embedding AI into complex workflows, redesigning operating models and evolving pricing frameworks to reflect value delivered, not simply seats provisioned. Through our Agentic AI Factory, we are applying a disciplined, process-driven approach to help companies deploy AI agents responsibly and at scale. Sustainable advantage will not come from experimentation alone. It will come from leadership, organizational design and the willingness to evolve ahead of the market. I appreciated the thoughtful discussion and the chance to share our perspective. Listen to the full episode here: https://bit.ly/40GUQxI
Robert F. Smith on AI & Enterprise Software | Vista Equity Partners
-
OpenAI just quit the booking business. Expedia jumped 12%. Booking Holdings up 8%.

The headline everyone ran: AI won’t replace OTAs. Wrong read. OpenAI didn’t fail. They confirmed exactly where generative AI hits its ceiling in travel. And where agentic AI will need to pick up.

Google tried owning bookings from 2015 to 2022. “Book on Google.” Killed it. OpenAI tried the same play. Same outcome. The pattern is clear. Generative AI captures discovery, not transactions. Agentic AI might close that gap. But only with the right data underneath.

ChatGPT usage for travel research grew 124% in one year. But only 2% of travelers trust AI to book without human oversight. People browse inside AI. They buy through brands they trust. McKinsey calls this “selective delegation.” From the new Skift + McKinsey report on agentic AI in travel.

So the real question isn’t who builds the best agent. It’s whose data is clean, real-time, and structured enough for those agents to work with. Today, fewer than 15% of brands are built to show up in AI-generated answers. In hospitality, that number is probably lower.

The companies that will define the next era of travel distribution aren’t building agents. They’re building the data layer agents depend on. Everyone asks, “When will AI book my trip?” Better question: whose data will the agent trust enough to recommend?