If you want to build an AI strategy for your company, you first need to build a solid data infrastructure and enforce strict data hygiene. Getting your house in order is the foundation for delivering on any AI ambition.

The MIT Technology Review — based on insights from 205 C-level executives and data leaders — lays it out clearly: most companies do not face an AI problem. They face challenges in data quality, infrastructure, and risk management. As a result, many firms are still stuck in pilots, not production. Changing that requires strong data foundations, scalable architectures, trusted partners, and a shift in how companies think about creating real value with AI. Pilots are easy; scaling AI across the enterprise is hard.

Here are the key takeaways: ⬇️

1. 95% of companies are using AI — but 76% are stuck at just 1–3 use cases:
➜ The gap between ambition and execution is huge. Scaling AI across the full business will define competitive advantage over the next 24 months.

2. Data quality and liquidity are the real bottlenecks:
➜ Without curated, accessible, and trusted data, no AI strategy can succeed — no matter how powerful the models are.

3. Governance, security, and privacy are slowing AI deployment — and that is a good thing:
➜ 98% of executives say they would rather be safe than first. Trust, not speed, will win in the next AI wave.

4. Specialized, business-specific AI use cases will drive the most value:
➜ Generic generative AI (chatbots, text generation) is table stakes. True differentiation will come from custom, domain-specific applications.

5. Legacy systems are a major drag on AI ambitions:
➜ Firms sitting on fragmented, outdated infrastructure are finding that retrofitting AI into legacy systems is often more costly than building new foundations.

6. Cost realities are hitting hard:
➜ From GPUs to energy bills, AI is not cheap — and mid-sized companies face the biggest barriers. Smart firms are building realistic ROI models that go beyond hype.

Building a future-ready AI enterprise isn’t about chasing the next model release. It’s about solving the hard problems — data, infrastructure, governance, and ROI — today.
Navigating AI Competition
Explore top LinkedIn content from expert professionals.
-
The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight:

(i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain.
(ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders.
(iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI’s o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a “DeepSeek selloff”: The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.)

Here’s what I think DeepSeek has caused many people to realize:

China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead.

I’m thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.

Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. [...]

[Reached length limit. Full text: https://lnkd.in/grbFH4D6 ]
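To make the roughly 30x price gap above concrete, here is a back-of-the-envelope cost comparison at the quoted per-million-output-token rates. The monthly workload size is a hypothetical illustration, and input-token and infrastructure costs are ignored:

```python
# Back-of-the-envelope cost comparison at the output-token prices quoted
# in the post. The workload size is a hypothetical example.
O1_PER_M = 60.00   # OpenAI o1, $ per million output tokens (as quoted)
R1_PER_M = 2.19    # DeepSeek-R1, $ per million output tokens (as quoted)

monthly_output_tokens = 500_000_000   # hypothetical app: 500M output tokens/month
o1_cost = monthly_output_tokens / 1e6 * O1_PER_M
r1_cost = monthly_output_tokens / 1e6 * R1_PER_M

print(f"o1: ${o1_cost:,.0f}/mo, R1: ${r1_cost:,.0f}/mo, ratio: {o1_cost / r1_cost:.1f}x")
# -> o1: $30,000/mo, R1: $1,095/mo, ratio: 27.4x (the "nearly 30x" above)
```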
-
Last week, Chinese AI company DeepSeek shocked the AI industry with the release of R1, their open-sourced reasoning model. Yesterday, the stock market noticed too.

To help us understand the significance of this technological and geopolitical moment, I’ve co-authored a piece in The Washington Post about DeepSeek and open-source models. DeepSeek-R1, which matches models like OpenAI’s o1 in logic tasks including math and coding, costs only 2% of what OpenAI charges to run, and was built with far fewer resources. And most importantly, it’s an open-source model, meaning that DeepSeek has published the model’s weights, allowing anyone to use them to create and train their own AI models.

Up until now, closed-source models like those coming out of American tech companies have been winning the AI race. But my co-author Dhaval Adjodah and I argue in our piece that DeepSeek-R1 should make us question our assumption that closed-source models will necessarily remain dominant. Open-source models may become a key component of the AI ecosystem, and the United States should not cede leadership in this space.

As we conclude in our article: “America’s competitive edge has long relied on open science and collaboration across industry, academia and government. We should embrace the possibility that open science might once again fuel American dynamism in the age of AI.”

It was a pleasure to collaborate on this article with Dhaval, whose company MakerMaker.AI is on the cutting edge of AI technology, building AI agents that build AI agents.

What do you think about the future of open vs. closed-source AI? Read the full op-ed here: https://lnkd.in/eXK5YdWk.
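As a concrete illustration of what "published the model's weights" enables: with an open-weight release, anyone can download a checkpoint and run or fine-tune it locally. Below is a minimal sketch using the Hugging Face transformers library with a distilled R1-family checkpoint small enough to run outside a datacenter; the model id reflects the Hugging Face hub at the time of writing and may change:

```python
# Minimal sketch of using published open weights locally via Hugging Face
# transformers. The distilled 1.5B R1-family checkpoint is used here
# because the full R1 requires datacenter-scale hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # one of the published R1-family checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Reasoning models emit a long chain of thought before the final answer.
inputs = tokenizer("Prove that the square root of 2 is irrational.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```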
-
Sure, anybody can call OpenAI APIs to access cutting-edge models, but let’s be real: the true opportunity for businesses isn’t just plugging into those APIs. It’s about leveraging your most unique competitive advantage: your data.

Data is the foundation of any successful AI system. Yet the journey from raw data to actual value has many challenges:

1. Not enough data? Your model won’t generalize.
2. Poor-quality data? Expect poor-quality results.
3. Nonrepresentative data? Say hello to biased predictions (a minimal check for this is sketched after this post).
4. Too many irrelevant features? You’re adding noise, not value.
5. Not enough diversity? Your model won’t be robust.

Garbage in, garbage out. Even the most advanced model is only as good as the data it learns from.

For businesses, the opportunity lies in building data pipelines tailored to their unique context — clean, representative, and enriched with meaningful features. This is how you create an AI that’s not just smart, but aligned with your business goals. The frontier isn’t just in using AI. It’s in using AI to transform your data into a moat your competitors can’t cross.
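To make the "nonrepresentative data" failure mode in point 3 concrete, here is a minimal sketch of a representativeness check that compares training data against production traffic before a model ships. The feature, numbers, and threshold are hypothetical placeholders; production systems typically run PSI or KS tests per feature:

```python
# A minimal representativeness check: compare a candidate training set
# against a sample of production traffic before training. The feature
# and the ~0.1 rule of thumb are hypothetical illustrations.
import numpy as np

def mean_shift(train: np.ndarray, prod: np.ndarray) -> float:
    # Standardized difference of means; values well above ~0.1 suggest
    # the training data may not represent production.
    pooled_std = np.sqrt((train.std() ** 2 + prod.std() ** 2) / 2)
    return abs(train.mean() - prod.mean()) / (pooled_std + 1e-9)

rng = np.random.default_rng(0)
train_ages = rng.normal(35, 8, 10_000)    # who the model was trained on
prod_ages = rng.normal(48, 11, 10_000)    # who actually shows up in production
print(f"standardized mean shift: {mean_shift(train_ages, prod_ages):.2f}")
# -> ~1.35, far above threshold: train on this and expect biased predictions
```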
-
After months of rumors, OpenAI finally made its play to own the browser, the most coveted chokepoint in a user’s digital life. Atlas is a web browser with a built-in ChatGPT sidebar that reads, summarizes, compares, and rewrites pages. Agents can execute multi-step tasks like travel research or shopping, moving across sites with user permission.

The browser has long been the front door to the internet - and, for Google, the key to its kingdom. Chrome dominates global market share, gathering user data for ad targeting and funneling traffic into Search, Google's profit engine. Weeks ago, Google infused Gemini directly into Chrome, collapsing its assistant into the same layer. Now, OpenAI wants a slice of that gateway. It’s an all-out race for the interface of intent.

The trillion dollar question is: Can OpenAI - with 800M weekly active users and unmatched cultural mindshare - convince people to switch from Chrome? Or will Chrome’s entrenched defaults keep Atlas a sideshow?

2 years ago, Google was seen as the company that wrote the transformers paper but failed to capitalize on it - the giant that blew its lead. That story’s been rewritten. Today, Google’s models are as good or better, often cheaper. Case in point: OpenAI’s Sora 2 launched to massive fanfare… a week later, Veo 3.1 quietly took the top spot. Still, narrative matters. While Google may be back on top technically, OpenAI still owns the story. Nobody markets innovation with more drama. This will be an interesting match-up.

Honorable mention: Perplexity - their taste and execution are elite. They pioneered a UX with citations and follow-on questions, embedded checkout in chat, and were first to market with their AI-native browser, Comet. Their Achilles’ heel? Distribution. Every feature they ship gets copied in weeks. It’s a constant paddle-to-stay-afloat game against giants who have reach baked in.

Then there’s Apple. Rumors swirl of a Safari overhaul, but if their pace with Siri is any indication, the race may be over before they enter the arena.

Zoom out and this whole fight is less about browsers, more about collapse - not in the doomer sense, but in the “everything’s merging” sense. Assistants, operating systems, and browsers were once distinct. Now they’re fusing. The assistant lives in the browser. The browser behaves like an OS. The OS politely steps aside. What remains is a persistent digital self - context-rich, portable, adaptive.

When Jobs unveiled the iPhone in ’07, he said: “An iPod. A phone. An Internet communicator. Are you getting it? These are not three separate devices.” 18 years later, it’s happening again. Only this time, it’s software collapsing into something new: a digital twin that travels with you across tools, devices, and contexts, orchestrating your life. The user interface dissolves. What’s left is the relationship between you and the intelligence that knows you.

That’s why this is such a big deal.
-
My biggest fear as an AI startup founder? Getting crushed by giants before proving our value.

6 counterintuitive strategies that helped CrewAI win against better-funded competitors:

When I started CrewAI, we faced tech giants with unlimited resources and VC-backed startups with massive teams. I was just a Brazilian developer with an open-source project. Today, we power 50M+ agents monthly and partner with IBM, Cloudera, PwC, and NVIDIA.

1. Turn "small" into speed
While others debated in meetings, we shipped product. Our size became our superpower - we could experiment faster than anyone else.

2. Build in public, strategically
We shared every win and lesson learned. This wasn't about transparency. It was about creating a movement people wanted to join. Our community became our strongest evangelists.

3. Education drives adoption
Two courses with Andrew Ng on DeepLearning.AI changed everything. Instead of pushing features, we taught AI agent orchestration (a minimal example of the pattern follows this post). Our customers became champions because they truly understood the value.

4. Focus on tomorrow's problems
We looked 3-5 years ahead: Companies will deploy thousands of AI agents. They'll need ways to manage this complexity. While others chase today's features, we're building the control plane for the agentic future.

5. Be a partner, not a vendor
Enterprise leaders don't want another tool. They want partners who share their vision for AI transformation. This mindset attracted IBM and PwC as partners.

6. Let competition fuel growth
Each new competitor made us stronger:
• Their presence validated our market
• Their size made us more agile
• Their complexity highlighted our simplicity

The key insight? Today's AI winners aren't just building tools. They're preparing for what's next. Soon, every enterprise will run hundreds of AI agents handling sales, support, content, and analytics. How will you manage them all?

That's why we built CrewAI - tomorrow's AI infrastructure to help enterprises orchestrate agents, ensure compliance, and scale securely.

Want to future-proof your AI strategy? DM me or follow @joaomdmoura for insights on the agentic future. ⚡
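For readers who haven't seen the orchestration pattern referenced in point 3, here is a minimal sketch of a two-agent CrewAI crew. The roles, goals, and task text are illustrative, and constructor arguments may vary across CrewAI versions, so treat this as a sketch rather than canonical usage:

```python
# A minimal CrewAI-style orchestration: two agents, two sequential tasks.
# Roles, goals, and task text are illustrative placeholders; check the
# current CrewAI docs for exact parameters.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Market Researcher",
    goal="Summarize how enterprises are adopting AI agents",
    backstory="An analyst who tracks enterprise AI deployments.",
)
writer = Agent(
    role="Report Writer",
    goal="Turn research notes into a one-page executive brief",
    backstory="A concise business writer.",
)

research = Task(
    description="Collect three trends in enterprise agent adoption.",
    expected_output="Three bullet points, one sentence each.",
    agent=researcher,
)
brief = Task(
    description="Write an executive brief from the research bullets.",
    expected_output="A one-page brief.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, brief])
result = crew.kickoff()   # runs tasks in order, passing context between agents
print(result)
```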
-
Trump wants 15% of NVIDIA's China revenue. Beijing wants zero dependence on American chips. DeepSeek now trains on Huawei hardware. Alibaba built its own AI processor.

The real challenge for NVIDIA isn't Washington. It's irrelevance.

The chip containment strategy isn't working. For most Chinese companies, switching from NVIDIA still means accepting worse performance. But that's changing. Once you combine software breakthroughs with local hardware, the gap shrinks fast. DeepSeek shocked everyone with R1, achieving OpenAI performance at a fraction of the cost through algorithmic innovations. Now they're moving to Huawei chips for R2, showing the hybrid approach works.

The numbers tell the real story: China produces 23,695 AI papers annually vs America's 6,378. They file 35,423 AI patents vs 2,678 from US, UK, Canada, Japan, and South Korea combined. Half the world's AI researchers are in China, creating most leading open-source models.

To compete, America needs to invest in fundamentals, not restrictions. Quantum computing, nuclear-powered data centers, attracting global talent. These take decades, not election cycles.

DeepSeek's shift to Huawei isn't just one company's decision. It's a preview. Alibaba's new chip works with NVIDIA's CUDA platform today, but that's transitional. Cambricon's revenue hit $247 million last quarter on domestic demand alone. Their market cap exceeds $87 billion despite warnings about "irrational exuberance."

When chips are "good enough" and software is clever enough, dependence becomes choice.

Jensen Huang said it best: "To win the AI race, U.S. industry must earn the support of developers everywhere, including China." He estimates China's AI market at $50 billion this year, growing 50% annually. Trump wants 15% of that. Beijing wants 0% dependence.

When you block the front door, innovation finds the back window.

TAKEAWAY
Getting to technological supremacy is the promised land for superpowers. Washington wants quick wins, usually through restrictions that backfire. China isn't trying to match NVIDIA anymore. They're changing what "good enough" means. When half the world's AI researchers decide Huawei chips running clever algorithms IS good enough, being "the best" becomes irrelevant.

America knew the fundamentals playbook once. Quantum computing, nuclear-powered data centers, attracting global talent. These take decades, not election cycles. But we're debating export controls while they're shipping products.

P.S. The biggest problem with export controls is their reverse network effect. The more restrictions you add, the faster alternatives develop. When "good enough" becomes the new standard, being the best becomes irrelevant.

(See my first comment for why this pattern was inevitable...)
-
Two months ago, the consensus was that Apple had "lost" the AI race. The narrative was that their lack of a frontier model was a failure of innovation.

On CNBC in November last year, I argued the opposite: Apple’s silence was not weakness. It was discipline. While competitors were locking themselves into massive capital expenditure cycles to build intelligence, Apple was waiting for the market to mature enough to buy it. As I noted in this clip: "The companies racing ahead on AI may be running faster...but Apple is the only one not running into a margin trap."

Last week's news that Apple will license Gemini for ~$1B validates that strategy. They effectively swapped tens of billions in CapEx risk for a predictable, fixed-cost OpEx line item. They did not lose the race. They just refused to run a race that did not make economic sense.

Tomorrow, I am publishing a full breakdown of this new dynamic, which is a concept I call "Reverse TAC"...and why the Apple-Google deal marks the end of the "Training Era" and the beginning of the "Inference Economy."

Start with the clip below. The math drops tomorrow.

#Apple #Google #AI #InferenceEconomics #Strategy #TechInvesting
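To see why swapping CapEx for OpEx can be attractive, here is a rough, purely illustrative comparison. Every figure except the ~$1B annual licensing fee mentioned in the post is a hypothetical placeholder, not a reported number:

```python
# Rough CapEx-vs-OpEx comparison for building vs. licensing frontier
# intelligence. All figures are hypothetical illustrations except the
# ~$1B/year licensing fee cited in the post.
CAPEX_BUILD = 20_000_000_000        # hypothetical: training clusters + model R&D
USEFUL_LIFE_YEARS = 4               # hypothetical: frontier models depreciate fast
ANNUAL_OPEX_BUILD = 2_000_000_000   # hypothetical: power, staff, retraining
LICENSE_FEE = 1_000_000_000         # ~$1B/year Gemini licensing (from the post)

annual_build_cost = CAPEX_BUILD / USEFUL_LIFE_YEARS + ANNUAL_OPEX_BUILD
print(f"build: ${annual_build_cost / 1e9:.0f}B/yr vs license: ${LICENSE_FEE / 1e9:.0f}B/yr")
# -> build: $7B/yr vs license: $1B/yr, with the technology risk shifted
#    to the supplier: the "margin trap" the post says Apple avoided.
```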
-
The AI landscape has rapidly evolved beyond just large language models. Today’s systems rely on a wide range of foundational model types—each designed for specific modalities, tasks, and constraints. This visual covers 12 foundational AI models and their core workflows. This is intended for engineers, researchers, and builders who want a structured view of the ecosystem.

Here’s a breakdown of what’s included:

→ LLM (Large Language Models) – GPT, LLaMA
Trained using transformer architecture to generate coherent, human-like text. The workflow involves data collection, tokenization, pattern learning, fine-tuning, and deployment.

→ SLM (Small Language Models) – Phi, TinyLLaMA
Lightweight and efficient for on-device or low-resource environments. Focuses on model compression, compact training, and benchmarking.

→ VLM (Vision-Language Models) – CLIP, Flamingo
Learns joint understanding between images and text. Ideal for tasks like image captioning and visual QA.

→ MLLM (Multimodal Large Language Models) – Gemini
Designed to process and align multiple modalities such as text, image, audio, and video.

→ LAM (Large Action Models) – RT-2, InstructDiffusion
Generates sequences of executable actions using behavioral and reinforcement learning data.

→ LRM (Large Reasoning Models) – DeepSeek-R1
Structured for tool use, chain-of-thought reasoning, and test-time modularity in logic-heavy tasks.

→ MoE (Mixture of Experts) – Mixtral
Activates a subset of specialized models per input to reduce computation cost and improve performance (a minimal routing sketch follows this post).

→ SSM (State Space Models) – Mamba, RetNet
Efficient at long-context sequence modeling using dynamic systems and parallelism.

→ RNN (Recurrent Neural Networks) – LSTM, GRU
Uses hidden states to process time-dependent data, maintaining memory across input sequences.

→ CNN (Convolutional Neural Networks) – EfficientNet
Learns spatial patterns in image data via convolution layers, pooling, and hierarchical stacking.

→ SAM (Segment Anything Model) – Meta
Segments objects from images based on prompts (text, points, or boxes), making it useful for dynamic image understanding.

→ LNN (Liquid Neural Networks) – LFMs
Leverages differential equations to adapt in real-time, supporting applications in time-sensitive environments.

This chart is designed to help you understand not just what these models are, but how they work under the hood. If you're working in AI, this foundational understanding is crucial for making informed architectural decisions.
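To ground the MoE entry above, here is a minimal sketch of top-k expert routing, the core trick behind models like Mixtral. Sizes are arbitrary, and real MoE layers add load-balancing losses, capacity limits, and fused kernels:

```python
# A minimal top-k Mixture-of-Experts layer: a router scores experts per
# token and only the k highest-scoring experts run, so compute stays
# roughly constant as expert count grows. Sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, n_experts)   # router: one score per expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, dim)
        scores = self.gate(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # keep only k experts per token
        weights = F.softmax(weights, dim=-1)         # renormalize the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.k):               # run each token through its chosen experts
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

moe = TopKMoE()
tokens = torch.randn(16, 512)
print(moe(tokens).shape)   # torch.Size([16, 512]); only 2 of 8 experts ran per token
```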
-
As I pause to absorb the conversations, perspectives, and energy from the recent MTN Group Leadership Gathering, I’ve found myself reflecting deeply on the evolving role of AI across our business and the 16 markets we serve. What follows are some of my personal reflections—shaped by the insights, challenges, and possibilities that surfaced when our leadership came together under one roof.

1. The real threat of the AI era is not disruption — it is delay.
Every major technological shift has reshaped economic leadership. AI is doing so faster than any before it — and hesitation now carries exponential cost.

2. AI is not a layer we add — it is a capability we engineer into the enterprise.
Its real power emerges when intelligence is embedded across networks, operations, customer platforms, and decision engines. This is not about isolated tools, but about creating a connected, learning digital nervous system.

3. Data is no longer exhaust — it is economic capital.
With nearly 94% of the world’s data still untapped, those who activate it will build the strongest data moat of the future.

4. Competitive advantage will be defined by intelligence velocity.
Organizations that learn faster consistently outcompete those that merely grow bigger.

5. Africa stands at a rare leapfrogging moment.
Generative AI alone represents an estimated $100 billion annual opportunity — a chance to reset growth trajectories rather than incrementally improve them.

6. Compute sovereignty is economic sovereignty.
High-performance AI data centers are the factories of the modern age. Without local compute, nations become consumers of intelligence instead of producers of it.

7. Open-source LLMs have democratized intelligence – platforms will unlock its value.
Access to advanced models is no longer the barrier. The real differentiator is the ability to integrate, secure, govern, and scale them across enterprise systems through standardized architectures.

8. Impact comes from embedding AI into core operations.
From network optimization and fraud prevention to customer experience and supply-chain orchestration, value is realized when AI becomes part of everyday decision flows — not when it remains confined to pilots.

9. Real value beats experimentation theater.
From biometric livestock identification reducing theft by up to 90% to national digital registries creating tens of thousands of jobs, AI proves its worth when applied at scale.

10. Today’s AI decisions will shape decades of competitiveness.
Our role is to architect platforms that scale intelligence responsibly, cultivate talent that can sustain innovation, and ensure technology becomes a durable source of competitive advantage for decades to come.