<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
  <channel>
    <title>The Dataiku Blog</title>
    <link>https://www.dataiku.com/stories/blog</link>
    <description>Discover how Dataiku empowers teams across industries to leverage AI, enhance efficiency, and unlock insights through innovative solutions and robust capabilities.</description>
    <language>en-us</language>
    <pubDate>Thu, 16 Apr 2026 18:58:10 GMT</pubDate>
    <dc:date>2026-04-16T18:58:10Z</dc:date>
    <dc:language>en-us</dc:language>
    <item>
      <title>Practical transformation: how Dataiku is modernizing the actuarial workflow</title>
      <link>https://www.dataiku.com/stories/blog/how-dataiku-is-modernizing-the-actuarial-workflow</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/how-dataiku-is-modernizing-the-actuarial-workflow" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/cropped_social_ready.jpg" alt="office buildings and people" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/how-dataiku-is-modernizing-the-actuarial-workflow" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/cropped_social_ready.jpg" alt="office buildings and people" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fhow-dataiku-is-modernizing-the-actuarial-workflow&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI Governance &amp; Architecture</category>
      <category>Use Cases</category>
      <category>Financial Services &amp; Insurance</category>
      <pubDate>Thu, 16 Apr 2026 18:58:10 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/how-dataiku-is-modernizing-the-actuarial-workflow</guid>
      <dc:date>2026-04-16T18:58:10Z</dc:date>
      <dc:creator>John McCambridge, Abi Edwards</dc:creator>
    </item>
    <item>
      <title>Recursive AI: when models start managing their own context</title>
      <link>https://www.dataiku.com/stories/blog/recursive-ai</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/recursive-ai" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/resized_image-1.jpg" alt="circle loop shape architecture" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;There's a bottleneck at the heart of how large language models process information, and it hasn't gone away despite the rapid expansion of context windows. Models receive a prompt, process it in a single forward pass, and generate a response. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;Everything the model knows about a problem must fit within that prompt: the documents, the conversation history, the instructions, the examples. &lt;/span&gt;&lt;a href="https://www.dataiku.com/stories/blog/context-engineering" style="font-size: 18px; font-weight: 400;"&gt;&lt;u&gt;&lt;span style="color: #1155cc;"&gt;The context window&lt;/span&gt;&lt;/u&gt;&lt;/a&gt; is both the model's entire view of the world and its only working space. Expand it as much as you like; the fundamental architecture remains the same.&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;This creates predictable failure modes. When context is sparse, models perform well. As it fills up, performance degrades in ways that are hard to predict and diagnose. Relevant information gets lost in the middle. Contradictions go unresolved. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;The model's attention, metaphorically speaking, is spread too thin. AI researchers have a colloquial term for what happens when you push too much into a context: context rot. The outputs don't catastrophically fail; they just get progressively worse in ways that are easy to miss until they become impossible to ignore.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;A research direction that has attracted serious interest proposes a different architectural approach: Rather than loading information into the model, let the model navigate to the information it needs. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;These are sometimes called recursive language models, and while they remain experimental in their most ambitious forms, the principles behind them are already influencing how production &lt;/span&gt;&lt;a href="https://www.dataiku.com/stories/blog/agent-memory"&gt;&lt;u&gt;&lt;span style="color: #1155cc;"&gt;AI systems&lt;/span&gt;&lt;/u&gt;&lt;/a&gt;&lt;span&gt; are designed.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/recursive-ai" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/resized_image-1.jpg" alt="circle loop shape architecture" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;There's a bottleneck at the heart of how large language models process information, and it hasn't gone away despite the rapid expansion of context windows. Models receive a prompt, process it in a single forward pass, and generate a response. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;Everything the model knows about a problem must fit within that prompt: the documents, the conversation history, the instructions, the examples. &lt;/span&gt;&lt;a href="https://www.dataiku.com/stories/blog/context-engineering" style="font-size: 18px; font-weight: 400;"&gt;&lt;u&gt;&lt;span style="color: #1155cc;"&gt;The context window&lt;/span&gt;&lt;/u&gt;&lt;/a&gt; is both the model's entire view of the world and its only working space. Expand it as much as you like; the fundamental architecture remains the same.&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;This creates predictable failure modes. When context is sparse, models perform well. As it fills up, performance degrades in ways that are hard to predict and diagnose. Relevant information gets lost in the middle. Contradictions go unresolved. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;The model's attention, metaphorically speaking, is spread too thin. AI researchers have a colloquial term for what happens when you push too much into a context: context rot. The outputs don't catastrophically fail; they just get progressively worse in ways that are easy to miss until they become impossible to ignore.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;A research direction that has attracted serious interest proposes a different architectural approach: Rather than loading information into the model, let the model navigate to the information it needs. &lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;These are sometimes called recursive language models, and while they remain experimental in their most ambitious forms, the principles behind them are already influencing how production &lt;/span&gt;&lt;a href="https://www.dataiku.com/stories/blog/agent-memory"&gt;&lt;u&gt;&lt;span style="color: #1155cc;"&gt;AI systems&lt;/span&gt;&lt;/u&gt;&lt;/a&gt;&lt;span&gt; are designed.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Frecursive-ai&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>GenAI &amp; Agents</category>
      <pubDate>Thu, 09 Apr 2026 21:07:00 GMT</pubDate>
      <author>julia.tran@dataiku.com (Julia Tran)</author>
      <guid>https://www.dataiku.com/stories/blog/recursive-ai</guid>
      <dc:date>2026-04-09T21:07:00Z</dc:date>
    </item>
    <item>
      <title>Decision 4 of 7: when AI stack choices become career consequences</title>
      <link>https://www.dataiku.com/stories/blog/ai-stack-becomes-career-consequences</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/ai-stack-becomes-career-consequences" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Decision%20%234.png" alt="chart demonstrating AI stack choices" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;em&gt;&lt;span&gt;This is the fourth installment in our seven-part breakdown of insights from the report, "7 career-making AI decisions for CIOs in 2026." &lt;a href="https://pages.dataiku.com/cio-ai-decisions"&gt;&lt;u&gt;&lt;span style="color: #3edab2;"&gt;&lt;span style="color: #3edab2;"&gt;Read the full report here&lt;/span&gt;&lt;/span&gt;&lt;/u&gt;&lt;/a&gt;.&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/ai-stack-becomes-career-consequences" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Decision%20%234.png" alt="chart demonstrating AI stack choices" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;em&gt;&lt;span&gt;This is the fourth installment in our seven-part breakdown of insights from the report, "7 career-making AI decisions for CIOs in 2026." &lt;a href="https://pages.dataiku.com/cio-ai-decisions"&gt;&lt;u&gt;&lt;span style="color: #3edab2;"&gt;&lt;span style="color: #3edab2;"&gt;Read the full report here&lt;/span&gt;&lt;/span&gt;&lt;/u&gt;&lt;/a&gt;.&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fai-stack-becomes-career-consequences&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI Governance &amp; Architecture</category>
      <category>Reports &amp; Guides</category>
      <pubDate>Tue, 07 Apr 2026 13:00:05 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/ai-stack-becomes-career-consequences</guid>
      <dc:date>2026-04-07T13:00:05Z</dc:date>
      <dc:creator>Julia Berman</dc:creator>
    </item>
    <item>
      <title>Stop sequencing AI behind your data transformation</title>
      <link>https://www.dataiku.com/stories/blog/sequencing-ai-behind-data-transformation</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/sequencing-ai-behind-data-transformation" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/image1-Apr-01-2026-01-56-08-0444-PM.jpg" alt="Abstract data grid shadows suggesting AI infrastructure complexity" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Despite &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;88% of enterprises&lt;/a&gt; now deploying AI in at least one business function, &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;only 39% report&lt;/a&gt; any measurable impact on their bottom line &lt;span style="color: #0a0a0a; background-color: #ffffff;"&gt;—&lt;/span&gt; and most of those say it’s &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;less than 5%&lt;/a&gt;. The main culprit isn’t technology, talent, or funding; it’s the persistent “learning gap” and the widespread belief that AI should only follow after infrastructure modernization.&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/sequencing-ai-behind-data-transformation" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/image1-Apr-01-2026-01-56-08-0444-PM.jpg" alt="Abstract data grid shadows suggesting AI infrastructure complexity" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;Despite &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;88% of enterprises&lt;/a&gt; now deploying AI in at least one business function, &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;only 39% report&lt;/a&gt; any measurable impact on their bottom line &lt;span style="color: #0a0a0a; background-color: #ffffff;"&gt;—&lt;/span&gt; and most of those say it’s &lt;a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai"&gt;less than 5%&lt;/a&gt;. The main culprit isn’t technology, talent, or funding; it’s the persistent “learning gap” and the widespread belief that AI should only follow after infrastructure modernization.&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fsequencing-ai-behind-data-transformation&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Analytics Modernization</category>
      <category>AI Insights</category>
      <pubDate>Thu, 02 Apr 2026 13:00:02 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/sequencing-ai-behind-data-transformation</guid>
      <dc:date>2026-04-02T13:00:02Z</dc:date>
      <dc:creator>Faye Murray</dc:creator>
    </item>
    <item>
      <title>3 AI trends reshaping healthcare and life sciences in 2026</title>
      <link>https://www.dataiku.com/stories/blog/healthcare-life-sciences-ai-trends-2026</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/healthcare-life-sciences-ai-trends-2026" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/THUMBNAILS%202026%20AI%20in%20Healthcare%20%26%20Life%20Sciences%20Trends.png" alt="AI data analytics dashboard with lab test tubes in healthcare" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span style="color: #1f1f1f;"&gt;By the end of 2026, the "AI honeymoon" will be officially concluded. For healthcare and life sciences entities, AI is no longer an exploratory field; it has become deeply embedded in daily operations, critical processes, tasks, and deliverables at an industrial scale.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/healthcare-life-sciences-ai-trends-2026" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/THUMBNAILS%202026%20AI%20in%20Healthcare%20%26%20Life%20Sciences%20Trends.png" alt="AI data analytics dashboard with lab test tubes in healthcare" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span style="color: #1f1f1f;"&gt;By the end of 2026, the "AI honeymoon" will be officially concluded. For healthcare and life sciences entities, AI is no longer an exploratory field; it has become deeply embedded in daily operations, critical processes, tasks, and deliverables at an industrial scale.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fhealthcare-life-sciences-ai-trends-2026&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>GenAI &amp; Agents</category>
      <category>Life Sciences</category>
      <pubDate>Wed, 01 Apr 2026 13:00:03 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/healthcare-life-sciences-ai-trends-2026</guid>
      <dc:date>2026-04-01T13:00:03Z</dc:date>
      <dc:creator>Michael Attlan</dc:creator>
    </item>
    <item>
      <title>Agent memory: the missing layer in enterprise AI systems</title>
      <link>https://www.dataiku.com/stories/blog/agent-memory</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/agent-memory" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Agent%20memory_%20the%20missing%20layer%20in%20enterprise%20AI%20systems-2.png" alt="technology stack on blue background" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;Ask most people to describe the memory of a large language model and they'll point to the &lt;/span&gt;&lt;span style="color: #00ffff;"&gt;&lt;a href="https://www.dataiku.com/stories/blog/context-engineering" style="color: #00ffff;"&gt;&lt;u&gt;context window&lt;/u&gt;&lt;/a&gt;&lt;/span&gt;&lt;span&gt;, the span of text the model can see during a given interaction. That's not wrong, but it's incomplete in a way that matters enormously for enterprise AI.&lt;br&gt;&lt;br&gt;The context window is more like working memory in the psychological sense: what's active right now. What LLMs lack by default is anything resembling long-term memory. When a session ends, nothing persists. The next conversation begins from zero.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;For consumer applications, a chatbot that helps draft an email or an assistant that answers a one-off question, this limitation is manageable. Users expect to re-explain themselves. The cost of forgetting is low. But enterprise AI workflows are increasingly different in character.&lt;br&gt;&lt;br&gt;They involve agents that execute multi-step tasks over hours or days, assistants that interact repeatedly with the same users across weeks and months, and systems that are supposed to get better over time by learning from past outcomes. For these applications, statelessness isn't an inconvenience. It's a fundamental architectural gap.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/agent-memory" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Agent%20memory_%20the%20missing%20layer%20in%20enterprise%20AI%20systems-2.png" alt="technology stack on blue background" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;Ask most people to describe the memory of a large language model and they'll point to the &lt;/span&gt;&lt;span style="color: #00ffff;"&gt;&lt;a href="https://www.dataiku.com/stories/blog/context-engineering" style="color: #00ffff;"&gt;&lt;u&gt;context window&lt;/u&gt;&lt;/a&gt;&lt;/span&gt;&lt;span&gt;, the span of text the model can see during a given interaction. That's not wrong, but it's incomplete in a way that matters enormously for enterprise AI.&lt;br&gt;&lt;br&gt;The context window is more like working memory in the psychological sense: what's active right now. What LLMs lack by default is anything resembling long-term memory. When a session ends, nothing persists. The next conversation begins from zero.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;For consumer applications, a chatbot that helps draft an email or an assistant that answers a one-off question, this limitation is manageable. Users expect to re-explain themselves. The cost of forgetting is low. But enterprise AI workflows are increasingly different in character.&lt;br&gt;&lt;br&gt;They involve agents that execute multi-step tasks over hours or days, assistants that interact repeatedly with the same users across weeks and months, and systems that are supposed to get better over time by learning from past outcomes. For these applications, statelessness isn't an inconvenience. It's a fundamental architectural gap.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fagent-memory&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>GenAI &amp; Agents</category>
      <pubDate>Tue, 31 Mar 2026 18:48:04 GMT</pubDate>
      <author>julia.tran@dataiku.com (Julia Tran)</author>
      <guid>https://www.dataiku.com/stories/blog/agent-memory</guid>
      <dc:date>2026-03-31T18:48:04Z</dc:date>
    </item>
    <item>
      <title>Enterprise machine learning platforms: a buyer's guide for 2026</title>
      <link>https://www.dataiku.com/stories/blog/enterprise-machine-learning-platforms-how-to-choose</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/enterprise-machine-learning-platforms-how-to-choose" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/ML%20platforms%20in%202026_%20how%20to%20choose%20the%20right%20stack.png" alt="enterprise machine learning platforms 2026" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="line-height: 1.38;"&gt;&lt;span&gt;In many enterprises, machine learning is already in production. Models are trained by data science teams, deployed through cloud pipelines, surfaced in dashboards, and reviewed periodically for compliance. Each part works. But together, they rarely form a coherent system.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/enterprise-machine-learning-platforms-how-to-choose" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/ML%20platforms%20in%202026_%20how%20to%20choose%20the%20right%20stack.png" alt="enterprise machine learning platforms 2026" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p style="line-height: 1.38;"&gt;&lt;span&gt;In many enterprises, machine learning is already in production. Models are trained by data science teams, deployed through cloud pipelines, surfaced in dashboards, and reviewed periodically for compliance. Each part works. But together, they rarely form a coherent system.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fenterprise-machine-learning-platforms-how-to-choose&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Machine Learning</category>
      <pubDate>Mon, 30 Mar 2026 13:00:03 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/enterprise-machine-learning-platforms-how-to-choose</guid>
      <dc:date>2026-03-30T13:00:03Z</dc:date>
      <dc:creator>Mark Abramowitz</dc:creator>
    </item>
    <item>
      <title>Decision 3 of 7: when AI agents require real accountability</title>
      <link>https://www.dataiku.com/stories/blog/ai-agents-require-real-accountability</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/ai-agents-require-real-accountability" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Decision%20%233.png" alt="cio decisions 2026 AI accountability " class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;em&gt;&lt;span&gt;This is the third installment in our seven-part breakdown of insights from the report, “7 career-making AI decisions for CIOs in 2026.” &lt;a href="https://pages.dataiku.com/cio-ai-decisions"&gt;&lt;u&gt;&lt;span style="color: #1155cc;"&gt;Read the full report here&lt;/span&gt;&lt;/u&gt;&lt;/a&gt;.&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/ai-agents-require-real-accountability" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Decision%20%233.png" alt="cio decisions 2026 AI accountability " class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;em&gt;&lt;span&gt;This is the third installment in our seven-part breakdown of insights from the report, “7 career-making AI decisions for CIOs in 2026.” &lt;a href="https://pages.dataiku.com/cio-ai-decisions"&gt;&lt;u&gt;&lt;span style="color: #1155cc;"&gt;Read the full report here&lt;/span&gt;&lt;/u&gt;&lt;/a&gt;.&lt;/span&gt;&lt;/em&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fai-agents-require-real-accountability&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>AI Governance &amp; Architecture</category>
      <category>Reports &amp; Guides</category>
      <pubDate>Fri, 27 Mar 2026 13:00:04 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/ai-agents-require-real-accountability</guid>
      <dc:date>2026-03-27T13:00:04Z</dc:date>
      <dc:creator>Julia Berman</dc:creator>
    </item>
    <item>
      <title>2026 Dataiku Frontrunner Awards: architecting the reasoning enterprise</title>
      <link>https://www.dataiku.com/stories/blog/2026-dataiku-frontrunner-awards</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/2026-dataiku-frontrunner-awards" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/1200x627_Blog.png" alt="dataiku 2026 front runner awards enterprise" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;Today, we are officially kicking off the &lt;a href="https://view-su2.highspot.com/viewer/967147793205200661c5a8f107c70fbb"&gt;&lt;strong&gt;2026 Dataiku Frontrunner Awards&lt;/strong&gt;&lt;/a&gt;. For our sixth anniversary, we are shifting the spotlight toward the reasoning enterprise: organizations that have moved beyond AI pilots and experimentation to achieve measurable performance, at scale.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/2026-dataiku-frontrunner-awards" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/1200x627_Blog.png" alt="dataiku 2026 front runner awards enterprise" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;Today, we are officially kicking off the &lt;a href="https://view-su2.highspot.com/viewer/967147793205200661c5a8f107c70fbb"&gt;&lt;strong&gt;2026 Dataiku Frontrunner Awards&lt;/strong&gt;&lt;/a&gt;. For our sixth anniversary, we are shifting the spotlight toward the reasoning enterprise: organizations that have moved beyond AI pilots and experimentation to achieve measurable performance, at scale.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2F2026-dataiku-frontrunner-awards&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>Frontrunner Awards</category>
      <category>Dataiku in Action</category>
      <pubDate>Thu, 26 Mar 2026 13:00:02 GMT</pubDate>
      <guid>https://www.dataiku.com/stories/blog/2026-dataiku-frontrunner-awards</guid>
      <dc:date>2026-03-26T13:00:02Z</dc:date>
      <dc:creator>Jason Blanco</dc:creator>
    </item>
    <item>
      <title>Context engineering: building AI systems that scale</title>
      <link>https://www.dataiku.com/stories/blog/context-engineering</link>
      <description>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/context-engineering" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Prompt%20_%20context%20engineering.png" alt="context engineering building AI systems that scale" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;There's a version of the story where prompt engineering was always just a stepping stone. In 2022 and 2023, a certain kind of expertise emerged: the ability to coax better outputs from large language models (LLMs) by carefully crafting instructions.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;Practitioners learned that telling a model to “think step by step” improved its reasoning, that framing it as an expert could sharpen its tone, that negative examples were sometimes more powerful than positive ones. This was prompt engineering, and for a while it felt like the central skill of the AI era.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;It isn't anymore, or at least it's no longer sufficient on its own. As organizations push AI beyond demos and into production, the problems they encounter aren't primarily about how instructions are phrased. They're about what information the model sees, when it sees it, how that information was selected, and what happens when it turns out to be wrong. These questions belong to a different discipline: context engineering.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;The term, popularized by AI researcher Andrej Karpathy, describes the challenge of filling a model's context window with precisely the right information at each step of a workflow. It sounds deceptively simple. In practice, it touches nearly every layer of an AI system's architecture, and getting it wrong is one of the most common reasons that systems which look promising in development break down in production.&lt;/span&gt;&lt;/p&gt;</description>
      <content:encoded>&lt;div class="hs-featured-image-wrapper"&gt; 
 &lt;a href="https://www.dataiku.com/stories/blog/context-engineering" title="" class="hs-featured-image-link"&gt; &lt;img src="https://2123903.fs1.hubspotusercontent-na1.net/hubfs/2123903/Prompt%20_%20context%20engineering.png" alt="context engineering building AI systems that scale" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"&gt; &lt;/a&gt; 
&lt;/div&gt; 
&lt;p&gt;&lt;span&gt;There's a version of the story where prompt engineering was always just a stepping stone. In 2022 and 2023, a certain kind of expertise emerged: the ability to coax better outputs from large language models (LLMs) by carefully crafting instructions.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;Practitioners learned that telling a model to “think step by step” improved its reasoning, that framing it as an expert could sharpen its tone, that negative examples were sometimes more powerful than positive ones. This was prompt engineering, and for a while it felt like the central skill of the AI era.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;It isn't anymore, or at least it's no longer sufficient on its own. As organizations push AI beyond demos and into production, the problems they encounter aren't primarily about how instructions are phrased. They're about what information the model sees, when it sees it, how that information was selected, and what happens when it turns out to be wrong. These questions belong to a different discipline: context engineering.&lt;/span&gt;&lt;/p&gt; 
&lt;p&gt;&lt;span&gt;The term, popularized by AI researcher Andrej Karpathy, describes the challenge of filling a model's context window with precisely the right information at each step of a workflow. It sounds deceptively simple. In practice, it touches nearly every layer of an AI system's architecture, and getting it wrong is one of the most common reasons that systems which look promising in development break down in production.&lt;/span&gt;&lt;/p&gt;  
&lt;img src="https://track.hubspot.com/__ptq.gif?a=2123903&amp;amp;k=14&amp;amp;r=https%3A%2F%2Fwww.dataiku.com%2Fstories%2Fblog%2Fcontext-engineering&amp;amp;bu=https%253A%252F%252Fwww.dataiku.com%252Fstories%252Fblog&amp;amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "&gt;</content:encoded>
      <category>GenAI &amp; Agents</category>
      <pubDate>Wed, 25 Mar 2026 16:25:23 GMT</pubDate>
      <author>julia.tran@dataiku.com (Julia Tran)</author>
      <guid>https://www.dataiku.com/stories/blog/context-engineering</guid>
      <dc:date>2026-03-25T16:25:23Z</dc:date>
    </item>
  </channel>
</rss>
