Quantum particles and AI in the sky
Welcome back to Circuit Breaker, where you can find the latest news and updates on innovations in AI, quantum computing, semiconductors, and more, from across IBM Research and beyond.
This week's edition was created by Peter Hess, Mike Murphy, and Ashley Peterson.
This week on Circuit Breaker: Week of March 23-27
Using quantum computers to accurately simulate magnetic materials
Scientists studying new materials just got a new tool in their toolbox: quantum computing. New work shows that a quantum computer can produce high-fidelity simulations of experiments exploring the quantum behavior of exotic materials. Those experiments include probing materials with an accelerator-based neutron beam, which characterizes the physical and chemical properties of a substance without altering them.
☀️ A neutron source isn’t always close at hand, though, and a new preprint from IBM Research and its research partners shows that quantum computing can expand options to simulate experiments before investing in limited and expensive resources. It isn’t poised to replace a neutron source for materials science, but near-term quantum hardware can already provide physically meaningful scientific results in early experimental phases, according to the study.
🗞️ In the study, researchers compared real-world data from a neutron scattering experiment with classical and quantum-aided simulations. They showed a high degree of agreement between the simulations and the experimental results, indicating that quantum computing is a valid tool for materials science.
🧑‍💻 The researchers combined classical and quantum computing using a technique called approximate quantum compiling, supporting IBM’s broader vision of quantum-centric supercomputing — a paradigm that uses both classical and quantum resources to achieve more than either technology could alone.
The latest study focused on a material that can be simulated with classical methods. Moving forward, the aim is to simulate more complex materials.
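At its core, approximate quantum compiling means classically optimizing the parameters of a fixed, hardware-friendly circuit template until it approximates a target operation. The following toy single-qubit sketch in plain NumPy is our own simplified illustration of that idea, not the method or code from the study:

```python
# Toy sketch of the idea behind approximate quantum compiling (AQC):
# classically search the parameters of a fixed circuit template so it
# approximates a target unitary. Hypothetical, single-qubit, grid-search
# version for intuition only; the real technique works on larger circuits.
import numpy as np

def rz(phi):
    """Single-qubit rotation about the Z axis."""
    return np.array([[np.exp(-1j * phi / 2), 0],
                     [0, np.exp(1j * phi / 2)]])

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def template(a, b, c):
    """Fixed circuit structure: Rz-Ry-Rz, the generic single-qubit decomposition."""
    return rz(a) @ ry(b) @ rz(c)

def infidelity(v, u):
    """1 - |Tr(U† V)| / 2: zero when V matches U up to a global phase."""
    return 1.0 - abs(np.trace(u.conj().T @ v)) / 2.0

# Target operation: the Hadamard gate.
target = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Classical optimization loop (here, a coarse grid search over the angles).
angles = np.linspace(0, 2 * np.pi, 25)
best_cost, best_params = 1.0, None
for a in angles:
    for b in angles:
        for c in angles:
            cost = infidelity(template(a, b, c), target)
            if cost < best_cost:
                best_cost, best_params = cost, (a, b, c)

print(best_cost)  # near zero: the template can reproduce the Hadamard
```

In practice the classical optimizer is far more sophisticated and the template spans many qubits, but the division of labor is the same: a classical machine tunes the circuit, and the quantum processor runs the compiled result.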
AI is turning turbulence into transcripts
🦘 Australia is massive, and for many people living or working in far-flung corners of the continent, emergency medical care could be a flight away. That’s where the Royal Flying Doctor Service of Australia comes in.
✈️ The charity flies planes and clinicians to some of the remotest parts of Australia. In an emergency medical situation, every second matters. Clinicians need to write up reports on their patients, making sure details like whether a patient needs prescriptions or surgery reach the hospital they’re flying to, but every moment spent on paperwork is a moment away from attending to their patients.
🗣️ That led IBM to pilot a new system with the RFDS: a tiny version of Granite-Speech, incubated within IBM Research, powers a speech recognition system that turns a clinician’s dictation into electronic health records, freeing up their hands to keep helping patients.
💻 The tiny model can run on just about any laptop or tablet, and in testing, IBM researchers found the system more effective at understanding speech in the loud cabin of a small plane than models from any other major AI company.
Donating llm-d to the Cloud Native Computing Foundation
Scaling LLM inference is quickly becoming one of the toughest challenges in cloud computing, but the llm‑d framework offers solutions that are resonating with AI practitioners.
☁️ Built to make large‑scale model serving predictable, efficient, and vendor‑neutral, llm‑d transforms inference engines into fully distributed Kubernetes‑native systems. It’s a major milestone toward making high‑performance generative AI a core, open part of cloud‑native infrastructure.
📊 Backed by reproducible benchmarks, validated deployment patterns, and deep Kubernetes integration, llm‑d offers organizations a clear, well‑lit path from tinkering to production at scale.
On the ground at All Things AI 2026
IBM had a big presence at All Things AI in Durham, North Carolina, this week. Several key researchers hosted talks and demos at IBM's Generative Computing lounge on open-source projects like Mellea and Docling, as well as IBM's small language model family, Granite.
👷 Luis Lastras, IBM Research’s director of language technologies, gave an opening keynote to an audience of AI developers and builders, making the case for treating AI development like traditional software engineering: modular, testable components that can actually scale to production and improve through open, collaborative development.
🤏 One theme resonated throughout the two days: Small models are big, and an orchestrated model strategy is key. Developers (and users) want to understand how AI can make their work and lives easier. Cost, reliability, and flexibility are also still big considerations.
Cleveland Clinic and IBM’s technical first: using quantum and classical processors to simulate proteins
IBM debuts new AI-enabled digital experiences for the 90th Masters tournament
IBM Bob is here
More praise for Turing Award winner Charles H. Bennett
Highlighting new publications from IBM researchers that we liked the sound of:
If you liked this, please consider following IBM Research on LinkedIn. If you have any questions, whether they're atomically small or the size of a quantum computer, don't hesitate to reach out.
And if you want to go even deeper, subscribe to our monthly newsletter, Future Forward, for the latest on breakthroughs in AI, quantum computing, and hybrid cloud.