Today we’re introducing Muse Spark, our most powerful model yet, giving you a faster and smarter Meta AI. Muse Spark currently powers the Meta AI app and website and will be rolling out to WhatsApp, Instagram, Facebook, Messenger, and AI glasses in the coming weeks. https://go.meta.me/4f86c1
local, private?
Future competition may no longer be just about model performance, but about who can integrate more naturally into users' daily lives.
Interesting launch. What feels most important is the direction behind it: the future of AI may depend not only on model capability, but on how well it is woven into real user journeys across platforms. That is where AI can move from novelty to lasting value.
I urgently request a review and reinstatement of my WhatsApp Business account, as it is critical to my livelihood. I depend entirely on this account for all my business communications, and its suspension has severely impacted my operations. I have always made every effort to comply with WhatsApp’s policies and guidelines. If there has been any unintentional violation on my part, I sincerely apologize and assure you it was not deliberate. I respectfully urge you to prioritize my request and restore my account as soon as possible. This matter is extremely important to me, and I would be truly grateful for your prompt assistance. Account number: 9205900913
Sounds very exciting, hope it lives up to expectations! Wearable AI expansion raises a secondary thought, though: a new pocket of data being used for marketing. Our visual data could be used for more accurate targeting, with or without the audio. We'll also need to see the privacy and default settings. Combined with facial recognition, audio from others, and lip-reading recognition, it's a new era of business coming.
Wondering when customer service is gonna be able to fix my son's VR account. It's been weeks now with nothing but automated responses from support. Seems like Meta fired all their customer service and replaced them with this AI. Still looking for help so my son can use his VR. A shame I have to make a LinkedIn account and leave comments to try to reach a real person who works there for help.
As someone already using Ray-Ban Meta AI glasses, I’m particularly interested in how this upgrade enhances real-time experiences across devices. Looking forward to seeing how seamlessly this integrates across platforms and elevates everyday use cases.
The "Thinking mode" with parallel subagents is fascinating—one agent drafts, another compares, a third finds options, all at once. That's a fundamentally different architecture from standard chat. For those building with LLMs: how much does parallel agent processing change what's possible vs. single-model reasoning? And Meta opening API access? Big move
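To make the comment above concrete: the draft/compare/find pattern it describes can be sketched as concurrent subagent calls fanned out and then merged. This is a minimal illustration, not Meta's actual architecture; the agent functions and their names are hypothetical stand-ins for real model API calls.

```python
import asyncio

# Hypothetical subagents; in a real system each would call a model API.
async def draft_agent(prompt: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for model latency
    return f"draft for: {prompt}"

async def compare_agent(prompt: str) -> str:
    await asyncio.sleep(0.01)
    return f"comparison for: {prompt}"

async def search_agent(prompt: str) -> str:
    await asyncio.sleep(0.01)
    return f"options for: {prompt}"

async def orchestrate(prompt: str) -> dict:
    # The key difference from single-model reasoning: the three
    # subagents run concurrently, so total latency is roughly the
    # slowest agent, not the sum of all three.
    draft, comparison, options = await asyncio.gather(
        draft_agent(prompt), compare_agent(prompt), search_agent(prompt)
    )
    return {"draft": draft, "comparison": comparison, "options": options}

result = asyncio.run(orchestrate("plan a weekend trip"))
print(result["draft"])
```

The gain over a single model is mostly latency and separation of concerns; the hard part in practice is the merge step, which this sketch leaves as a simple dictionary.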
Meta: Everyone is talking about multimodal and benchmarks… but the signal is in one phrase: "multi-agent orchestration". That's not a feature; that's an architecture shift. Instead of forcing one model to think harder, Meta is distributing cognition across agents in parallel. Which means speed and depth are no longer trade-offs. If this direction holds, we're entering an era where the real leverage isn't prompting AI, it's designing how AIs collaborate. That's a very different skill set.