We just wrapped Nemotron Dev Days Korea, a two-day intensive bringing together 200 elite developers to push the boundaries of sovereign AI. In this session we break down what happened, what we learned, and what it means for builders everywhere.

We'll open with a recap of the Korea hackathon: the three tracks (Creative Use-case, Domain & Persona Tuning, and Synthetic Data Generation), standout submissions, and a closer look at the winning teams and what they built. You'll hear directly about how participants used Nemotron models, NeMo-RL, NeMo Curator, and the broader NVIDIA stack to build production-ready pipelines in under 36 hours.

Then we zoom out to the bigger picture: sovereign AI model builders across Korea and the region are increasingly turning to Nemotron as the foundation for domain-adapted, culturally aware models. We'll walk through how teams are leveraging the Nemotron SDK, curated datasets, and NVIDIA's post-training recipes (CPT, SFT, RL) to build models that go well beyond out-of-the-box performance.

We close with a live Build-a-Claw segment: a step-by-step walkthrough of setting up NemoClaw to deploy your own autonomous agent, using the same blueprint that hackathon teams had access to.

What you'll learn:
- Highlights from Nemotron Dev Days Korea: tracks, results, and winning builds
- How sovereign AI builders are using the Nemotron SDK, datasets, and post-training recipes in real-world deployments
- What "Domain & Persona Tuning" looks like in practice for Korean-language and domain-specific models
- How to get started with Build-a-Claw: installing NemoClaw and deploying your first autonomous agent
About us
Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.
- Website: http://nvda.ws/2nfcPK3
- Industry: Computer Hardware Manufacturing
- Company size: 10,001+ employees
- Headquarters: Santa Clara, CA
Updates
What’s possible with Gemma 4 + DGX Spark? Tune in Friday as NVIDIA and Google DeepMind experts walk through vision translation, long-context document Q&A, and real-time code-gen demos live. Bring your questions.

📆 Friday, April 24 at 11:00 AM PDT
Add to schedule ➡️ https://nvda.ws/4vP77yg
NVIDIA AI reposted this
GPT-5.5 is here! I've been daily-driving this model for a few weeks, and its speed and capability to handle complex work are really impressive. Codex has quickly become the go-to coding agent for engineers at NVIDIA.
NVIDIA AI reposted this
I’ve had access to Codex 2 and GPT-5.5 for about 2 weeks now. OpenAI just "collison-installed" all of NVIDIA lol. They set up a lab like a Genius Bar so everyone could get set up with Codex 2. With the CLIs we’ve been building, non-technical coworkers seemed to get the biggest unlock.

Our stack:
• We rolled out cloud VMs for every employee. Simple rule: agents get their own computers, just like employees. If something goes wrong we can freeze it and get a stack trace.
• The Codex team has been super responsive. Codex 2 now supports any cloud VM and quickly picks up SSH config. Non-technical users just paste a prompt we gave them to edit their SSH config.
• Internally built CLIs, plus those vetted by security, get automatically loaded into the cloud VMs. Teams wake up to new capabilities daily now.

NVIDIA already moved fast; now we’re rippin 🤙
Massive congrats to the team at OpenAI 💚 Moving the frontier forward isn’t easy, and GPT-5.5 does exactly that, with impressive performance across agentic coding, research tasks, and beyond.
Introducing GPT-5.5: a new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done.

GPT-5.5 excels at writing and debugging code, researching online, analyzing data, creating documents and spreadsheets, operating software, and moving across tools until a task is finished.

GPT-5.5 is rolling out today for Plus, Pro, Business, and Enterprise users across ChatGPT and Codex. We’re also introducing GPT-5.5 Pro for Pro, Business, and Enterprise users in ChatGPT. https://lnkd.in/g7KQNhiG
Join the developers on the CUDA Communication Library team to learn how you can support CUDA applications on clusters of GPUs. Communication libraries like NVSHMEM enable multiple GPUs to work in parallel on large-scale AI training, simulation, or rendering tasks by exchanging data and synchronizing work.

In this session, we will cover:
- How NVSHMEM extends the OpenSHMEM APIs to support clusters of NVIDIA GPUs.
- API calls to collectively launch CUDA kernels across a set of GPUs.
- A live demo of NVSHMEM in action.
- Live Q&A with our panel of library developers and experts.
CUDA Live: Scaling HPC with Multi-GPU Communication Libraries
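For intuition ahead of the session: NVSHMEM follows the OpenSHMEM model, where each processing element (PE) owns symmetric memory, writes one-sidedly into a peer's memory, and synchronizes at a barrier. NVSHMEM itself is a C/CUDA library, so here is only a toy pure-Python sketch of that put-and-barrier pattern, with threads standing in for PEs; the `put` helper and buffer names are hypothetical and merely mimic the semantics, they are not the NVSHMEM API.

```python
import threading

# Toy stand-in for the OpenSHMEM-style model NVSHMEM implements:
# each "PE" owns a slot of symmetric memory, performs a one-sided
# write into a peer's slot, then waits at a barrier before reading.

N_PES = 4
symmetric_heap = [None] * N_PES          # one symmetric slot per PE
barrier = threading.Barrier(N_PES)       # stands in for a global barrier

def put(dest_pe, value):
    """One-sided write into a peer's symmetric memory (toy version)."""
    symmetric_heap[dest_pe] = value

def pe_main(my_pe):
    peer = (my_pe + 1) % N_PES           # ring neighbor
    put(peer, my_pe)                     # send my PE id to the neighbor
    barrier.wait()                       # all PEs sync before reading
    # After the barrier, each PE holds the id of its left neighbor.
    assert symmetric_heap[my_pe] == (my_pe - 1) % N_PES

threads = [threading.Thread(target=pe_main, args=(pe,)) for pe in range(N_PES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(symmetric_heap)  # ring shift result: [3, 0, 1, 2]
```

The key design point the session covers is that real NVSHMEM moves this pattern onto GPUs: kernels launched collectively across devices can issue such one-sided puts directly from device code, without bouncing through the host.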
Discover the groundbreaking projects from the SJSU Agents for Impact Hackathon! With over 100 teams competing during GTC, the winning student developers will showcase how they leveraged the power of NVIDIA Nemotron models and NVIDIA NIM microservices to create AI agents that solve real-world problems in sustainability, education, and accessibility. Hear directly from the creators of CarbonSense AI (an agent for carbon-aware AI operations), LoominAi (a physics-based 3D simulation learning sandbox), and AccessAudit (a platform for ADA-compliant accessibility assessments) as they share their design process, technical deep dives, and lessons learned in building high-impact, production-ready generative AI applications.
Dev Community Live: SJSU Hackathon Winners – Building Impactful Agents
Scaling NemoClaw: Roadmap, OpenClaw Collaboration, and Real-World Integration

What’s next for NemoClaw? This session takes you inside the roadmap: how NemoClaw is evolving to support larger, faster, and more automated agent deployments built on open models like Nemotron. We’ll explore ongoing work around scaling, model routing, and deeper framework integrations, plus how NVIDIA’s active contributions to OpenClaw are strengthening the open foundation that NemoClaw builds on.

What you’ll learn:
- How NemoClaw is scaling to enable multi-agent automation and policy-driven orchestration.
- How improvements in routing and model selection will enhance performance and adaptability across open and proprietary models.
- How NVIDIA and the OpenClaw community are collaborating to extend secure, open runtime capabilities.
- How to prepare your deployments for upcoming roadmap milestones.

Join us live for a preview of the roadmap and direct Q&A with the product team, and come with your questions about how open models like Nemotron and open runtimes like OpenClaw are shaping the next generation of AI agents.
Scaling NemoClaw: Roadmap, OpenClaw & Integration | Nemotron Labs
Gemma 4 introduced a powerful new family of natively multimodal and multilingual models that scales across the full spectrum of NVIDIA hardware, from Blackwell in the data center to Jetson at the edge. In this stream, we’re going hands-on with DGX Spark to see how it can amplify Gemma 4’s features, including a massive 256K-token context window and native vision/audio capabilities. Bring your questions! We’ll have experts from NVIDIA and Google DeepMind on hand.
DGX Spark Live: Ask the Experts - Gemma 4 on DGX Spark
Improve agentic performance with accurate RL post-training on low-precision FP8. 🛠️ NVIDIA NeMo RL, an open-source library within NVIDIA NeMo, supports FP8 to speed up RL workloads by 1.48x on Qwen3-8B-Base—enabling faster iterations for agentic tool use and multi-step workflows. Read ➡️ https://nvda.ws/4vLRl7t
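For intuition on what "low-precision FP8" means here: the E4M3 format commonly used for FP8 training has 4 exponent bits (bias 7) and 3 mantissa bits, so values are rounded onto a coarse grid with a maximum finite value of 448. Below is a minimal pure-Python sketch of that rounding, assuming round-to-nearest; it illustrates the number format only and is not NeMo RL's implementation.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3-representable value (toy model).

    E4M3: 4 exponent bits (bias 7), 3 mantissa bits, max finite value 448.
    Uses round-to-nearest; tie-breaking may differ from hardware behavior.
    """
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = abs(x)
    if mag > 448.0:                                 # clamp to E4M3 max finite value
        return sign * 448.0
    exp = max(math.floor(math.log2(mag)), -6)       # -6: floor of the subnormal range
    step = 2.0 ** (exp - 3)                         # 3 mantissa bits => 8 steps per binade
    return sign * round(mag / step) * step

# The grid is coarse: nearby floats collapse to the same representable value.
print(quantize_e4m3(0.3))     # 0.3125  (grid step here is 1/32)
print(quantize_e4m3(1.0))     # 1.0     (exactly representable)
print(quantize_e4m3(1000.0))  # 448.0   (clamped to the E4M3 maximum)
```

Because the grid is this coarse, practical FP8 training recipes pair the format with per-tensor scaling factors so activations and gradients land inside the representable range; the linked post discusses how that trade-off plays out for RL workloads.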