Attention

The talks will be in person.

The Stanford Robotics and Autonomous Systems Seminar series hosts both invited and internal speakers. The seminar aims to bring the campus-wide robotics community together and to provide a platform for overviewing and discussing progress and challenges across the various disciplines of robotics. This quarter, the seminar is also offered to students as a 1-unit course. Note that registration for the class is NOT required to attend the talks.

The course syllabus is available here. Go here for more course details.

The seminar is open to Stanford faculty, students, and sponsors.

Attendance Form

For students taking the class, please fill out the attendance form (https://tinyurl.com/robosem-spr-26) when attending the seminar to receive credit. You need to submit at least 7 attendance forms to receive credit for the quarter, or make up for missed talks by submitting late paragraphs about them via Canvas.

Seminar Youtube Recordings

All publicly available past seminar recordings can be viewed on our YouTube Playlist. Registered students can access all talk recordings on Canvas.

Get Email Notifications

Sign up for the mailing list: Click here!

Schedule Spring 2026

Date Guest Affiliation Title Location Time
Fri, Apr 03 Baxi Chong Penn State Mechanical intelligence in locomotion: from information theory to multi-legged robots Gates B03 3:00PM
Abstract

Locomotion in complex environments (e.g., rubble, leaf litter, granular media) is essential to mobile engineered systems such as robots. Effective locomotion requires complex control strategies to interact with terrain heterogeneity. Computational intelligence (CI), which typically includes rapid terrain sensing and active feedback control, is a widely recognized component of locomotion strategy. Alternatively, mechanical intelligence (MI) - the passive response to environmental perturbation governed by physical laws or mechanical constraints - is an important yet less studied component. In this talk, I will discuss 'why' and 'how' MI can contribute to effective locomotion using the example of multi-legged robots (redundantly segmented bodies with simple legs). For the 'why,' I will quantify a specific MI that emerges from leg redundancy. By modeling locomotion as a stochastic process (analogous to signal transmission over noisy channels), I will show that MI, without any CI, is sufficient to generate reliable and effective locomotion. To explore the 'how,' I will draw a quantitative analogy to signal transmission algorithms (e.g., error-correcting/detecting codes) and propose a co-design coding scheme for multi-legged locomotion. Specifically, my talk will show that (i) additional legs, with higher control dimensions, can enable a broader spectrum of capabilities, including load carrying/pulling, sidewinding, rolling, and obstacle climbing; (ii) the inclusion of CI (feedback control) can enhance multi-legged locomotion speed while preserving robustness; and (iii) CI might reduce the number of redundant legs required to navigate a particular terrain. Finally, I will discuss the coordination and competition between MI and CI in a broader framework termed Embedded Intelligence (EI) and illustrate the applications of MI-dominated systems in fields like search-and-rescue, agriculture, and the development of soft, micro, and modular robots.
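
The abstract's noisy-channel framing has a classical toy counterpart: a repetition code, where redundancy alone (with no sensing or feedback) drives failure probability down. The sketch below only illustrates that analogy and is not the speaker's actual model; treating each leg as an independent "symbol" that slips with some fixed probability, and counting a step as failed only when a majority of legs slip at once, are assumptions made for this example.

```python
import math

def majority_failure_prob(n_legs: int, p_slip: float) -> float:
    """Failure probability of an n-fold repetition code over a binary
    symmetric channel: the chance that a majority of 'legs' slip at once.
    (Toy analogy only; p_slip and the majority-vote rule are assumptions.)"""
    k_min = n_legs // 2 + 1  # smallest number of slips that breaks the majority
    return sum(
        math.comb(n_legs, k) * p_slip**k * (1 - p_slip) ** (n_legs - k)
        for k in range(k_min, n_legs + 1)
    )

if __name__ == "__main__":
    # Even with a 30% per-leg slip rate, redundancy alone makes coordinated
    # failure rare -- the flavor of open-loop reliability the abstract describes.
    for n in (1, 3, 5, 7, 9, 13):
        print(f"{n:2d} legs -> failure probability {majority_failure_prob(n, 0.3):.4f}")
```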

Fri, Apr 10 Danfei Xu Georgia Tech Robot Learning from Human Experience: Science and Scaling Gates B03 3:00PM
Abstract

Modern AI advances by transferring knowledge from humans to machines at scale. Vision and language models learn from vast Internet data, but robot learning still relies heavily on slow, labor-intensive teleoperation. Recently this assumption has begun to shift: growing industrial efforts are collecting large amounts of human experience data to scale robot performance. As large-scale data collection becomes increasingly feasible, the central challenge shifts to understanding how robots can learn from human behavior. In this talk, I argue that human-to-robot transfer can be understood as two coupled problems: extracting priors about physical intelligence from human experience, and grounding those priors into a robot’s embodiment. I will revisit several of our recent works through this lens, showing how egocentric human data enables scalable learning of manipulation priors, while representation learning and cross-embodiment transfer address the grounding challenge. I will also discuss recent results showing emergent human-to-robot transfer from large-scale human pretraining, as well as evidence that learning across diverse robot embodiments can further improve transfer. Finally, I will introduce EgoVerse, an ecosystem for robot learning from embodied human data, and discuss how collaborative platforms can enable both rigorous science and organic data growth. I will conclude with future directions toward more human-centered robots that better understand human intent and collaborate naturally with people.

Fri, Apr 17 Karl Pertsch Physical Intelligence Developing the Ingredients for Long-Horizon Robot Autonomy Gates B03 3:00PM
Abstract

Today's robots can perform impressive short-horizon tasks, but they're still far from the kind of autonomous agents we've grown used to in the digital world: systems you can hand a complex, multi-step job and trust to figure it out. In this talk, I'll argue that getting there requires two ingredients we've historically struggled with: giving robot policies a sense of memory, and training generalist behaviors that are both broadly capable and high-performing. I'll discuss our recent progress on both fronts through two works, π0.6-MEM and π0.7, and sketch what's still missing on the path to long-horizon physical autonomy.

Bio: Karl Pertsch is a member of the technical staff at Physical Intelligence. Before that, he was a postdoc at UC Berkeley and Stanford. His work focuses on building generalist robot policies that can solve a wide range of physical manipulation tasks in the real world. Karl obtained his PhD from USC, during which he interned at Meta AI and Google Brain. His work has been awarded the Best Conference Paper Award at ICRA'24, was a finalist for two Outstanding Paper Awards at CoRL'24, and was a Best Paper finalist at RSS'25.

Fri, Apr 24 Michael Yip UCSD Unlocking Autonomous Medical Robotics: From Image Guided Systems to Humanoid Robot Platforms Gates B03 3:00PM
Abstract

Despite remarkable progress in clinical adoption over the past two decades, surgical robotic systems remain large, physically constrained, and limited in autonomy. Today's commercial platforms are teleoperated and mechanically restricted to specific procedures, requiring trained teams to set up and operate them and offering limited medical image integration and AI guidance. These conventional systems are also expensive, challenging to deploy, and difficult to scale. Meeting the evolving needs of modern healthcare requires rethinking not just how robots are used in surgery, but what they can become: shifting from rigid, teleoperated tools toward fully autonomous robot partners in the operating room. This talk presents our research on enabling greater surgical autonomy through the lens of robot learning. I will close with our recent work on human-robot teaming using humanoid systems, examining the limitations of the current state of the art and the opportunities ahead for research and deployment.

Fri, May 01 Negar Mehr UC Berkeley TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 08 Jiayuan Mao Amazon FAR, UPenn TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 15 Howie Choset CMU TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 22 Rob Platt Northeastern TBD Gates B03 3:00PM
Abstract

TBD

Fri, May 29 Nick Colonnese Meta TBD Gates B03 3:00PM
Abstract

TBD

Sponsors

The Stanford Robotics and Autonomous Systems Seminar enjoys the support of the following sponsors.