Winter 2026 Colloquium

Organizers: Sidd Srinivasa, Abhishek Gupta, Maya Cakmak, Jamie Coyne

Accelerating Robotics Startups
Emer Dooley (UW Foster School of Business, Creative Destruction Lab) 01/09/2026

Biography: Emer Dooley runs Creative Destruction Lab Seattle (CDL-Seattle), a mentoring program for deep-tech startups capable of massive scale in both Computational Health and Advanced Manufacturing. Emer started as a hardware designer, working for DEC in Ireland and then Boston. After a UW MBA, she joined a Seattle computer-telephony startup. She returned to the UW and has worked since on technology commercialization with Computer Science, Bioengineering, Global Health and others. She has been deeply involved in the Seattle startup community as an angel investor, and she raised and ran the first Alliance of Angels (AoA) Seed Fund, a $4.5 million fund. She is on the board of Ashesi University Foundation, and a past board member of the Washington Research Foundation, Women’s World Banking, Social Venture Partners and others. Emer has a BSc in Electronic Engineering and an M.Eng. from the University of Limerick, and an MBA and Ph.D. from the Foster School at the UW.

Battery-free Gram-scale Robots that Move Autonomously
Kyle Johnson (University of Washington) 01/16/2026

Abstract: In this talk I will present battery-free gram-scale robots that can autonomously fly in the wind or drive independently on the ground using microwatts of energy harvested from light or radio waves. These mobile sensing platforms may have transformative impact in applications ranging from agricultural and extraterrestrial monitoring to industrial and hazardous-environment inspection and reconfigurable camera networks for private security. This work challenges the conventional assumption that locomotion is beyond the reach of battery-free robots, demonstrates several approaches for achieving autonomous operation in realistic application scenarios, and opens up a discussion on the practicality of large-scale mobile sensor deployments in remote environments. I will discuss how miniaturizing robots to near the gram scale can significantly reduce their energy requirements, which, when combined with cyber-mechanical innovations, can enable autonomous battery-free mobility. I will explain how we leveraged origami to create shape-changing leaf-out origami robots that can fly in the wind to disperse sensors. I will also explain how we leveraged intermittent motion to enable battery-free robots that can roll around on the ground. Finally, I will present preliminary work towards creating miniaturized helicopters and multimodal jumping robots.

Biography: Kyle Johnson is a graduating Ph.D. candidate in Computer Science & Engineering at the University of Washington (UW) and the Co-founder and Executive Director of the nonprofit AVELA - A Vision for Engineering Literacy & Access. He works with Professors Vikram Iyer and Sawyer Fuller to explore how combinations of low-power actuation and control mechanisms can be used to create autonomous microrobots optimized for resource-constrained applications. These include technologies for battery-free onboard actuation, wireless communication, remote sensing, and control. Kyle is also passionate about helping decrease opportunity gaps in the education system for underserved youth. He has supported over 500 college instructors in teaching more than 6,000 K-14 students since 2019. Findings from this wide-scale outreach have been published at the IEEE World Engineering Education Forum (WEEF) and Black Issues in Computing Education (BICE) conferences. Kyle’s work has been recognized by the Quad Fellowship, the NSF Graduate Research Fellowship Program, the Amazon Science Hub Fellowship, the Washington NASA Space Grant Consortium, and the National GEM Consortium. His publications have appeared in Science Robotics and the ACM MobiCom conference and have garnered widespread media attention, including coverage by the NSF, GeekWire, Popular Science, and IEEE.

Building Robotics Foundation Models with Reasoning in the Loop
Jiafei Duan (University of Washington) 01/23/2026

Abstract: Recent advances in generative AI have demonstrated the power of scaling: large language and vision models trained on internet-scale data now exhibit remarkable capabilities in perception, generation, and reasoning. These successes have inspired growing interest in bringing foundation-model paradigms to robotics, with the goal of moving beyond task-specific autonomy in constrained environments toward general-purpose robots that can operate robustly in open-world settings. However, robotics fundamentally differs from language and vision. Robot learning cannot rely on passive internet data at scale, and collecting large-scale, high-quality embodied interaction data remains expensive and slow. As a result, simply scaling data and model parameters is insufficient. To build general-purpose and robust robotics foundation models, we must instead ask: how can robots learn more from less data—and continue to improve over time? In this talk, I argue that reasoning in the loop offers a promising path forward. Rather than treating reasoning as a downstream capability applied after learning, I show how reasoning can be integrated directly into the learning process itself. This enables robots to learn from structured feedback, temporal context, and failure, thereby compensating for data scarcity and improving generalization. I will present a unified research agenda along three axes. First, I introduce approaches for spatial reasoning, enabling robots to ground language in 3D space and reason about object relationships for precise manipulation. Second, I discuss temporal reasoning, focusing on memory-centric models that retain, query, and reason over past observations to support long-horizon, high-precision control. Third, I show how reasoning over failures allows robots to understand why actions fail and use that understanding to self-improve, increasing robustness without additional supervision. Together, these results reframe robotics foundation models as systems that learn through reasoning, closing the loop between perception, action, and structured inference to enable self-improving autonomy.

Biography: Jiafei Duan is a Ph.D. candidate in Computer Science & Engineering at the University of Washington, advised by Professors Dieter Fox and Ranjay Krishna. His research focuses on robotics foundation models, with an emphasis on scalable data collection and generation, grounding vision–language models in robotic reasoning, and improving robust generalization in robot learning. His work has been featured in MIT Technology Review, GeekWire, VentureBeat, and Business Wire. Jiafei’s research has appeared in top AI and robotics venues, including ICLR, ICML, RSS, CoRL, ECCV, IJCAI, CoLM, and EMNLP, and has received several honors, including Best Paper at Ubiquitous Robots 2023, Best Paper at the CoRL RememberRL Workshop 2025, and a Spotlight Award at ICLR 2024.

The NASA Volatiles Investigating Polar Exploration Rover (VIPER) Mission
Terry Fong (NASA) 01/30/2026

Abstract: The Volatiles Investigating Polar Exploration Rover (VIPER) is a NASA mission designed to explore the extreme environment of the Moon in search of water ice. VIPER is intended to land at the South Pole of the Moon and spend approximately 100 days mapping and surveying four different "ice stability regions". Determining the distribution, physical state, and composition of water ice deposits will help increase understanding of the sources of lunar polar water, as well as provide insight into the distribution and origin of volatiles across the solar system. In this talk, I will present an overview of the VIPER mission, the robot's design, lunar surface simulation, and interactive mission operations. During VIPER's exploration of the Moon, the rover will endure extreme temperatures (40 K to 300 K), dynamic lighting, and complex terrain, while near-real-time rover driving will present new planetary mission operational challenges. VIPER is scheduled to launch to the Moon in August 2027 on board Blue Origin's "New Glenn" rocket and "Blue Moon Mark 1" (MK1) lunar lander.

Biography: Terry Fong is NASA's Senior Scientist for Autonomous Systems and the Chief Roboticist at the NASA Ames Research Center. Terry is also the lead Rover Driver and former deputy manager for NASA's VIPER lunar rover mission. Terry previously led development of the Astrobee free-flying robot, which was installed on the Space Station in 2019. Terry has published more than 175 papers in space and field robotics, human-robot interaction, virtual reality, and planetary mapping. Terry received his B.S. and M.S. in Aeronautics and Astronautics from MIT and his Ph.D. in Robotics from Carnegie Mellon University.

Stages of Robot Learning
Dinesh Jayaraman (University of Pennsylvania) 02/06/2026

Abstract: Recent enthusiasm in robotics has leaned heavily toward brute-force scaling of control, often overlooking the fundamental constraints that define real robots: limits on power, compute, time, data, and other scarce resources. My group has been working along two complementary fronts: pushing the boundaries of what today's dominant approaches can achieve, and developing design principles for future robots that are both masterful and minimalist. Along the way, we have produced generalist robot policies without any robot data, enabled quadruped robots to perform dynamic “circus tricks” on yoga balls through automatically tuned approximate simulations, and investigated the sensory requirements of robot learners to chart paths toward more efficient systems. This talk will situate these examples within a three-stage view of robot learning (pretraining → finetuning → pruning) and outline how such a lifecycle can move robotics closer to real-world utility.

Biography: Dinesh Jayaraman is an assistant professor at the University of Pennsylvania's CIS department and GRASP lab. He leads the Perception, Action, and Learning (Penn PAL) research group, which works at the intersections of computer vision, robotics, and machine learning. Dinesh received his PhD (2017) from UT Austin before becoming a postdoctoral scholar at UC Berkeley (2017-19). Dinesh's research has received a Best Paper Award at CoRL '22, a Best Paper Runner-Up Award at ICRA '18, a Best Application Paper Award at ACCV '16, the NSF CAREER Award '23, and an Amazon Research Award '21, and has been covered in The Economist, TechCrunch, and several other press outlets.

Orbital Robotics: AI, Robotics, and Autonomy for Orbital Logistics
Aaron Borger (Orbital Robotics) 02/13/2026

Abstract: For years the space industry has followed a throwaway culture. Recently, companies like SpaceX and Blue Origin pioneered reusable rockets, enabling access to space at a much lower cost. However, we continue to follow a single-use paradigm on orbit: satellites that run out of fuel or experience failures currently remain abandoned in orbit as space debris. On Earth we have infrastructure such as gas stations and tow trucks; Orbital Robotics aims to build similar infrastructure on orbit using spacecraft equipped with AI-controlled robotic arms that can capture, repair, refuel, and upgrade other spacecraft. While traditional control algorithms fail under the complexity of manipulation and the dynamic coupling between the robotic arms and the base of the spacecraft, AI and deep reinforcement learning provide a solution. Additionally, a unique perception system must be used to detect, track, and understand objects in orbit with minimal prior knowledge of those objects. In this talk, I will discuss the need for these capabilities, the challenges of capturing spacecraft with robotic arms, and provide insight into the robots and AI solutions Orbital Robotics is building.

Biography: Aaron Borger, Co-founder and CEO of Orbital Robotics, has been working to perform complex operations with spacecraft equipped with robotic arms since he was an undergraduate student. He finished his senior year by launching a payload containing two robotic arms designed to throw and catch a ball on board a NASA rocket. The team Aaron led, which included Riley Mark, now co-founder and lead hardware engineer at Orbital Robotics, went on to launch four additional robotic arms to space. Following his undergraduate work, Aaron was a lead software engineer at Blue Origin, where he developed AI/ML algorithms to predict the health of rocket engine components and led the development of the flight software for the BE-7 lunar lander engine, which is designed to land humans on the Moon. Following Blue Origin, Aaron pursued a Ph.D. in aerospace dynamics and controls focused on servicing satellites with AI-controlled robotic arms.

Evaluating Policies Without Breaking Your Robots
Florian Shkurti (University of Toronto) 02/20/2026

Abstract: As our field becomes increasingly adept at training multi-task manipulation policies, concerns about how to comprehensively and safely evaluate these policies are becoming more prevalent. In this talk I will present my group's recent projects on off-policy evaluation of diffusion policies based on compositional stitching; policy evaluation via action-conditional video models; and policy evaluation via generation of transferable adversarial scenarios. A key theme will be how to make the most of the data we already have and leverage imperfect learned models to enable policy evaluation for manipulation. I will also outline promising research directions going forward.

Biography: Florian Shkurti is an Assistant Professor in the Department of Computer Science at the University of Toronto, where he directs the Robot Vision and Learning Lab. He is spending his sabbatical year as a Visiting Research Scientist at Ai2 in Dieter Fox's robotics group. He is a faculty member at the University of Toronto Robotics Institute, the Vector Institute, and the Acceleration Consortium for lab automation. His research broadly spans robotics, machine learning, and computer vision. He has received the Alexander Graham Bell Doctoral Award, the AAAI Robotics Fellowship, the Amazon Research Award in Robotics, the Connaught New Researcher Award, the TRI Young Faculty Researcher Award, and the AI2050 Fellowship from Schmidt Sciences.

Beyond Physical Intelligence: Why Generalist Robots Require Social Intelligence
Marynel Vazquez (Yale University) 02/27/2026

Abstract: As the robotics industry moves toward deploying generalist agents in unstructured human environments, such as homes and workplaces, the research focus has largely remained on physical intelligence. While mastering physical tasks is essential, social intelligence is a critical missing piece for widespread technology adoption. To be truly effective, robots must understand, navigate, and manage the nuances of interpersonal interaction.

In this talk, I will discuss two aspects of social intelligence that are fundamental for Human-Robot Interaction (HRI). The first pertains to understanding social phenomena that emerge in group interactions and that robots can potentially leverage to navigate complex social situations. The second is implicit human feedback, i.e., communicative signals that humans give off “for free” and that require interpretation. Robots can leverage such implicit feedback to predict how people perceive them and to better collaborate with users. Finally, I will reflect on how the latest advancements in machine learning are fundamentally reshaping the way we approach research in Human-Robot Interaction.

Biography: Marynel Vázquez is an Assistant Professor in Yale’s Computer Science Department, where she leads the Interactive Machines Group. Her research investigates fundamental problems in Human-Robot Interaction and Artificial Social Intelligence, often motivated by challenges or opportunities that arise in group human-robot interactions. Marynel received her bachelor's degree in Computer Engineering from Universidad Simón Bolívar in 2008, and obtained her M.S. and Ph.D. in Robotics from Carnegie Mellon University in 2013 and 2017, respectively. Before joining Yale, she was a collaborator of Disney Research and a Post-Doctoral Scholar at the Stanford Vision & Learning Lab. Marynel received a 2024 AFOSR YIP Award, a 2022 NSF CAREER Award, two Amazon Research Awards, and a Google Research Scholar Award. Her work has been recognized with a 2025 IJCAI Early Career Spotlight, best paper awards at ACM/IEEE HRI 2023 and IEEE RO-MAN 2022, as well as nominations for paper awards at ACM/IEEE HRI 2021, IEEE IROS 2018, and IEEE RO-MAN 2016.

Andrea Bajcsy (Carnegie Mellon University) 03/06/2026
Yash Narang (NVIDIA) 03/13/2026