Autumn 2022 Colloquium

Organizers: Selest Nashef, Josh Smith, Byron Boots, Maya Cakmak, Dieter Fox, Abhishek Gupta, Siddhartha S. Srinivasa

The Neuromechanics of Walking Ability Limitations in Old Age: From Mechanisms to Engineered Solutions
Jason Franz (UNC/NCSU) 10/07/2022

Abstract: There is a critical and immediate need for innovation in how we study the biomechanics and neural control of movement, so that translational efforts to preserve walking ability and mitigate falls due to aging and gait pathology become more effective. I will discuss recent discoveries from the four principal lines of research in our Applied Biomechanics Laboratory that address this need. Specifically, our research tackles four challenges faced by our rapidly aging population: gait performance, instability and falls, osteoarthritis, and muscular fatigue. Examples will include using ultrasound to quantify aging effects on the structure and function of the muscles and tendons that power walking, and thereby inform bio-inspired wearable technologies; sensory and mechanical perturbations that help us understand and mitigate instability and falls; and real-time biofeedback to establish the cause-effect mechanisms underlying the onset and progression of osteoarthritis for precision rehabilitation. Throughout, I will emphasize the need for mechanisms-based approaches to catalyze impact in the field of rehabilitation engineering.

Biography: Dr. Franz received his B.S. (2004) and M.S. (2006) degrees in Engineering Mechanics from Virginia Tech and, after serving as a staff scientist in PM&R at the University of Virginia, received his Ph.D. (2012) in Integrative Physiology from the University of Colorado, Boulder. He then completed an NIH Post-Doctoral Fellowship in the Department of Mechanical Engineering at the University of Wisconsin-Madison. In 2015, Dr. Franz joined the Joint Department of Biomedical Engineering at the University of North Carolina at Chapel Hill and North Carolina State University and is now an Associate Professor and Director of the UNC Applied Biomechanics Laboratory. He currently serves as Principal Investigator or Co-Investigator on multiple NIH-funded research projects, all predominantly focused on rehabilitation engineering strategies to mitigate age- and disease-related mobility impairment and falls risk.

Risk-Aware Planning and Control in Unstructured Environments
Anushri Dixit (California Institute of Technology) 10/14/2022

Abstract: Robots provide the crucial ability to replace humans in environments that are inaccessible due to environmental hazards, such as in search and rescue operations. To do so, they need to reason about risk in perceptually degraded settings and complete their tasks while maintaining safety. Providing safety and performance guarantees for motion planning and control algorithms is a well-studied problem for robotic systems with well-known dynamics operating in structured environments. However, when robots operate in real-world settings where the environment is dynamic and unstructured, the common distributional assumptions used to develop planning algorithms no longer hold, and consequently neither do the safety guarantees. In this talk, I will focus on risk-aware methodologies for robotic autonomy in unstructured environments. I will present techniques that account for uncertainty in static, extreme terrain and in dynamic environments, and I will introduce a theoretical framework for motion planning that accounts for risk in a model predictive control setting. The resulting risk-aware control policies are distributionally robust to the uncertainty in the environment and provide probabilistic guarantees for task completion and recursive feasibility. The goal of the talk is to understand how robots can interpret ambiguity in their environment and how to generate policies that better account for this uncertainty.
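
To give a flavor of what such guarantees can look like, the LaTeX sketch below shows a generic distributionally robust, risk-constrained MPC problem: expected cost is minimized while a conditional value-at-risk (CVaR) constraint on a safety loss J must hold for every disturbance distribution in an ambiguity set P. This is an illustrative formulation with placeholder notation (c, f, J, P, alpha, epsilon), not necessarily the specific one presented in the talk.

    % Illustrative risk-constrained MPC, distributionally robust over an
    % ambiguity set \mathcal{P}; notation is generic, not from the talk.
    \begin{aligned}
    \min_{u_{0:T-1}}\ & \mathbb{E}\Big[\textstyle\sum_{t=0}^{T-1} c(x_t, u_t)\Big] \\
    \text{s.t.}\ & x_{t+1} = f(x_t, u_t, w_t), \qquad w_t \sim \mathcal{D}, \\
    & \sup_{\mathcal{D} \in \mathcal{P}}\ \mathrm{CVaR}_\alpha^{\mathcal{D}}\big[J(x_t)\big] \le \epsilon, \qquad t = 1, \dots, T.
    \end{aligned}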

Biography: Anushri Dixit is a Ph.D. candidate in Control and Dynamical Systems at the California Institute of Technology, advised by Prof. Joel Burdick. Her research focuses on motion planning and control of robots in extreme terrain while accounting for uncertainty in a principled manner. Her work on risk-aware planning methodologies has been deployed on various robotic platforms as part of Team CoSTAR's effort in the DARPA Subterranean Challenge. She has received the D. E. Shaw Zenith Fellowship and was selected as a Rising Star at the Southern California Robotics Symposium. Prior to her Ph.D., she earned her B.S. in Electrical Engineering from the Georgia Institute of Technology in 2017.

Towards Human-Friendly Robots
Joohyung Kim (University of Illinois Urbana-Champaign) 10/21/2022

Abstract: The demand for robots that can interact physically with humans has been growing. Such robots can already be found serving and entertaining people in places such as airports, restaurants, and amusement parks. However, despite advances in the underlying technologies, very few robotic applications meet the public's expectations. For robots to help humans in daily life, we need a better understanding of human environments and tasks, better methods for performing those tasks with robots, and better designs for interacting with humans naturally and safely. In this talk, I will present my work and experience in making human-friendly robots through robot design, motion control, and human-robot interaction.

Biography: Joohyung Kim is currently an Associate Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. His research focuses on design and control of humanoid robots, systems for motion learning on robot hardware, and safe human-robot interaction. He received his BSE and Ph.D. degrees in Electrical Engineering and Computer Science (EECS) from Seoul National University, Korea, in 2001 and 2012, respectively. He was a Research Scientist at Disney Research from 2013 to 2019. Prior to joining Disney, he was a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University for the DARPA Robotics Challenge in 2013. From 2009 to 2012, he was a Research Staff Member at the Samsung Advanced Institute of Technology, Korea, developing motion controllers for humanoid robots.

Robot Learning from Imperfect and Inexpert Teachers
Taylor Kessler Faulkner (University of Washington) 10/28/2022

Abstract: The ability to adapt and learn helps robots deployed in dynamic and varied environments. In the wild, the data a robot has access to includes input from its sensors as well as from the humans around it. Utilizing human data increases the usable information in the environment. However, human data can be noisy, particularly when it comes from inexpert teachers. Rather than relying on experts to give feedback to learning robots, my research develops methods for learning from imperfect human teachers that increase the learning speed and dependability of robots.
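
As a concrete example of learning from an imperfect teacher, here is a minimal Python sketch in the spirit of policy shaping (the Advise algorithm of Griffith et al., NeurIPS 2013), an established approach in this area rather than the speaker's own method. A consistency parameter C models a teacher whose labels are correct only with probability C; the estimated probability that an action is optimal then follows from the net feedback that action has received.

    # Minimal policy-shaping-style sketch for noisy binary feedback
    # (illustrative; parameter names are hypothetical).

    def action_prob_from_feedback(delta: int, consistency: float) -> float:
        """Estimated probability that an action is optimal.

        delta: (# positive labels) - (# negative labels) for this action.
        consistency: probability C that each human label is correct;
        C = 0.5 means the feedback carries no information.
        """
        c_pow = consistency ** delta
        not_c_pow = (1.0 - consistency) ** delta
        return c_pow / (c_pow + not_c_pow)

    # Example: 3 net positive labels from a teacher who is right 80% of the time.
    print(action_prob_from_feedback(delta=3, consistency=0.8))  # ~0.985

Note how the same update gracefully discounts a less reliable teacher: with consistency=0.6, the same three net positive labels yield a much weaker belief (~0.77) that the action is optimal.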

Biography: Taylor Kessler Faulkner is a postdoctoral scholar and UW Data Science Postdoctoral Fellow in Siddhartha Srinivasa's Personal Robotics Lab at the University of Washington. She graduated from UT Austin in August 2022 with a PhD in Computer Science, where she worked with Prof. Andrea Thomaz in the Socially Intelligent Machines Lab and was funded for three years by an NSF Graduate Research Fellowship. Taylor received her B.S. in Computer Science from Denison University in 2016, with minors in Mathematics and Music Performance, graduating summa cum laude with a President's Medal for her work in the Math and Computer Science Department and involvement in the Music Department. Taylor's research enables robots to learn from imperfect human teachers using interactive reinforcement learning. People may not fully understand how robots should complete a task, or they may not have long periods of time available to advise learning robots. Her goal is to create algorithms that allow robots to learn from these potentially inaccurate or inattentive teachers.

Coming of Age of Robot Learning
Pulkit Agrawal (MIT) 11/04/2022

Abstract: Robots are getting smarter at converting complex natural-language commands describing household tasks into step-wise instructions. Yet, they fail to actually perform such tasks! A prominent explanation for these failures is the fragility of low-level skills (e.g., locomotion, grasping, pushing, object re-orientation) and their inability to generalize to unseen scenarios. In this talk, I will discuss a framework for learning low-level skills that surpasses the limitations of current systems at tackling contact-rich tasks and is real-world-ready: it generalizes, runs in real time with onboard computing, and uses commodity sensors. I will describe the framework through the following case studies: (i) a dexterous manipulation system capable of re-orienting novel objects; (ii) a quadruped robot capable of fast locomotion and manipulation on diverse natural terrains; and (iii) learning from a few demonstrations of an object manipulation task to generalize to new object instances in out-of-distribution configurations.

Biography: Pulkit is the Steven and Renee Finn Chair Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT, where he directs the Improbable AI Lab. His research interests span robotics, deep learning, computer vision, and reinforcement learning. His work received the Best Paper Award at the Conference on Robot Learning 2021 and the Best Student Paper Award at the Conference on Computer Supported Collaborative Learning 2011. He is a recipient of the Sony Faculty Research Award, the Salesforce Research Award, the Amazon Research Award, and a Fulbright fellowship, among others. Before joining MIT, he co-founded SafelyYou Inc., received his Ph.D. from UC Berkeley, and earned his Bachelor's degree from IIT Kanpur, where he was awarded the Director's Gold Medal.

VETERANS DAY 11/11/2022

Representations in Robot Manipulation: Learning to Manipulate Ropes, Fabrics, Bags, and Liquids
Daniel Seita (Carnegie Mellon University) 11/18/2022

Abstract: The robotics community has seen significant progress in applying machine learning to robot manipulation. However, much of this research focuses on rigid objects rather than highly deformable objects such as ropes, fabrics, bags, and liquids, which pose challenges due to their complex configuration spaces, dynamics, and self-occlusions. To achieve greater progress in robot manipulation of such diverse deformable objects, I advocate an increased focus on learning and developing appropriate representations for manipulation. In this talk, I show how novel action-centric representations can lead to better imitation learning for manipulating diverse deformable objects, and how such representations can be learned from color images, depth images, or point clouds. My research demonstrates how novel representations can lead to an exciting new era for 3D robot manipulation of complex objects.

Biography: Daniel Seita is a postdoctoral researcher at Carnegie Mellon University, advised by David Held. His research interests lie in machine learning for robot manipulation, with a focus on developing novel observation and action representations to improve manipulation of challenging deformable objects. Daniel holds a PhD in computer science from the University of California, Berkeley, where he was advised by John Canny and Ken Goldberg. He received his B.A. in math and computer science from Williams College. Daniel's research has been supported by a six-year Graduate Fellowship for STEM Diversity and a two-year Berkeley Fellowship. He received an Honorable Mention for Best Paper at UAI 2017 and the 2019 Eugene L. Lawler Prize from the Berkeley EECS department, and was selected as an RSS 2022 Pioneer.

THANKSGIVING 11/25/2022

Leveraging WiFi for Robust and Resource-Efficient SLAM
Aditya Arun (University of California, San Diego) 12/02/2022

Abstract: Indoor robots can increasingly deliver value in diverse industry segments, including logistics, security, and construction. This demand has increased the importance of robust simultaneous localization and mapping (SLAM) algorithms for indoor robots. Robustness is typically provided by fusing information from visual sensors (LiDARs or cameras) with proprioceptive sensors (odometers or IMUs). However, visual sensors can be sensitive to perceptual aliasing, visually dynamic environments, and changing lighting conditions, resulting in SLAM failures.

Biography: Aditya Arun is a fourth-year Ph.D. student at the University of California, San Diego, advised by Dinesh Bharadia. He is part of the WCSNG group, the Center for Wireless Communications, and the Contextual Robotics Institute. His broader research vision is to incorporate WiFi and other wireless technologies as sensing modalities to improve robotics and enable robots to solve real-world problems. His research interests span wireless sensing, robotics, signal processing, and networking. Previously, he completed his B.S. at the University of California, Berkeley.

Aligning Robot Representations with Humans
Andreea Bobu (UC Berkeley) 12/09/2022

Abstract: Robots deployed in the real world will interact with many different humans to perform many different tasks over their lifetimes, which makes it difficult (perhaps even impossible) for designers to specify ahead of time all the aspects that might matter. Instead, robots can extract these aspects implicitly as they learn to perform new tasks from their users' input. The challenge is that this often results in representations that pick up on spurious correlations in the data and fail to capture the human's representation of what matters for the task, yielding behaviors that do not generalize to new scenarios. Consequently, the robot's representation, or abstraction, of the tasks the human wants it to perform may be misaligned with the human's. In my work, I explore ways in which robots can align their representations with those of the humans they interact with so that they can learn more effectively from human input. In this talk, I focus on a divide-and-conquer approach to the robot learning problem: explicitly focus human input on teaching robots good representations before using those representations for learning downstream tasks. We accomplish this by investigating how robots can reason about the uncertainty in their current representation, explicitly query humans for feature-specific feedback to improve it, and then use task-specific input to learn behaviors on top of the new representation.
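
As a schematic illustration of the "reason about uncertainty, then query" step, the Python sketch below uses disagreement across an ensemble of learned feature functions to decide when to ask a human for feature-specific feedback. This is a generic heuristic with assumed names (feature_fns, threshold), not the specific method presented in the talk.

    # Hypothetical sketch: query the human when an ensemble of learned
    # feature maps disagrees strongly at a state (names are illustrative).
    import numpy as np

    def ensemble_disagreement(feature_fns, state) -> float:
        """Mean per-dimension variance across ensemble feature outputs."""
        outputs = np.stack([fn(state) for fn in feature_fns])  # (n_models, feat_dim)
        return float(outputs.var(axis=0).mean())

    def should_query_human(feature_fns, state, threshold=0.1) -> bool:
        # High disagreement suggests an uncertain representation, so human
        # feature-specific feedback is most informative at this state.
        return ensemble_disagreement(feature_fns, state) > threshold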

Biography: Andreea Bobu is a Ph.D. candidate in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley, advised by Professor Anca Dragan. Her research focuses on aligning robot and human representations for more seamless interaction between them. In particular, Andreea studies how robots can learn more efficiently from human feedback by explicitly focusing on learning good intermediate, human-guided representations before using them for task learning. Prior to her Ph.D., she earned her Bachelor's degree in Computer Science and Engineering from MIT in 2017. She is the recipient of the Apple AI/ML Ph.D. Fellowship, is a Rising Star in EECS and an RSS and HRI Pioneer, won the Best Paper Award at HRI 2020, and has worked at NVIDIA Research.