Autumn 2021 Colloquium

Organizers: Patrícia Alves-Oliveira, Maya Cakmak, Karthik Desingh, Dieter Fox, Sam Burden, Siddhartha S. Srinivasa

Robots that Talk and Learn - A Case for Modeling Humans to Design Effective Collaborative AI Systems
Shiwali Mohan (Palo Alto Research Center - PARC) 10/08/2021

Abstract: The recent successes of AI and ML are now accompanied by an ever-increasing expectation of using those methods to support human goals in a variety of contexts. Intelligent solutions are being explored for complex techno-social problems: from technology for improving health outcomes and computational methods for sustainable transportation to AI systems that advance human learning. Not surprisingly, questions about the relationship between intelligent systems and humans have taken center stage in intelligent systems research. In my talk, I will outline a scientific vision for the design and analysis of collaborative human-AI systems. I will situate the discussion in a new intelligent systems problem - interactive task learning for robots - the study of methods that enable a robot to learn new domain and task knowledge from natural interactions with a human. I will introduce the problem and summarize various advances that we have made in recent years. I will conclude with a brief discussion of why models of human behavior and decision-making are critical components of effective intelligent system design.
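
As a minimal illustration of the interactive task learning setting, the toy Python loop below has an agent query its human partner when it lacks knowledge for the current situation and retain what it is taught. The situations, the teacher stand-in, and the memory structure are hypothetical placeholders for illustration, not the methods from the talk.

    # Toy interactive task learning loop: the agent asks its human partner
    # for an action when it lacks knowledge, then remembers the answer.
    # The situations and teacher below are hypothetical placeholders.

    knowledge = {}  # maps a situation to the action the human taught

    def teacher(situation):
        # Stand-in for natural interaction with a human instructor.
        return "stack" if situation == "block on table" else "wait"

    def act(situation):
        if situation not in knowledge:
            # Ask the human and store the newly learned knowledge.
            knowledge[situation] = teacher(situation)
        return knowledge[situation]

    for s in ["block on table", "empty table", "block on table"]:
        print(s, "->", act(s))  # the repeated situation reuses learned knowledge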

Biography: Shiwali Mohan is an AI systems researcher at Xerox PARC. She studies how to design and evaluate human-aware agents - complex decision-making systems that model and reason about their human collaborators. Her research is interdisciplinary and often leverages insights about human behavior from cognitive science, psychology, linguistics, and economics. She has designed intelligent collaborative agents for a variety of application domains including general-purpose robots, preventive healthcare and wellbeing, sustainable transportation, and augmented reality. She is particularly motivated to build intelligent collaborative technology for social good and public welfare. Shiwali was born and brought up in northern India. She has a bachelor's in instrumentation and control engineering from Delhi University. Shiwali received her Ph.D. in computer science in 2015 from the University of Michigan, Ann Arbor where she was part of the Soar cognitive architecture group. She joined Xerox PARC in 2014 as a postdoc and is currently a Senior Member of Research Staff.

Enabling Grounded Language Communication for Human-Robot Teaming
Thomas Howard (University of Rochester) 10/15/2021

Abstract: The ability for robots to effectively understand natural language instructions and convey information about their observations and interactions with the physical world is highly dependent on the sophistication and fidelity of the robot’s representations of language, environment, and actions. As we progress towards more intelligent systems that perform a wider range of tasks in a greater variety of domains, we need models that can adapt their representations of language and environment to achieve the real-time performance necessitated by the cadence of human-robot interaction within the computational resource constraints of the platform. In this talk I will review my laboratory’s research on algorithms and models for robot planning, mapping, control, and interaction with a specific focus on language-guided adaptive perception and bi-directional communication with deliberative interactive estimation.
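
To make the grounding problem concrete, here is a minimal, hypothetical Python sketch of mapping a noun phrase to an object in a robot's world model by scoring candidate groundings. The toy world model and matching score are assumptions for illustration only; the laboratory's actual models use structured probabilistic representations (for example, the Distributed Correspondence Graph family associated with this line of work).

    # Toy world model: each object is a dict of attributes.
    world = [
        {"name": "block_1", "color": "red", "position": (0.2, 0.1)},
        {"name": "block_2", "color": "blue", "position": (0.5, 0.3)},
    ]

    def score(phrase: str, obj: dict) -> float:
        # Hypothetical compatibility score: count attribute words that match.
        return sum(1.0 for word in phrase.lower().split() if word == obj["color"])

    def ground(phrase: str):
        # Return the highest-scoring candidate grounding, or None if no match.
        best = max(world, key=lambda obj: score(phrase, obj))
        return best if score(phrase, best) > 0 else None

    print(ground("the red block"))  # -> the dict for block_1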

Biography: Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering at the University of Rochester. He also holds secondary appointments in the Department of Biomedical Engineering and the Department of Computer Science, is an affiliate of the Goergen Institute for Data Science, and directs the University of Rochester's Robotics and Artificial Intelligence Laboratory. Previously, he held appointments as a research scientist and a postdoctoral associate in the Robust Robotics Group at MIT's Computer Science and Artificial Intelligence Laboratory, as a research technologist in the Robotic Software Systems Group at the Jet Propulsion Laboratory, and as a lecturer in mechanical engineering at Caltech. Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009, in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction, with a focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments, with applications to robot motion planning, natural language understanding, and human-robot teaming. Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the 2007 DARPA Urban Challenge. Howard has earned Best Paper Awards at RSS (2016) and IEEE SMC (2017) and two NASA Group Achievement Awards (2012, 2014), was a finalist for the ICRA Best Manipulation Paper Award (2012), and was selected for the NASA Early Career Faculty Award (2019). Howard's research at the University of Rochester has been supported by the National Science Foundation, the Army Research Office, the Army Research Laboratory, the Department of Defense Congressionally Directed Medical Research Program, the National Aeronautics and Space Administration, and the New York State Center of Excellence in Data Science.

Towards Understanding Animal and Robot Locomotion in Complex 3-D Terrain
Chen Li (Johns Hopkins University) 10/22/2021

Abstract: We understand fairly well how terrestrial animals use and control interaction with the environment to generate and stabilize near-steady-state, single-mode locomotion such as walking and running on relatively flat surfaces, and many bio-inspired robots are becoming robust at doing so. Indeed, it is because we understand such interaction as well as we do that robots like those from Boston Dynamics exist. However, robots are still far from robust at traversing (not avoiding) complex 3-D terrain with obstacles as large as themselves, an ability required for applications like search and rescue in rubble and debris, environmental monitoring in forests and mountains, and sample collection among cluttered extraterrestrial rocks. By contrast, many animals do so with ease by dynamically transitioning across different modes of locomotion. Here, we review our lab's progress towards filling this gap by revealing how to use and control physical interaction with the environment to make locomotor transitions. We take an interdisciplinary, integrative approach at the interface of biology, engineering, and physics. Considering the heterogeneity of complex 3-D terrain, as a first step we abstract from it distinct model challenges and create platforms, analogous to wind/flow tunnels, that enable controlled, repeatable experiments. We study legged and limbless model organisms (cockroaches and snakes) capable of robust locomotion in complex 3-D terrain in these platforms and develop techniques to measure locomotor-terrain interaction in detail. We also create and test robotic physical models of the animals to further enable systematic variation of locomotor parameters and controlled studies of feedforward and feedback control strategies. Finally, we perform physics modeling to understand how locomotor transitions emerge from physical interaction and can be controlled by the animal or robot. For the abstracted locomotor challenges, the general physical principles and strategies revealed have already advanced robot performance. We are working towards further understanding how to sense physical interaction during locomotion and use feedback control and reactive planning to enable robust transitions across heterogeneous complex 3-D terrain.

Biography: Chen Li is an Assistant Professor in the Department of Mechanical Engineering and a faculty member of the Laboratory for Computational Sensing and Robotics at Johns Hopkins University. He earned B.S. and PhD degrees in physics from Peking University and Georgia Tech, respectively, and performed postdoctoral research in Integrative Biology and Robotics at UC Berkeley as a Miller Fellow. Dr. Li's research aims at creating the new field of terradynamics, analogous to aero- and hydrodynamics, at the interface of biology, robotics, and physics, and using terradynamics to understand animal locomotion and advance robot locomotion in the real world. Dr. Li has won several early career awards, including a Burroughs Wellcome Fund Career Award at the Scientific Interface, a Beckman Young Investigator Award, and an Army Research Office Young Investigator Award, and was selected as a Kavli Frontiers of Science Fellow by the National Academy of Sciences. He has won a Best PhD Thesis award at Georgia Tech and several best student/highlight/best paper awards (Society for Integrative & Comparative Biology, Bioinspiration & Biomimetics, Advanced Robotics, IROS). To learn more, visit the Terradynamics Lab at: https://li.me.jhu.edu/

Towards Robust HRI: A Quality Diversity Approach
Stefanos Nikolaidis (University of Southern California) 10/29/2021

Abstract: The growing scale and complexity of interactions between humans and robots highlight the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring the diverse scenarios of interaction between humans and robots in simulation can improve understanding of complex HRI systems and avoid potentially costly failures in real-world settings. In this talk, I propose formulating the problem of automatic scenario generation in HRI as a quality diversity problem, where the goal is not to find a single global optimum, but a diverse range of failure scenarios that explore both environments and human actions. I show how standard quality diversity algorithms can discover interesting and diverse scenarios in the shared autonomy domain. I then discuss the development of a new class of quality diversity algorithms that significantly improve the search of the scenario space, and the integration of these algorithms with generative models, which enables the generation of complex and realistic scenarios. Finally, I discuss applications in procedural content generation and human preference learning.
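
For readers unfamiliar with quality diversity, the sketch below illustrates the core idea with a minimal MAP-Elites-style loop in Python: rather than keeping one best solution, an archive keeps the best scenario found in each cell of a behavior space. The scenario encoding, the simulate stand-in, and the behavior descriptors are hypothetical placeholders, not the talk's actual algorithms or HRI simulator.

    import random

    # Minimal MAP-Elites-style quality diversity loop (illustrative only).
    # simulate() is a hypothetical stand-in for an HRI simulation returning a
    # scenario's quality (e.g., failure severity) and behavior descriptors
    # (e.g., human effort, completion time) used to index the archive.

    def simulate(scenario):
        quality = -sum((x - 0.5) ** 2 for x in scenario)  # toy quality measure
        desc = (round(scenario[0], 1), round(scenario[1], 1))
        return quality, desc

    def map_elites(dim=4, iterations=5000, n_init=100):
        archive = {}  # discretized behavior descriptor -> (quality, scenario)
        for i in range(iterations):
            if i < n_init or not archive:
                scenario = [random.random() for _ in range(dim)]
            else:
                # Pick a random elite and perturb it with Gaussian noise.
                _, parent = random.choice(list(archive.values()))
                scenario = [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
                            for x in parent]
            quality, desc = simulate(scenario)
            # Each behavior cell keeps only the best scenario seen so far.
            if desc not in archive or quality > archive[desc][0]:
                archive[desc] = (quality, scenario)
        return archive

    elites = map_elites()
    print(len(elites), "diverse elite scenarios")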

Biography: Stefanos Nikolaidis is an Assistant Professor of Computer Science at the University of Southern California and leads the Interactive and Collaborative Autonomous Robotic Systems (ICAROS) lab. His research focuses on stochastic optimization approaches for the learning and evaluation of human-robot interactions. His work leads to end-to-end solutions that enable deployed robotic systems to act robustly when interacting with people in practical, real-world applications. Stefanos completed his PhD at Carnegie Mellon's Robotics Institute and received an MS from MIT, an MEng from the University of Tokyo, and a BS from the National Technical University of Athens. His research has been recognized with best paper awards and nominations from the IEEE/ACM International Conference on Human-Robot Interaction, the International Conference on Intelligent Robots and Systems, and the International Symposium on Robotics.

Assistive Autonomy Revisited
Brenna Argall (Northwestern University) 11/05/2021

Abstract: As need increases, access decreases. It is a paradox that as human motor impairments become more severe, and increasing assistance needs are paired with decreasing motor abilities, the very machines created to provide this assistance become less and less accessible to operate with independence. My lab addresses this paradox by incorporating robotics autonomy and intelligence into physically-assistive machines: leveraging robotics autonomy to advance human autonomy. Achieving the correct allocation of control between the human and the autonomy is essential, and critical for adoption. The allocation must be responsive to individual abilities and preferences, which moreover can change over time, and robust to human-machine information flow that is filtered and masked by motor impairment and the control interface. As we see time and again in our work and within the field, customization and adaptation are key, and so the opportunities for machine learning are clear. This talk will overview a sampling of ongoing projects and studies in my lab, with a focus on alternate paradigms for delivering assistive autonomy.
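
As one illustration of control allocation, the sketch below shows a linear-blending paradigm common in the shared-control literature, in which the executed command mixes the human's input with the autonomy's suggestion. The blending rule and the confidence heuristic are toy assumptions for illustration, not the argallab's implementation.

    import numpy as np

    # Illustrative linear blending for shared control: the executed command is
    # a weighted mix of the human's input and the autonomy's suggestion.
    # alpha = 0 gives full human control; alpha = 1 gives full autonomy.

    def blend(u_human, u_autonomy, alpha):
        return (1.0 - alpha) * u_human + alpha * u_autonomy

    def confidence_based_alpha(distance_to_goal, max_assist=0.8):
        # Toy rule: assist more as the inferred goal becomes more certain.
        return max_assist * np.exp(-distance_to_goal)

    u_h = np.array([0.3, 0.0])   # e.g., a 2-D joystick command
    u_a = np.array([0.5, 0.2])   # autonomy's command toward an inferred goal
    print(blend(u_h, u_a, confidence_based_alpha(distance_to_goal=0.5)))

In practice, the arbitration weight would be adapted to the individual user, the control interface, and the autonomy's confidence in its goal inference, which is where the customization and learning opportunities described above enter.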

Biography: Brenna Argall is an associate professor of Mechanical Engineering, Computer Science, and Physical Medicine & Rehabilitation at Northwestern University. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago), the #1-ranked rehabilitation hospital in the United States. The mission of the argallab is to advance human ability by leveraging robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award and was named one of the 40 under 40 by Crain's Chicago Business. She received her Ph.D. in Robotics (2009) from the Robotics Institute at Carnegie Mellon University, where she also earned her B.S. in Mathematics (2002). Prior to joining Northwestern and RIC, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a computational biology position at the National Institutes of Health (NIH). More recently, she was a visiting fellow at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland (2019).

Learning Better Ways to Measure and Move: Joint Optimization of an Agent's Physical Design and Computational Reasoning
Matthew Walter (Toyota Technological Institute at Chicago) 11/12/2021

Abstract: The recent surge of progress in machine learning foreshadows the advent of sophisticated intelligent devices and agents capable of rich interactions with the physical world. Many of these advances focus on building better computational methods for inference and control---computational reasoning methods trained to discover and exploit the statistical structure and relationships in their problem domain. However, the design of physical interfaces through which a machine senses and acts in its environment is as critical to its success as the efficacy of its computational reasoning. Perception problems become easier when sensors provide measurements that are more informative towards the quantities to be inferred. Control policies become more effective when an agent's physical design permits greater robustness and dexterity in its actions. Thus, the problems of physical design and computational reasoning are coupled, and the answer to what combination is optimal naturally depends on the environment the machine operates in and the task before it. I will present learning-based methods that perform automated, data-driven optimization over sensor measurement strategies and physical configurations jointly with computational inference and control. I will first describe a framework that reasons over the configuration of sensor networks in conjunction with the corresponding algorithm that infers spatial phenomena from noisy sensor readings. Key to the framework is encoding sensor network design as a differentiable neural layer that interfaces with a neural network for inference, allowing for joint optimization using standard techniques for training neural networks. Next, I will present a method that draws on the success of data-driven approaches to continuous control to jointly optimize the physical structure of legged robots and the control policy that enables them to locomote. The method maintains a distribution over designs and uses reinforcement learning to optimize a shared control policy to maximize the expected reward over the design distribution. I will then describe recent work that extends this approach to the coupled design and control of physically realizable soft robots. If time permits, I will conclude with a discussion of ongoing work that seeks to improve test-time generalization of the learned policies.
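
To make the joint-optimization idea concrete, here is a minimal sketch of maintaining a Gaussian distribution over designs while improving a shared policy to maximize expected reward over that distribution. The rollout function and the evolution-strategies-style updates are hypothetical simplifications of the approach; the talk's method trains the policy with reinforcement learning.

    import numpy as np

    # Sketch of co-optimizing a distribution over designs and a shared policy.
    # rollout() is a hypothetical stand-in for simulating a robot with a given
    # design under given policy parameters and returning total reward.

    rng = np.random.default_rng(0)

    def rollout(design, policy):
        # Toy reward: favors designs near 1.0 and policies matched to them.
        return -np.sum((design - 1.0) ** 2) - np.sum((policy - design) ** 2)

    design_mu = np.zeros(3)   # mean of a Gaussian distribution over designs
    policy = np.zeros(3)      # parameters of the shared control policy
    sigma, lr, n = 0.1, 0.05, 64

    for _ in range(500):
        d_eps = rng.standard_normal((n, 3))
        p_eps = rng.standard_normal((n, 3))
        rewards = np.array([rollout(design_mu + sigma * d_eps[i],
                                    policy + sigma * p_eps[i])
                            for i in range(n)])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # Step both the design distribution and the shared policy uphill.
        design_mu += lr * (adv @ d_eps) / (n * sigma)
        policy += lr * (adv @ p_eps) / (n * sigma)

    print("design mean:", design_mu.round(2), "policy:", policy.round(2))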

Biography: Matthew R. Walter is an assistant professor at the Toyota Technological Institute at Chicago. His interests revolve around the realization of intelligent, perceptually aware robots that are able to act robustly and effectively in unstructured environments, particularly with and alongside people. His research focuses on machine learning-based solutions that allow robots to learn to understand and interact with the people, places, and objects in their surroundings. Matthew has investigated these areas in the context of various robotic platforms, including autonomous underwater vehicles, self-driving cars, voice-commandable wheelchairs, mobile manipulators, and autonomous cars for (rubber) ducks. Matthew obtained his Ph.D. from the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, where his thesis focused on improving the efficiency of inference for simultaneous localization and mapping.

Bridging Safety and Learning in Human-Robot Interaction
Andrea Bajcsy (UC Berkeley) 11/19/2021

Abstract: From autonomous cars in cities to mobile manipulators at home, robots must learn about people in order to effectively interact with them. However, by blindly trusting their learning algorithms, today's robots confidently plan unsafe behaviors around people, resulting in anything from miscoordination to dangerous collisions. My research aims to ensure safety in human-robot interaction, particularly when robots learn from and about humans. In this talk, I will discuss how treating robot learning algorithms as dynamical systems driven by human data enables safe human-robot interaction. I will first introduce a Bayesian monitor which infers online whether the robot's learned model can evolve to explain observed human data well. I will then discuss how control-theoretic tools enable us to formally quantify what the robot could learn online from human data and how quickly it could learn it. Coupling these ideas with robot motion planning algorithms, I will demonstrate how robots can safely and automatically adapt their behavior based on how trustworthy their learned human models are. I will end this talk by taking a step back to raise the question "What is the 'right' notion of safety when robots interact with people?" and to discuss opportunities for how rethinking our notions of safety can capture more subtle aspects of human-robot interaction.
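
Here is a minimal sketch of the Bayesian-monitor idea, assuming a common noisily-rational human model in which action probabilities scale with exp(beta * Q(s, a)): if the posterior over beta concentrates on low values, the learned model explains the observed human poorly. The candidate beta grid and the learned Q-values are hypothetical placeholders, not the talk's actual formulation.

    import numpy as np

    # Bayesian monitor over how well a learned human model explains behavior,
    # under a noisily-rational model where P(a) is proportional to
    # exp(beta * Q(s, a)). Posterior mass shifting toward low beta flags
    # that the model explains the human's actions poorly.

    betas = np.array([0.0, 0.5, 1.0, 2.0, 4.0])  # candidate model confidences
    belief = np.ones(len(betas)) / len(betas)    # uniform prior over beta

    def action_likelihood(q_values, action, beta):
        logits = beta * q_values
        p = np.exp(logits - logits.max())
        return p[action] / p.sum()

    def update(belief, q_values, observed_action):
        like = np.array([action_likelihood(q_values, observed_action, b)
                         for b in betas])
        posterior = belief * like
        return posterior / posterior.sum()

    # The learned model strongly prefers action 0, but the human keeps
    # choosing action 2, so the posterior shifts toward low beta.
    q = np.array([2.0, 0.5, 0.1])
    for _ in range(5):
        belief = update(belief, q, observed_action=2)
    print(belief.round(3))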

Biography: Andrea Bajcsy is a Ph.D. candidate at UC Berkeley in the Electrical Engineering and Computer Science Department advised by Professors Anca Dragan and Claire Tomlin. She studies safe human-robot interaction, particularly when robots learn from and about people. Her research unites traditionally disparate methods from control theory and machine learning to develop theoretical frameworks and practical algorithms for human-robot interaction in domains like assistive robotic arms, quadrotors, and autonomous cars. Prior to her Ph.D., she earned her B.S. at the University of Maryland, College Park in Computer Science in 2016. She is the recipient of the NSF Graduate Research Fellowship, UC Berkeley Chancellor’s Fellowship, and has worked at NVIDIA Research and Max Planck Institute for Intelligent Systems.

Thanksgiving 11/26/2021

Operational Robotics and AI, at Amazon Scale
Michael Wolf (Amazon) 12/03/2021

Abstract: This talk discusses robotics and computer vision challenges and possibilities in Amazon’s order fulfillment operations, including potential research topics for UW-Amazon collaborations. The scale of Amazon’s vast operations both motivates the need for robotics and serves as the driving requirement for resilient autonomy. In this talk, we will survey key manipulation, mobility, and perception applications at Amazon, how they are impacted by the need to scale, and where we anticipate gaps in the current state of the art. Finally, we will introduce Amazon’s growing “Science Hub” program that seeks to establish partnerships with university researchers on these challenges.

Biography: Dr. Michael Wolf is a robotics technologist developing adaptable, full-system autonomy for safe and resilient robots. He has been with Amazon Robotics AI as a Principal Applied Scientist since 2020. Prior to that, he spent 12 years at the NASA Jet Propulsion Laboratory (JPL), where he was a Directorate Principal for vehicle autonomy & multi-robot systems and the founding manager of the multi-agent autonomy group. Michael has led numerous technology development projects as a PI for NASA and DoD autonomous systems research, with an emphasis on field robotics, rapid tech maturation, and full-scale system demonstrations in relevant environments. Specific research interests include autonomy software architectures, safety-critical motion planning, multi-target tracking and prediction, natural human–robot interfaces, and verification of autonomy. He completed his PhD and MS at Caltech (Mechanical Engineering Robotics lab, with a minor in Controls & Dynamical Systems) and his BS in Mechanical Engineering at Stanford.

Towards Generalization of Precise Robot Skills: Accurate Pick-and-Place of Novel Objects
Maria Bauza Villalonga (MIT) 12/10/2021

Abstract: Reliable robots must understand their environment and act on it with precision. Practical robots should also achieve wide generalization; i.e., a single robot should be capable of solving multiple tasks. For instance, we would like to have, but still lack, a robot that can reliably assemble most IKEA furniture, instead of having one robot tailored to each piece of furniture. Towards this, in this talk I will present an approach to robotic pick-and-place that provides robots with both high-precision and generalization skills. The proposed method uses only simulation to learn probabilistic models for grasping, planning, and localization that transfer to the actual robotic system with high accuracy. In real experiments, we show that our dual-arm robot can perform task-aware picks of new objects, use visuo-tactile sensing to localize them, and execute dexterous placements of these objects that involve in-hand regrasps and tight placing requirements with less than 1 mm of tolerance. Overall, our proposed approach can handle new objects and placing configurations, providing the robot with precise generalization skills.
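
As a toy illustration of simulation-based localization, the sketch below maintains a belief over a discrete set of candidate object poses and updates it by comparing observed tactile readings against simulated ones. The simulated-measurement table and Gaussian sensor model are hypothetical stand-ins; the actual approach learns far richer probabilistic models in simulation.

    import numpy as np

    # Toy Bayesian filter over a discrete set of candidate object poses:
    # compare observed tactile readings to simulated readings for each pose.

    rng = np.random.default_rng(1)
    n_poses, obs_dim = 50, 8
    simulated = rng.standard_normal((n_poses, obs_dim))  # per-pose prediction
    belief = np.ones(n_poses) / n_poses                  # uniform prior

    def update(belief, observation, noise=0.5):
        # Poses whose simulated reading is close to the observation gain mass.
        sq_err = ((simulated - observation) ** 2).sum(axis=1)
        likelihood = np.exp(-sq_err / (2 * noise ** 2))
        posterior = belief * likelihood
        return posterior / posterior.sum()

    true_pose = 7
    for _ in range(3):
        obs = simulated[true_pose] + 0.1 * rng.standard_normal(obs_dim)
        belief = update(belief, obs)
    print("most likely pose:", int(belief.argmax()))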

Biography: Maria Bauza Villalonga is a PhD student in Robotics at the Massachusetts Institute of Technology, working with Professor Alberto Rodriguez. Before that, she received Bachelor's degrees in Mathematics and Physics from CFIS, an excellence center at the Polytechnic University of Catalonia. Her research focuses on achieving precise robotic generalization by learning probabilistic models of the world that allow robots to reuse their skills across multiple tasks with high success. Maria has received several fellowships, including Facebook, NVIDIA, and La Caixa fellowships. Her research has obtained awards such as Best Paper Finalist in Service Robotics at ICRA 2021, the Best Cognitive Paper award at IROS 2018, and Best Paper award finalist at IROS 2016. She was also part of the MIT-Princeton Team participating in the Amazon Robotics Challenge, winning the stowing task in 2017 and receiving the 2018 Amazon Best Systems Paper Award in Manipulation.