Winter 2021 Colloquium

Organizers: Karthik Desingh, Dieter Fox

Scaling Probabilistically Safe Learning to Robotics
Scott Niekum (University of Texas, Austin) 01/29/2021

Abstract: Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. In recent years, so-called "high-confidence" reinforcement learning algorithms have enjoyed success in application areas with high-quality models and plentiful data, but robotics remains a challenging domain for scaling up such approaches. Furthermore, very little work has been done on the even more difficult problem of safe imitation learning, in which the demonstrator's reward function is not known. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient policy evaluation. The proposed algorithms offer a blend of safety and practicality, taking a significant step toward high-confidence robot learning with modest amounts of real-world data.
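
For readers unfamiliar with the "high-confidence" framing, the following is a minimal sketch, not Niekum's algorithm, of how off-policy evaluation can yield a probabilistic performance guarantee: importance-sample logged trajectories, then apply a concentration inequality to lower-bound a new policy's expected return. The function names, the Hoeffding-style bound, and the assumption of bounded returns are all illustrative assumptions.

    # Illustrative sketch only: high-confidence off-policy evaluation via
    # per-trajectory importance sampling plus a Hoeffding-style lower bound.
    # Assumes importance-weighted returns lie in [0, value_range], which is
    # what makes the guarantee hold (and is often the hard part in practice).
    import math

    def importance_weighted_returns(trajectories, pi_new, pi_behavior):
        """One importance-weighted return per logged trajectory.
        Each trajectory is a list of (state, action, reward) tuples;
        pi_new / pi_behavior map (action, state) to a probability."""
        weighted = []
        for traj in trajectories:
            ratio, ret = 1.0, 0.0
            for state, action, reward in traj:
                ratio *= pi_new(action, state) / pi_behavior(action, state)
                ret += reward
            weighted.append(ratio * ret)
        return weighted

    def hoeffding_lower_bound(samples, value_range, delta=0.05):
        """With probability at least 1 - delta, the true mean exceeds this."""
        n = len(samples)
        mean = sum(samples) / n
        return mean - value_range * math.sqrt(math.log(1.0 / delta) / (2.0 * n))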

Biography: Scott Niekum is an Assistant Professor and the director of the Personal Autonomous Robotics Lab (PeARL) in the Department of Computer Science at UT Austin. He is also a core faculty member in the interdepartmental robotics group at UT. Prior to joining UT Austin, Scott was a postdoctoral research fellow at the Carnegie Mellon Robotics Institute and received his Ph.D. from the Department of Computer Science at the University of Massachusetts Amherst. His research interests include imitation learning, reinforcement learning, and robotic manipulation. Scott is a recipient of the 2018 NSF CAREER Award and 2019 AFOSR Young Investigator Award.

Policy Learning in Spatial Action Spaces
Robert Platt (Northeastern University) 02/05/2021

Abstract: While model-free reinforcement learning has recently had many successes, it is not yet clear how or whether it is the best approach to planning and control for robotic manipulation. An attractive approach is to give the agent access to a "spatial action space" where actions correspond to positions and orientations in SE(2) or SE(3). While this approach enables the agent to reason at a suitably high level of abstraction, it also means that the agent must learn in a large and complex action space, which can make policy learning difficult. This talk proposes several ideas that can facilitate robotic policy learning in spatial action spaces. The work is evaluated in simulation and on physical systems on challenging manipulation tasks.
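
To make the idea concrete, here is a minimal sketch, under assumptions of ours rather than the speaker's system, of a discretized spatial action space in SE(2), where each action is a target gripper pose rather than a low-level motor command. Even a coarse discretization yields a large action set, which is exactly the learning difficulty the talk addresses. The workspace size, grid resolution, and rotation count are arbitrary choices for illustration.

    # Illustrative sketch: a discretized SE(2) spatial action space.
    import itertools
    import math
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SE2Action:
        x: float      # target gripper position (meters)
        y: float
        theta: float  # target gripper orientation (radians)

    def discretized_se2_actions(workspace=1.0, grid=8, rotations=16):
        """Enumerate a grid of candidate poses over the workspace."""
        xs = [workspace * i / (grid - 1) for i in range(grid)]
        thetas = [2.0 * math.pi * k / rotations for k in range(rotations)]
        return [SE2Action(x, y, t)
                for x, y, t in itertools.product(xs, xs, thetas)]

    print(len(discretized_se2_actions()))  # 8 * 8 * 16 = 1024 candidate poses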

Biography: Rob Platt is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University. He is interested in developing robots that can perform complex manipulation tasks alongside humans in the uncertain everyday world. Much of his work is at the intersection of robotic policy learning, planning, and perception. Prior to coming to Northeastern, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center.

Robots That Care: Socially Assistive Robotics and the Future of Work and Care
Maja Mataric (University of Southern California) 02/19/2021

Abstract: The nexus of advances in robotics, NLU, and machine learning has created opportunities for personalized robots for the ultimate robotics frontier: the home. The current pandemic has both caused and exposed unprecedented levels of health & wellness, education, and training needs worldwide, which must increasingly be addressed in the home. The pandemic has also challenged our notions of how effective (as well as inclusive and equitable) our AI, ML, and robotics technologies currently are. This talk will discuss the potential of human-machine interaction in general, and socially assistive robotics in particular, in addressing those challenges. The talk will cover human-robot interaction methods for socially assistive robotics that utilize multi-modal interaction data and expressive and persuasive robot behavior to monitor, coach, and motivate users to engage in health, wellness, education, and training activities. Methods and results will be presented that include modeling, learning, and personalizing the enhancement of user motivation, engagement, and coaching, with validation from studies involving healthy children and adults, stroke patients, Alzheimer's patients, children with autism spectrum disorders, and other user populations, in short- and long-term (month+) deployments in schools, therapy centers, and homes. Outstanding research challenges and commercialization pathways will also be discussed.

Biography: Maja Matarić is the Chan Soon-Shiong Distinguished Professor of Computer Science, Neuroscience, and Pediatrics at USC, founding director of the Robotics and Autonomous Systems Center, and interim Vice President of Research. She received her PhD and MS from MIT and her BS from the University of Kansas. She is a Fellow of AAAS, IEEE, AAAI, and ACM, and a recipient of the US Presidential Award for Excellence in Science, Mathematics & Engineering Mentoring from President Obama, the Anita Borg Institute Women of Vision Award, the NSF CAREER Award, the MIT TR35 Innovation Award, and the IEEE RAS Early Career Award. She is active in K-12 and diversity outreach. A pioneer of socially assistive robotics, her lab's research is developing personalized human-robot interaction methods for convalescence, rehabilitation, training, and education that have been validated in autism, stroke, Alzheimer's, and other domains. She is also a co-founder of Embodied, Inc.

Enhancing Human Capability with Intelligent Machine Teammates
Julie Shah (MIT) 02/26/2021

Abstract: Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can't explain how. In this talk, I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people's performance on military, healthcare, and manufacturing tasks when aided by intelligent machine teammates. I will also discuss ongoing efforts in the MIT Schwarzman College of Computing to advance Social and Ethical Responsibilities of Computing (SERC) in the teaching, research, and implementation of computing.
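
As one concrete, purely illustrative reading of "inferring the mental state of human teammates": a machine can maintain a Bayesian belief over which goal the human is pursuing and update it after each observed action. The likelihood model and goal names below are assumptions of ours, not Shah's models.

    # Illustrative sketch: Bayesian belief update over a teammate's goal.
    # likelihood(action, goal) gives P(action | goal) and is assumed given.
    def update_belief(belief, action, likelihood):
        """belief: dict mapping goal -> probability; returns the posterior."""
        posterior = {g: p * likelihood(action, g) for g, p in belief.items()}
        total = sum(posterior.values())
        return {g: p / total for g, p in posterior.items()}

    # After observing the human reach left, mass shifts toward the left bin.
    belief = {"left_bin": 0.5, "right_bin": 0.5}
    belief = update_belief(belief, "reach_left",
                           lambda a, g: 0.8 if g == "left_bin" else 0.2)
    print(belief)  # {'left_bin': 0.8, 'right_bin': 0.2}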

Biography: Julie Shah is associate dean of Social and Ethical Responsibilities of Computing at MIT, a Professor of Aeronautics and Astronautics, and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She is expanding the use of human cognitive models for artificial intelligence and has translated her work to manufacturing assembly lines, healthcare applications, transportation, and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also featured in Technology Review's 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.

Semantic Robot Programming... and Maybe Making the World a Better Place
Chad Jenkins (University of Michigan) 03/05/2021

Abstract: The vision of interconnected, heterogeneous autonomous robots in widespread use is a coming reality that will reshape our world. Similar to "app stores" for modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers. In order for people to fluently program autonomous robots, a robot must be able to interpret user instructions that accord with that user's model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that a critical missing component is the grounding of semantic symbols in a manner that addresses both uncertainty in low-level robot perception and intentionality in high-level reasoning. Such a grounding will enable robots to fluidly work with human collaborators to perform tasks that require extended goal-directed autonomy. I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalizing user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.
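
The declarative flavor of SRP can be suggested with a small sketch: the task is specified as a desired goal scene over semantic object symbols rather than as a motion trajectory. The relation vocabulary and goal check below are our illustrative assumptions, not the actual SRP formalism.

    # Illustrative sketch: a task as a declarative goal scene over semantic
    # symbols; a planner searches for manipulation actions until it holds.
    goal_scene = {
        "relations": [
            ("on", "mug", "tray"),    # the mug should end up on the tray
            ("on", "plate", "tray"),
        ],
    }

    def goal_satisfied(perceived_relations, goal):
        """The perceived scene satisfies the goal if every declared relation holds."""
        return all(rel in perceived_relations for rel in goal["relations"])

    print(goal_satisfied({("on", "mug", "tray"), ("on", "plate", "tray")},
                         goal_scene))  # True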

Biography: Odest Chadwicke Jenkins, Ph.D., is a Professor of Computer Science and Engineering and Associate Director of the Robotics Institute at the University of Michigan. Prof. Jenkins earned his B.S. in Computer Science and Mathematics at Alma College (1996), M.S. in Computer Science at Georgia Tech (1998), and Ph.D. in Computer Science at the University of Southern California (2003). He previously served on the faculty of Brown University in Computer Science (2004-15). His research addresses problems in interactive robotics and human-robot interaction, primarily focused on mobile manipulation, robot perception, and robot learning from demonstration. His research often intersects topics in computer vision, machine learning, and computer animation. Prof. Jenkins has been recognized as a Sloan Research Fellow and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). His work has also been supported by Young Investigator awards from the Office of Naval Research (ONR), the Air Force Office of Scientific Research (AFOSR), and the National Science Foundation (NSF). Prof. Jenkins is currently serving as Editor-in-Chief of the ACM Transactions on Human-Robot Interaction. He is a Fellow of the American Association for the Advancement of Science and a Senior Member of the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers. He is an alumnus of the Defense Science Study Group (2018-19).

Structuring Manipulation Tasks for More Efficient Learning
Oliver Kroemer (CMU) 03/12/2021

Abstract: In the future, we want to create robots with the robustness and versatility to operate in unstructured, everyday environments. To achieve this goal, robots will need to learn manipulation skills that can be applied to a wide range of objects and task scenarios. In this talk, I will present recent work from my lab on structuring manipulation tasks for more efficient learning. I will begin by discussing how modularity can be used to break down challenging manipulation tasks in order to learn general object-centric solutions. I will then focus on the question of what to learn. I will discuss how robots can use model-based reasoning to identify relevant context parameters for adapting skills, as well as to determine when to learn a skill. I will conclude by discussing how robots can use interactions and multimodal sensing to learn manipulation-oriented representations of different materials.
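
One way to picture the modularity argument, in an illustrative sketch of ours rather than Kroemer's implementation: decompose a task into reusable skill modules with preconditions and effects over a symbolic state, parameterized by the object acted on. All names and the state encoding are assumptions.

    # Illustrative sketch: modular skills with preconditions and effects.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Skill:
        name: str
        precondition: Callable[[Dict], bool]
        effect: Callable[[Dict], Dict]

    def execute_plan(skills, state):
        """Run each module in turn, checking its precondition first."""
        for skill in skills:
            assert skill.precondition(state), f"{skill.name}: precondition failed"
            state = skill.effect(state)
        return state

    # A two-module "pick then place" decomposition, reusable for any object.
    pick = Skill("pick", lambda s: s["holding"] is None,
                 lambda s: {**s, "holding": s["target"]})
    place = Skill("place", lambda s: s["holding"] is not None,
                  lambda s: {**s, "holding": None, "placed": s["target"]})
    print(execute_plan([pick, place], {"target": "mug", "holding": None}))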

Biography: Dr. Oliver Kroemer is an assistant professor at the Carnegie Mellon University (CMU) Robotics Institute, where he leads the Intelligent Autonomous Manipulation Lab. His research focuses on developing algorithms and representations to enable robots to learn versatile and robust manipulation skills. Before joining CMU, Dr. Kroemer was a postdoctoral researcher at the University of Southern California (USC) for two and a half years. He received his Master's and Bachelor's degrees in engineering from the University of Cambridge in 2008. From 2009 to 2011, he was a Ph.D. student at the Max Planck Institute for Intelligent Systems. He defended his Ph.D. thesis on Machine Learning for Robot Grasping and Manipulation in 2014 at the Technische Universitaet Darmstadt.