Winter 2019 Colloquium

Organizers: Tapomayukh Bhattacharjee, Maya Cakmak, Dieter Fox, Siddhartha S. Srinivasa

Compliant Robots Without Complaints: Integrating Mechanics + Differential Geometry + Planning for Quick and Precise Compliant Robot Design
Surya Singh (The University of Queensland (UQ)) 01/11/2019

Abstract: Compliant robotic systems find broad application from manufacturing to surgery. While they are amenable and soft, their planning/controls are typically slow and hard. In a typical application, over 80% of the planning time is spent just evaluating the forward model. Coupled with large deformations and continuous action spaces, the computational load can grow exponentially. To address this shortcoming, this talk introduces principled (Lie-symmetric) motion compensation, in which search is directed along Lie subgroups and orbits so as to reuse previously computed paths and to find solutions amongst goal sets (instead of a single goal state). This is integrated via a belief-space model with non-linear biomechanical models that predict tissue motion in the presence of large strain. Its application to minimally invasive robotic surgery (a technology that affects more than 800,000 people annually) is considered. Accuracy is challenging here because inserting a needle displaces the tissue and moves the target. Intraoperative imaging provides limited guidance, and the coming generation of surgical robots has precision finer than the best imaging. Fast forecasting thus informs robot design and operation by allowing for more compliance and latency.
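
As a rough sketch of the symmetry idea (not the talk's actual formulation, which works over general Lie subgroups and orbits), the snippet below reuses a cached path by acting on it with a planar rigid motion. SE(2) invariance of the planning problem is assumed, and the `se2` and `transport_path` helpers are our illustrative names:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar rigid motion (an element of SE(2))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def transport_path(path, g):
    """Reuse a previously computed path by acting on it with group element g.

    If the planning problem is invariant under the group action, the
    transformed path solves the transformed problem directly, so the
    expensive forward model need not be re-evaluated.
    """
    pts = np.hstack([path, np.ones((len(path), 1))])  # homogeneous coordinates
    return (g @ pts.T).T[:, :2]

# A cached straight-line path, reused for a goal that is shifted and rotated.
cached = np.linspace([0.0, 0.0], [1.0, 0.0], 20)
reused = transport_path(cached, se2(0.5, 0.2, np.pi / 6))
```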

Biography: Dr. Surya Singh is at The University of Queensland (UQ) and heads the Robotics Design Lab (RDL). His research interests lie in the design and control of compliant systems in novel (dynamic, non-Lambertian) environments that challenge traditional assumptions. Recent results include methods for fast decision making under uncertainty, sub-mm needle placement, and adaptive aids for the visually impaired. His robotics course and question-based peer review teaching software (OpenPlatypus.org) have received university teaching awards. His long-term goal is to democratize robotics and robotics education beyond its hackneyed stereotype and into the milieu.

Building Lifelike Physical Characters
Katsu Yamane (Honda Research Institute) 01/18/2019

Abstract: Entertainment is one of many applications of robotics, but it is unique in the sense that the "task" is to make people believe that they are not watching a robot, but rather a living character with personality and emotion. Pursuing speed, power, accuracy, or even efficiency often does not make much sense in such applications. They therefore require a completely different design paradigm from traditional robotics, for both hardware and software. In this talk, I will discuss three elements that I believe are important for entertainment robots: motion, interaction, and design. The first part of the talk introduces various human-to-robot motion retargeting techniques for creating stylistic and expressive motions of both humanoid and non-humanoid characters. In the second part, I will demonstrate that simple, remote human-robot interactions such as playing catch and handing over an object can be engaging and entertaining when simple, quick reactions to human actions and events are added. Finally, I will introduce a few hardware prototypes of soft robots developed with the goal of realizing safe, direct physical interactions, including hand-shaking and hugging.
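
As a deliberately minimal illustration of the retargeting problem, the sketch below rescales one human joint trajectory into a robot's narrower joint limits so the motion's shape and timing survive. The `retarget_joint` helper is our own hypothetical example, not a technique from the talk, which must also handle kinematic mismatch, balance, and self-collision:

```python
import numpy as np

def retarget_joint(human_traj, robot_min, robot_max):
    """Rescale a human joint-angle trajectory into a robot's joint limits.

    Keeps the shape and timing of the motion (its 'style') while fitting
    it into a narrower range of motion.
    """
    h = np.asarray(human_traj, dtype=float)
    lo, hi = h.min(), h.max()
    normalized = (h - lo) / (hi - lo + 1e-9)   # map the motion to [0, 1]
    return robot_min + normalized * (robot_max - robot_min)

# A human wave spanning [-1.2, 1.2] rad, squeezed into a [-0.5, 0.5] rad joint.
wave = 1.2 * np.sin(np.linspace(0, 4 * np.pi, 100))
robot_wave = retarget_joint(wave, -0.5, 0.5)
```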

Biography: Dr. Katsu Yamane is a Senior Scientist at Honda Research Institute USA. He received his B.S., M.S., and Ph.D. degrees in Mechanical Engineering in 1997, 1999, and 2002 respectively from the University of Tokyo, Japan. Prior to joining Honda in 2018, he was a Senior Research Scientist at Disney Research, an Associate Professor at the University of Tokyo, and a postdoctoral fellow at Carnegie Mellon University. Dr. Yamane is a recipient of the King-Sun Fu Best Transactions Paper Award and the Early Academic Career Award from the IEEE Robotics and Automation Society, and the Young Scientist Award from the Ministry of Education, Japan. His research interests include humanoid robot control and motion synthesis, physical human-robot interaction, character animation, and human motion simulation.

Human-Collective Teams: Algorithms, Transparency and Resilience
Julie A. Adams (Oregon State University) 01/25/2019

Abstract: Biological inspiration for artificial systems abounds. The science to support robotic collectives continues to emerge from their biological inspirations: spatial swarms (e.g., fish and starlings) and colonies (e.g., honeybees and ants). Developing effective human-collective teams requires focusing on all aspects of the integrated system's development. Many of these fundamental aspects have been developed independently, but our focus is an integrated development process for these complex research questions. This presentation will focus on three aspects of collectives: algorithms, transparency, and resilience. Very large numbers of simplistic individuals can use biologically inspired algorithms to solve more complex problems; this presentation will focus on a sequential best-of-n target selection algorithm. The size and complexity of these systems preclude a human's ability to fully understand and communicate with each individual. Thus, providing transparency into the collective's state and influencing its actions are significant challenges that require a close coupling with the underlying algorithms. This presentation will demonstrate a means of providing transparency and permitting influence over the collective's best-of-n decision-making process. Finally, biological collectives are highly resilient to system disruptions, a feature that is an underlying expectation of robotic swarms. The questions to be addressed include how to ensure such resilience exists and how to assess this characteristic in robotic collectives.
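
For intuition only, here is a toy simplification in the spirit of honeybee-style best-of-n decision making. It is not the sequential algorithm presented in the talk, and every parameter and name is invented for illustration:

```python
import random

def best_of_n(qualities, n_agents=100, rounds=200, seed=0):
    """Toy best-of-n site selection.

    Uncommitted agents sample a random site and commit with probability
    proportional to its quality; committed agents occasionally recruit an
    uncommitted peer, so higher-quality sites accumulate support faster.
    """
    rng = random.Random(seed)
    commitment = [None] * n_agents          # which site each agent supports
    for _ in range(rounds):
        for i in range(n_agents):
            if commitment[i] is None:
                site = rng.randrange(len(qualities))
                if rng.random() < qualities[site]:
                    commitment[i] = site
            elif rng.random() < 0.1:        # recruitment attempt
                j = rng.randrange(n_agents)
                if commitment[j] is None:
                    commitment[j] = commitment[i]
    return [commitment.count(s) for s in range(len(qualities))]

print(best_of_n([0.2, 0.5, 0.9]))  # support should concentrate on site 2
```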

Biography: Dr. Julie A. Adams is a Professor and the Associate Director of the Collaborative Robotics and Intelligent Systems Institute at Oregon State University. Dr. Adams founded the Human-Machine Teaming Laboratory at Vanderbilt University before moving the laboratory to Oregon State. Adams has worked in the area of human-machine teaming for almost thirty years. Throughout her career she has focused on human interaction with unmanned systems, but has also worked on manned civilian and military aircraft at Honeywell, Inc. and on commercial, consumer, and industrial systems at the Eastman Kodak Company. Her research, which is grounded in robotics applications for domains such as first response, archaeology, oceanography, the national airspace, and the U.S. military, focuses on distributed artificial intelligence, swarms, robotics, and human-machine teaming. Adams received her M.S. and Ph.D. degrees in Computer and Information Sciences from the University of Pennsylvania and her B.S. in Computer Science and B.B.E. in Accounting from Siena College.

Efficient Robot Skill Learning: Grounded Simulation Learning and Imitation Learning from Observation
Peter Stone (UT Austin) 02/01/2019

Abstract: For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. It then introduces two new algorithms for imitation learning from observation that enable a robot to mimic demonstrated skills from state-only trajectories, without any knowledge of the actions selected by the demonstrator. Grounded Simulation Learning has led to the fastest known stable walk on a widely used humanoid robot, and imitation learning from observation opens the possibility of robots learning from the vast trove of videos available online.
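
For orientation, a common recipe for imitation from observation (e.g., behavioral cloning from observation) first learns an inverse dynamics model from the robot's own experience, then uses it to infer the demonstrator's missing actions. The toy sketch below shows that two-stage structure on a contrived 1-D system; the nearest-neighbour "model" and all names are illustrative stand-ins, not the talk's algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D system with dynamics s' = s + a. Stage 1: the robot gathers its
# own (state, action, next state) experience...
S = rng.uniform(-1.0, 1.0, 200)
A = rng.uniform(-0.5, 0.5, 200)
S_next = S + A

def inverse_dynamics(s, s_next):
    """Infer the action linking (s, s_next); a nearest-neighbour lookup
    stands in for the learned inverse dynamics model."""
    d = (S - s) ** 2 + (S_next - s_next) ** 2
    return A[np.argmin(d)]

# Stage 2: label a state-only demonstration with inferred actions, after
# which ordinary behavioural cloning can be applied to the labeled pairs.
demo_states = np.linspace(0.0, 1.0, 11)        # actions were never recorded
inferred_actions = [inverse_dynamics(s, s2)
                    for s, s2 in zip(demo_states, demo_states[1:])]
```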

Biography: I am the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, as well as associate department chair and chair of the University's Robotics Portfolio Program. I am also the President, COO, and co-founder of Cogitai, Inc. My main research interest in AI is understanding how we can best create complete intelligent agents. I consider adaptation, interaction, and embodiment to be essential capabilities of such agents. Thus, my research focuses mainly on machine learning, multiagent systems, and robotics. To me, the most exciting research topics are those inspired by challenging real-world problems. I believe that complete successful research includes both precise, novel algorithms and fully implemented and rigorously evaluated applications. My application domains have included robot soccer, autonomous bidding agents, autonomous vehicles, autonomic computing, and social agents.

Security and Communication for Multi-Robot Systems through Coordinated Control
Stephanie Gil (Arizona State University - Talk CANCELLED due to Campus Closure) 02/08/2019

Abstract: Robust information exchange and trusted coordination are both critical needs for multi-robot systems acting in the real world. While these needs are universal across platforms, the computing and sensing resources of those platforms are not, making effective coordination difficult to enable, to scale, and to secure. This talk will present new methods of security and adaptive network formation for resource-constrained, mobile multi-robot systems (applications include delivery drones, mobile IoT, and robotic vehicles). The focus of this work is at the intersection of robotics and communication; in particular, we study ways that communication technologies can be used to make resource-constrained multi-robot systems more capable. This talk will touch upon the technologies we have developed: 1) position control algorithms that allow multiple robots to achieve high-data-rate networks and 2) a virtual sensor for bi-directional Synthetic Aperture Radar between two communicating agents. Building upon these technologies, we develop a theoretical and experimental framework for provably thwarting spoofing attacks using communicated wireless signals in various important multi-agent tasks such as consensus, coverage, and drone delivery. The talk will have a particular focus on our most recent results in securing multi-agent consensus.
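
To give a flavor of the consensus-security idea, the sketch below runs a trust-weighted consensus in which each agent's contribution is scaled by a confidence score. In Gil's work such scores are derived from wireless signal fingerprints; here they are simply given as inputs, and the fully connected network and update rule are our simplifications:

```python
import numpy as np

def trusted_consensus(values, trust, iters=50):
    """Trust-weighted average consensus on a fully connected network.

    `trust[i]` stands in for a wireless-derived confidence score; a spoofed
    agent with low trust barely moves the agreement point even if it
    reports an extreme value.
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(trust, dtype=float)
    for _ in range(iters):
        target = np.sum(w * x) / np.sum(w)   # trust-weighted group estimate
        x = x + 0.5 * (target - x)           # each agent steps toward it
    return x

# Three legitimate agents near 1.0 and one spoofer reporting 10.0:
print(trusted_consensus([1.0, 1.1, 0.9, 10.0], trust=[1.0, 1.0, 1.0, 0.05]))
```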

Biography: Stephanie is an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University (since January 2018). Previously, she was a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, where she also completed her Ph.D. (2014) on multi-robot coordination and control and her M.S. (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, which resulted in two recently awarded U.S. patents on adaptive heterogeneous networks for multi-robot systems and accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.

Collaborative thinking and motion: from collective transport to autonomous driving cars
Golnaz Habibi (MIT) 02/15/2019

Abstract: Robots are built to help humans improve their quality of life, from household tasks to intelligent transportation. This talk presents two important tasks that robots can help accomplish: collective object manipulation and autonomous transportation. The talk first describes collective transport by multiple agents and why this problem is difficult in general. An end-to-end, fully distributed algorithm is presented for collectively retrieving a large object from an unknown, GPS-denied environment with a group of robots with limited sensing. The talk demonstrates all steps of collective transport, from exploring and finding the object to planning the navigation path and manipulating the object among obstacles. The second part of the talk presents a transferable model for accurately predicting the trajectories of pedestrians, as vulnerable road users, at crowded intersection corners or near crosswalks. Given prior knowledge of curbside geometry, the presented framework can accurately predict pedestrian trajectories even in new, unseen intersections. This is achieved by learning motion primitives in a common frame, called the curbside coordinate frame. Context features, including the pedestrian traffic light and distance to the curbside, enable us to build a transferable prediction model, which can be useful for individual autonomous cars or connected vehicles. The transferable model not only provides a framework to incrementally learn new motion behaviors, but also allows the learned knowledge to be transferred and shared between connected cars.
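
As a small illustration of the common-frame idea, the sketch below re-expresses a world-frame pedestrian track in a curbside coordinate frame so that tracks from different intersections become directly comparable. The axis convention and the `to_curbside_frame` helper are our assumptions, not the talk's exact definition:

```python
import numpy as np

def to_curbside_frame(traj_xy, corner_xy, curb_dir):
    """Express a pedestrian trajectory in a curbside coordinate frame.

    Origin at the intersection corner, x-axis along the curb, y-axis
    pointing away from it, so motion primitives learned at one corner
    can be reused at another.
    """
    d = np.asarray(curb_dir, dtype=float)
    d = d / np.linalg.norm(d)                 # unit vector along the curb
    n = np.array([-d[1], d[0]])               # unit normal off the curb
    rel = np.asarray(traj_xy, dtype=float) - np.asarray(corner_xy, dtype=float)
    return np.stack([rel @ d, rel @ n], axis=1)

# A short track in world coordinates, re-expressed relative to a corner.
track = [[10.0, 5.0], [10.5, 5.2], [11.0, 5.5]]
local = to_curbside_frame(track, corner_xy=[10.0, 5.0], curb_dir=[1.0, 0.3])
```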

Biography: I am a postdoctoral research associate in the Aerospace Controls Laboratory at MIT. I received my Ph.D. in Computer Science from Rice University. I have worked on broad aspects of robotics and AI, from multi-agent systems and motion planning to self-driving cars. During my Ph.D., I designed distributed algorithms for multi-agent systems and implemented them on real robots. My Ph.D. thesis focused on multi-robot manipulation by a group of robots and multi-robot recovery. Currently, I am working on autonomous cars, specifically predicting pedestrian trajectories in crowded areas. I am interested in designing a generalized, transferable model to estimate the motion of vulnerable road users to improve the efficiency and safety of autonomous vehicles.

Learning Model-free Representations for Solving Robot Control and Planning Problems
Michael Yip (UCSD) 02/22/2019

Abstract: Robots currently lack a strong set of algorithmic tools for dealing with uncertainty and dynamic environments, whether in the home, in a semi-automated warehouse, or in a robotic surgical operating room. Unlike the past decade of robot applications, which primarily focused on highly repetitive assembly-line tasks, the robots of the future will need to interact with new and changing environments. In this talk, I will discuss our research in learning model-free representations that enable robots to learn and adapt their control to new environments and conditions, and to perform fast motion planning and adaptation in changing environments. These representations are trained using a variety of local and global model-free learning strategies and, when implemented, are significantly faster, more consistent, and more power- and memory-efficient than the state of the art. We show how these representations can be used to provide new solutions to robot manipulation, soft robots, and robotic surgery.

Biography: Michael Yip is an Assistant Professor of Electrical and Computer Engineering at UC San Diego and directs the Advanced Robotics and Controls Laboratory (ARCLab). His group currently focuses on solving problems in data-efficient and computationally efficient robot control and motion planning through the use of various forms of learned representations, including deep learning and reinforcement learning strategies. His lab applies these ideas to surgical robotics and the automation of surgical procedures. Previously, Dr. Yip's research has investigated different facets of model-free control, planning, haptics, soft robotics, and computer vision strategies, all towards achieving automated surgery. Dr. Yip's work has been recognized through several best paper awards at ICRA, including the 2016 best paper award for IEEE Robotics and Automation Letters. Dr. Yip has previously been a research associate with Disney Research in Los Angeles, where he worked on animatronics design, and most recently held a consulting position with the Amazon Robotics Machine Learning and Computer Vision Research Group in Seattle. He received a B.Sc. in Mechatronics Engineering from the University of Waterloo, an M.S. in Electrical Engineering from the University of British Columbia, and a Ph.D. in Bioengineering from Stanford University.

Formalizing Teamwork in Human-Robot Interaction
Ross Knepper (Cornell University) 03/01/2019

Abstract: Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, they need to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small, enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.
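
To see why the set of outcomes is small and enumerable, consider the toy enumeration below, which assigns each pair of agents a passing side. This is a crude stand-in for the braid-group representation in the talk, which additionally tracks how trajectories wind around one another over time; all names here are our own:

```python
from itertools import product

def passing_outcomes(agents):
    """Enumerate joint passing plans: one side ('L' or 'R') per pair.

    The point is that the number of topologically distinct outcomes is
    finite and small, so a robot can reason over all of them.
    """
    pairs = [(a, b) for i, a in enumerate(agents) for b in agents[i + 1:]]
    for sides in product("LR", repeat=len(pairs)):
        yield dict(zip(pairs, sides))

outcomes = list(passing_outcomes(["robot", "p1", "p2"]))
print(len(outcomes))  # 3 pairs -> 2**3 = 8 candidate outcomes to reason over
```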

Biography: Ross A. Knepper is an Assistant Professor in the Department of Computer Science at Cornell University, where he directs the Robotic Personal Assistants Lab. His research focuses on the theory and algorithms of human-robot interaction in collaborative work. He builds systems to perform complex tasks where partnering a human and robot together is advantageous for both, such as factory assembly or home chores. Knepper has built robot systems that can assemble Ikea furniture, ask for help when something goes wrong, interpret informal speech and gesture commands, and navigate in a socially-competent manner among people. Before Cornell, Knepper was a Research Scientist at MIT. He received his Ph.D. in Robotics from Carnegie Mellon University in 2011.

TBD
Elin Bjorling & Emma Rose 03/08/2019

A tale of two experiences: Comparing robotics methodology in research and industry using model-free and model-based RL
James Davidson (Third Wave Automation) 03/15/2019

Abstract: Does robotics research differ from development? What approaches gain the most traction? How is government/industrial research different from research at a startup? In this talk, I share the tale of two different technical solutions, model-free and model-based RL, and discuss my experiences conducting robotics research in large technology companies and an emerging startup. The talk delves into model-free, robust adversarial reinforcement learning (RARL) and the more speculative, model-based PlaNet approach, and provides perspective on research and development across institutions beyond academia.
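
For orientation, RARL trains a protagonist policy against a learned adversary that injects worst-case disturbances, alternating the two players' updates. The toy below mimics only that alternating min/max structure, using random search on a scalar system; the dynamics, update rule, and names are illustrative stand-ins, not the published algorithm:

```python
import random

def rollout_cost(k_protagonist, k_adversary):
    """Cost of stabilizing a scalar system in which the protagonist damps
    the state while the adversary injects a bounded disturbance."""
    x, cost = 1.0, 0.0
    for _ in range(50):
        x += -k_protagonist * x + k_adversary * (0.05 if x > 0 else -0.05)
        cost += x * x
    return cost

def rarl_toy(iters=100, seed=0):
    """Alternate random-search updates for the two players of the game."""
    rng = random.Random(seed)
    kp, ka = 0.1, 0.1
    for _ in range(iters):
        # Protagonist step: try to decrease cost against the current adversary.
        cand = kp + rng.uniform(-0.05, 0.05)
        if rollout_cost(cand, ka) < rollout_cost(kp, ka):
            kp = cand
        # Adversary step: try to increase cost against the current protagonist.
        cand = min(1.0, max(0.0, ka + rng.uniform(-0.05, 0.05)))
        if rollout_cost(kp, cand) > rollout_cost(kp, ka):
            ka = cand
    return kp, ka

print(rarl_toy())  # the protagonist's gain grows to resist the disturbance
```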

Biography: James Davidson graduated with his doctorate from the University of Illinois, specializing in robot learning. His passion for control theory has melded with his interest in machine learning. Over his career, James has conducted both government and industrial research, at Sandia National Laboratories, MITRE, Google, and Google Brain. His experience has ranged from space robotics to forklifts and almost everything in between. James was recently bitten by the Silicon Valley bug and left pure research behind to found Third Wave Automation, a company that focuses on industrial robotics.