Spring 2017 Colloquium

Organizers: Aaron Walsman, Sam Burden, Maya Cakmak, Dieter Fox

Simple, Robust Interaction Between Humans and Teams of Robots
Richard Vaughan (Simon Fraser University) 03/31/2017

Abstract: Sensing technology for robots has improved dramatically in the last few years, but we do not yet see robots around us. How should robots behave around people and each other to get things done? My group works on behavioural strategies for mobile robots that exploit these new sensing capabilities and allow them to perform sophisticated, robust interactions with the world and other agents. I’ll show videos of a series of novel vision-mediated human-robot interactions with teams of driving and flying robots. At their best, the robots work like those in sci-fi movies. Others need more work, but the robots are always autonomous, the humans are uninstrumented, the interactions are surprisingly simple, and we often work outdoors over long distances.

Biography: Richard Vaughan directs the Autonomy Lab at Simon Fraser University. His research interests include long-term autonomous robots, multi-robot systems, behavioural ecology, human-robot interaction (HRI), and robotics software. He demonstrated the first robot to control animal behaviour in 1998, co-created the Player/Stage Project in 2000, and recently showed the first uninstrumented HRI with UAVs. He currently serves on the Administrative Committee of the IEEE Robotics and Automation Society and on the editorial board of the journal Autonomous Robots, and is Program Chair for IROS 2017.

Ocean One: A Robotic Avatar for Oceanic Discovery
Oussama Khatib (Stanford University) 04/07/2017

Abstract: The promise of oceanic discovery has intrigued scientists and explorers for centuries, whether to study underwater ecology and climate change, or to uncover natural resources and historic secrets buried deep at archaeological sites. The quest to explore the ocean requires skilled human access. Reaching these depths is imperative since factors such as pollution and deep-sea trawling increasingly threaten ecology and archaeological sites. These needs demand a system deploying human-level expertise at the depths, and yet remotely operated vehicles (ROVs) are inadequate for the task. A robotic avatar could go where humans cannot, while embodying human intelligence and intentions through immersive interfaces. To meet the challenge of dexterous operation at oceanic depths, in collaboration with KAUST’s Red Sea Research Center and MEKA Robotics, we developed Ocean One, a bimanual force-controlled humanoid robot that brings immediate and intuitive haptic interaction to oceanic environments. Teaming with the French Ministry of Culture’s Underwater Archaeology Research Department, we deployed Ocean One in an expedition in the Mediterranean to Louis XIV’s flagship Lune, lying off the coast of Toulon at ninety-one meters. In the spring of 2016, Ocean One became the first robotic avatar to embody a human’s presence at the seabed. This expedition demonstrated synergistic collaboration between a robot and a human performing challenging manipulation tasks in an inhospitable environment. Tasks such as coral-reef monitoring, underwater pipeline maintenance, and offshore and marine operations will greatly benefit from such robot capabilities. Ocean One’s journey in the Mediterranean marks a new level of marine exploration: much as past technological innovations have impacted society, Ocean One’s ability to distance humans physically from dangerous and unreachable work spaces while connecting their skills, intuition, and experience to the task promises to fundamentally alter remote work. We foresee that robotic avatars will search for and acquire materials in hazardous and inhospitable settings, support equipment at remote sites, build infrastructure for monitoring the environment, and perform disaster prevention and recovery operations, be it deep in oceans and mines, at mountain tops, or in space.

Biography: Oussama Khatib received his PhD from Sup’Aero, Toulouse, France, in 1980. He is Professor of Computer Science at Stanford University. His research focuses on methodologies and technologies in human-centered robotics including humanoid control architectures, human motion synthesis, interactive dynamic simulation, haptics, and human-friendly robot design. He is a Fellow of IEEE. He is Co-Editor of the Springer Tracts in Advanced Robotics (STAR) series and the Springer Handbook of Robotics, which received the PROSE Award for Excellence in Physical Sciences & Mathematics. Professor Khatib is the President of the International Foundation of Robotics Research (IFRR). He has been the recipient of numerous awards, including the IEEE RAS Pioneer Award in Robotics and Automation, the IEEE RAS George Saridis Leadership Award in Robotics and Automation, the IEEE RAS Distinguished Service Award, and the Japan Robot Association (JARA) Award in Research and Development.

Learning via Interaction for Machine Perception and Control
Debadeepta Dey (Microsoft Research) 04/14/2017

Abstract: As autonomous robots of all shapes and sizes proliferate in the world and start working in increasing proximity to humans, it is critical that they produce safe, intelligent behavior while learning efficiently from limited interactions under tight computational constraints. A recurring problem is selecting a small number of actions from a very large set of possible actions. Examples include grasp selection in robotic manipulation, where the robot arm must evaluate a sequence of grasps with the aim of finding one that is successful as early in the sequence as possible, and trajectory selection for mobile ground robots, where the task is to select a small sequence of trajectories from a much larger set of feasible trajectories that minimises the expected cost of traversal. A learning algorithm must therefore be able to predict a budgeted number of decisions that optimises a utility function of interest. Traditionally, machine learning has focused on producing a single best prediction. We build an efficient framework for making multiple predictions where the objective is to optimise any utility function that is (monotone) submodular over a sequence of predictions; in each case we optimise both the content and the order of the sequence. We demonstrate the efficacy of these methods on several real-world robotics problems. A closely related problem, budgeted information gathering, in which a robot with a fixed fuel budget must maximise the amount of information gathered from the world, appears in practice across a wide range of applications in autonomous exploration and inspection with mobile robots. We present an efficient algorithm that trains a policy on the target distribution to imitate a clairvoyant oracle - an oracle that has full information about the world and computes non-myopic solutions to maximise the information gathered. Additionally, our analysis provides theoretical insight into how to efficiently leverage imitation learning in such settings. Our approach paves the way for efficiently applying data-driven methods to the domain of information gathering.
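
To make the "budgeted number of decisions" idea concrete, here is a minimal sketch (my own toy example, not the speaker's code) of greedy list prediction for a monotone submodular utility: each hypothetical candidate grasp succeeds under some subset of sampled world hypotheses, the utility of a list is the fraction of hypotheses covered by at least one selected grasp, and greedily choosing both the content and the order of the list carries the standard (1 - 1/e) approximation guarantee for such utilities.

```python
def coverage_utility(selected, works):
    """Fraction of world hypotheses in which at least one selected candidate
    succeeds; works[i][h] is True if candidate i succeeds under hypothesis h.
    This coverage-style utility is monotone submodular in `selected`."""
    n_hyp = len(works[0])
    return sum(any(works[i][h] for i in selected) for h in range(n_hyp)) / n_hyp

def greedy_list(works, budget):
    """Greedily build an ordered list of `budget` candidates."""
    selected, remaining = [], set(range(len(works)))
    for _ in range(budget):
        best = max(remaining,
                   key=lambda i: coverage_utility(selected + [i], works))
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    # hypothetical success table: 4 candidate grasps x 5 sampled world hypotheses
    works = [[True,  True,  False, False, False],
             [False, False, True,  True,  False],
             [True,  False, True,  False, False],
             [False, False, False, False, True]]
    print(greedy_list(works, budget=2))   # a 2-grasp list covering 4 of 5 hypotheses
```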

Biography: Debadeepta Dey is a researcher in the Adaptive Systems and Interaction (ASI) group at Microsoft Research, Redmond, USA. He received his doctorate in 2015 at the Robotics Institute, Carnegie Mellon University, Pittsburgh, USA, where he was advised by Prof. J. Andrew (Drew) Bagnell. He does fundamental as well as applied research in machine learning, control and computer vision motivated by robotics problems. He is especially interested in bridging the gap between perception and planning for autonomous ground and aerial vehicles. His interests include decision-making under uncertainty, reinforcement learning, and machine learning. From 2007 to 2010 he was a researcher at the Field Robotics Center, Robotics Institute, Carnegie Mellon University.

Efficient Lifelong Machine Learning: an Online Multi-Task Learning Perspective
Eric Eaton (University of Pennsylvania) 04/21/2017

Abstract: Lifelong learning is a key characteristic of human intelligence, largely responsible for the variety and complexity of our behavior. This process allows us to rapidly learn new skills by building upon and continually refining our learned knowledge over a lifetime of experience. Incorporating these abilities into machine learning algorithms remains a mostly unsolved problem, but one that is essential for the development of versatile autonomous systems. In this talk, I will present our recent progress in developing algorithms for lifelong machine learning for classification, regression, and reinforcement learning, including applications to optimal control for robotics. These algorithms approach the problem from an online multi-task learning perspective, acquiring knowledge incrementally over consecutive learning tasks, and then transferring that knowledge to rapidly learn to solve new tasks. Our approach is highly efficient, scaling to large numbers of tasks and amounts of data, and provides a variety of theoretical guarantees. I will also discuss our work toward autonomous cross-domain transfer between diverse tasks, and zero-shot transfer learning from task descriptions.
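
As a rough illustration of the online multi-task perspective, the sketch below (a toy simplification of mine, not the speaker's algorithm) models each task's parameter vector as the product of a shared latent basis and a small task-specific code; arriving tasks are encoded against the basis and the basis is refined incrementally, so knowledge accumulated from earlier tasks transfers to new ones. The ridge solvers, step size, and basis size are placeholder choices.

```python
import numpy as np

def fit_task(X, y, lam=1e-3):
    """Single-task ridge regression, used as the per-task parameter estimate."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

class SharedBasisLearner:
    """Maintains a basis L (d x k) shared across tasks; each task is
    represented by a small code s so that its weights are roughly L @ s."""

    def __init__(self, d, k, mu=1e-2, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.L = 0.1 * rng.standard_normal((d, k))
        self.mu, self.lr = mu, lr

    def add_task(self, X, y):
        theta = fit_task(X, y)                      # per-task solution
        k = self.L.shape[1]
        # encode the new task against the current basis (ridge-regularized code)
        s = np.linalg.solve(self.L.T @ self.L + self.mu * np.eye(k),
                            self.L.T @ theta)
        # one gradient step so the basis better reconstructs this task
        self.L -= self.lr * np.outer(self.L @ s - theta, s)
        return self.L @ s                           # transferred predictor for the task
```

In the methods discussed in the talk, the task codes are typically sparse and the basis update is considerably more careful; the sketch only conveys the shared factorization that makes transfer and incremental updates cheap.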

Biography: Eric Eaton is a non-tenure-track faculty member in the Department of Computer and Information Science at the University of Pennsylvania, and a member of the GRASP (General Robotics, Automation, Sensing, & Perception) lab. Prior to joining Penn, he was a visiting assistant professor at Bryn Mawr College, a senior research scientist at Lockheed Martin Advanced Technology Laboratories, and part-time faculty at Swarthmore College. His primary research interests lie in the fields of machine learning, artificial intelligence, and data mining with applications to robotics, environmental sustainability, and medicine.

e-Intangible Heritage: From Dancing Robots to Cyber Humanities
Katsu Ikeuchi (Microsoft Research) 04/21/2017

Abstract: Tangible heritage, such as temples and statues, is disappearing day by day due to human and natural disasters. Intangible heritage, such as folk dances, local songs, and dialects, faces the same fate due to a lack of inheritors and the mixing of cultures. We have been developing methods to preserve such tangible and intangible heritage in digital form. This project, which we refer to as e-Heritage, aims not only to record heritage, but also to analyze the recorded data for better understanding, and to display the data in new forms for promotion and education. This talk, the first talk of the e-Heritage project, covers our effort to handle intangible heritage. We are developing a method to preserve folk dances through the performance of dancing robots. Here, we follow the learning-from-observation paradigm, in which a robot learns how to perform a dance by observing a human dance performance. Due to the physical differences between a human and a robot, the robot cannot exactly mimic the human actions. Instead, the robot first extracts the important actions of the dance, referred to as key poses, and then describes them symbolically using Labanotation, which the dance community has long used for recording dances. Finally, this Labanotation is mapped to each robot's hardware to reconstruct the original dance performance. The second part of the talk addresses the question of what is gained by preserving folk dances through robot performance; our answer is that the symbolic representations used for robot performance provide new understandings of the dances themselves. To demonstrate this point, we focus on the folk dances of Taiwan's indigenous peoples, who comprise 14 different tribes. We have converted these folk dances into Labanotation for robot performance. Further, by analyzing the resulting Labanotation, we can clarify the social relations among these 14 tribes.
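
As a very rough illustration of the key-pose step, the sketch below (my toy heuristic, not the method used in the talk) marks frames where the overall joint speed dips to a local minimum, on the intuition that brief pauses in a dance often coincide with its important poses; mapping such poses to Labanotation symbols and onto robot hardware is the part the talk actually covers and is not attempted here. The threshold and the synthetic motion are placeholders.

```python
import numpy as np

def key_pose_indices(joint_angles, speed_thresh=0.05):
    """Return frame indices where the overall joint speed dips to a local
    minimum below `speed_thresh` -- a crude stand-in for key-pose detection.
    joint_angles: array of shape (n_frames, n_joints)."""
    vel = np.linalg.norm(np.diff(joint_angles, axis=0), axis=1)  # per-frame speed
    keys = []
    for t in range(1, len(vel) - 1):
        if vel[t] < speed_thresh and vel[t] <= vel[t - 1] and vel[t] <= vel[t + 1]:
            keys.append(t)
    return keys

if __name__ == "__main__":
    # synthetic 2-joint motion whose amplitude envelope makes it pause briefly
    t = np.linspace(0, 2 * np.pi, 200)
    motion = np.stack([np.cos(t), np.sin(t)], axis=1) * (np.sin(2 * t)[:, None] ** 2)
    print(key_pose_indices(motion))   # frames near the pauses
```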

Biography: Dr. Katsushi Ikeuchi is a Principal Researcher at Microsoft Research Asia, stationed at the Microsoft Redmond campus. He received a Ph.D. degree in Information Engineering from the University of Tokyo in 1978. After working at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology as a postdoctoral fellow for three years, the Electrotechnical Laboratory of the Japanese government as a researcher for five years, the Robotics Institute of Carnegie Mellon University as a faculty member for ten years, and the Institute of Industrial Science of the University of Tokyo as a faculty member for nineteen years, he joined Microsoft Research Asia in 2015. His research interests span computer vision, robotics, and computer graphics. He has received several awards, including the IEEE-PAMI Distinguished Researcher Award, the Okawa Prize from the Okawa Foundation, and the Si-Ju-Ho-Sho (the Medal of Honor with Purple Ribbon) from the Emperor of Japan. He is a fellow of IEEE, IEICE, IPSJ, and RSJ.

Object Based Mapping
Henrik Christensen (UC San Diego) 04/28/2017

Abstract: To build mobile systems that can operate autonomously, it is necessary to endow them with a sense of location. One of the basic aspects of autonomy is the ability not to get lost. How can we build robots that acquire a model of the surrounding world and utilize that model to achieve their mission without getting lost along the way? Simultaneous Localization and Mapping (SLAM) is widely used to provide this mapping and localization competence to robots. The process has three facets: extraction of features from sensor data; association of features with previously detected structures; and estimation of position/pose together with updating of the map to keep it current. The estimation part of the process is today typically performed using graphical models, which allow for efficient computation and flexible handling of ambiguous situations. Over time, feature extraction has matured from the use of basic features such as lines and corners to the use of significant structures such as easily identifiable man-made objects (buildings, chairs, tables, cars, ...). The discriminative nature of major structures simplifies data association and facilitates more efficient loop closing. In this presentation we will discuss our modular mapping framework, OmniMapper, and how it can be utilized across a range of different applications for efficient computing. We will discuss a number of different strategies for object detection and pose estimation, and provide examples of mapping across a number of different sensory modalities. Finally, we will show a number of examples of the use of OmniMapper in indoor and outdoor settings using air and ground vehicles.
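
To make the estimation facet concrete, here is a deliberately tiny sketch (not OmniMapper itself) of graph-based SLAM: poses are nodes, odometry and loop-closure measurements are weighted edges, and the trajectory estimate is the least-squares solution over the resulting graph. One-dimensional poses keep the algebra linear; real systems use SE(2)/SE(3) poses and iterative solvers.

```python
import numpy as np

def solve_pose_graph(n_poses, edges):
    """edges: list of (i, j, measured_offset, weight) meaning pose_j - pose_i
    should equal measured_offset. Pose 0 is anchored at zero to fix the gauge."""
    rows, rhs = [], []
    for i, j, z, w in edges:
        row = np.zeros(n_poses)
        row[j] += 1.0
        row[i] -= 1.0
        rows.append(np.sqrt(w) * row)
        rhs.append(np.sqrt(w) * z)
    prior = np.zeros(n_poses)
    prior[0] = 1.0                     # anchor the first pose
    rows.append(prior)
    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

if __name__ == "__main__":
    # four poses with noisy odometry of roughly one unit per step, plus a
    # loop closure claiming pose 3 sits 2.7 units from pose 0; the solver
    # spreads the accumulated error over the whole trajectory.
    edges = [(0, 1, 1.0, 1.0), (1, 2, 1.1, 1.0), (2, 3, 0.9, 1.0),
             (0, 3, 2.7, 10.0)]
    print(solve_pose_graph(4, edges))
```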

Biography: Dr. Henrik I. Christensen is a Professor in the Department of Computer Science and Engineering at UC San Diego, where he also directs the Contextual Robotics Institute. Prior to UC San Diego, he was the founding director of the Institute for Robotics and Intelligent Machines (IRIM) at the Georgia Institute of Technology (2006-2016). Dr. Christensen does research on systems integration, human-robot interaction, mapping, and robot vision, performed within the Cognitive Robotics Laboratory. He has published more than 350 contributions across AI, robotics, and vision. His research has a strong emphasis on "real problems with real solutions": a problem needs a theoretical model, implementation, evaluation, and translation to the real world. He is actively engaged in the setup and coordination of robotics research in the US and worldwide. Dr. Christensen received the 2011 Engelberger Award, the highest honor awarded by the robotics industry, and was also named "Boeing Supplier of the Year 2011". He is a fellow of the American Association for the Advancement of Science (AAAS) and the Institute of Electrical and Electronics Engineers (IEEE), and received an honorary doctorate in engineering from Aalborg University in 2014. He collaborates with institutions and industries across three continents, and his research has been featured in major media such as CNN, the NY Times, and the BBC.

Reactive Robotic Manipulation
Alberto Rodriguez (MIT) 05/05/2017

Abstract: The main goal of this talk is to motivate the need for feedback control and contact sensing in robotic grasping and manipulation. I’ll start with a brief look at recent work by team MIT-Princeton in the Amazon Robotics Challenge, and at the lack of practical solutions that exploit feedback and contact sensing. Some of the key challenges in controlling contact interaction are hybridness, underactuation, and the effective use of tactile sensing. I’ll discuss these challenges in the context of the pusher-slider system, a classic simple problem in which the goal is to control the motion of an object sliding on a flat surface. I like to think of the pusher-slider problem as playing a role in robotic manipulation analogous to that of the inverted pendulum in classical control: it incorporates many of the challenges present in robotic manipulation tasks, including noisy friction, instability, hybridness, and underactuation. I will finish by discussing ongoing work and future directions in my group exploring strategies for real-time state estimation and control through frictional intermittent contact.
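
As a toy illustration of the "hybridness" mentioned above (not the pusher-slider model itself), the sketch below simulates a block pushed along a surface with Coulomb friction: it switches discretely between a sticking mode and a sliding mode, so the dynamics are discontinuous in the applied force. All physical parameters are placeholders.

```python
def step(v, f_push, dt=0.01, m=1.0, mu_s=0.5, mu_k=0.4, g=9.81):
    """One Euler step for a pushed block; returns (new_velocity, mode)."""
    if abs(v) < 1e-9 and abs(f_push) <= mu_s * m * g:
        return 0.0, "stick"                          # static friction holds
    direction = 1.0 if v > 0 or (abs(v) < 1e-9 and f_push > 0) else -1.0
    a = (f_push - direction * mu_k * m * g) / m      # kinetic friction opposes motion
    v_new = v + a * dt
    if v * v_new < 0:                                # friction stopped the block
        v_new = 0.0
    return v_new, "slide"

if __name__ == "__main__":
    v = 0.0
    for k in range(300):
        f = 0.03 * k                                 # slowly ramp up the push force
        v, mode = step(v, f)
        if k % 60 == 0:
            print(f"f = {f:4.2f} N   v = {v:5.3f} m/s   mode = {mode}")
```

The mode switching is precisely what makes feedback control through contact hard: the same small change in applied force can produce no motion at all or a qualitatively different response depending on the contact mode.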

Biography: Alberto Rodriguez is the Walter Henry Gale (1929) Career Development Professor in the Mechanical Engineering Department at MIT. Alberto graduated in Mathematics ('05) and Telecommunication Engineering ('06) from the Universitat Politecnica de Catalunya (UPC) in Barcelona, and earned his PhD in Robotics (’13) from the Robotics Institute at Carnegie Mellon University. He spent a year in the Locomotion group at MIT, and joined the faculty at MIT in 2014, where he started the Manipulation and Mechanisms Lab (MCube). Alberto received Best Student Paper Awards at RSS 2011 and ICRA 2013, and was a Best Paper finalist at IROS 2016. His main research interests are in robotic manipulation, mechanical design, and automation.

Mobile Manipulators for Intelligent Physical Assistance
Charlie Kemp (Georgia Tech) 05/12/2017

Abstract: Since I founded the Healthcare Robotics Lab at Georgia Tech 10 years ago, my research has focused on developing mobile manipulators for intelligent physical assistance. Mobile manipulators are mobile robots with the ability to physically manipulate their surroundings. They offer a number of distinct capabilities compared to other forms of robotic assistance, including being able to operate independently from the user, being appropriate for users with diverse needs, and being able to assist with a wide variety of tasks, such as object retrieval, hygiene, and feeding. We’ve worked with hundreds of representative end users - including older adults, nurses, and people with severe motor impairments - to better understand the challenges and opportunities associated with this technology. Among other points, I’ll provide evidence for the following assertions: 1) many people will be open to assistance from mobile manipulators; 2) assistive mobile manipulation at home is feasible for people with severe motor impairments using conventional interfaces; and 3) permitting contact and intelligently controlling forces increases the effectiveness of mobile manipulators. I’ll conclude with a brief overview of some of our most recent research.

Biography: Charles C. Kemp (Charlie) is an Associate Professor at the Georgia Institute of Technology in the Department of Biomedical Engineering, with adjunct appointments in the School of Interactive Computing and the School of Electrical and Computer Engineering. He earned a doctorate in Electrical Engineering and Computer Science (2005), an MEng, and a BS from MIT. In 2007, he joined the faculty at Georgia Tech, where he directs the Healthcare Robotics Lab (http://healthcare-robotics.com). He is an active member of Georgia Tech’s Institute for Robotics & Intelligent Machines (IRIM) and its multidisciplinary Robotics Ph.D. program. He has received a 3M Non-tenured Faculty Award, the Georgia Tech Research Corporation Robotics Award, a Google Faculty Research Award, and an NSF CAREER award. He was a Hesburgh Award Teaching Fellow in 2017. His research has been covered extensively by the popular media, including the New York Times, Technology Review, ABC, and CNN.

Rethinking Perception-Action Loops
Karol Hausman (University of Southern California) 05/19/2017

Abstract: While perception has traditionally served action in robotics, it has been argued for some time that intelligent action generation can benefit perception, and that carefully coupling perception with action can improve the performance of both. In this talk, I will report on recent progress in model-based and learning-based approaches that address aspects of the problem of closing perception-action loops. The first part of my talk will focus on a model-based, active perception technique that optimizes trajectories for self-calibration. This method takes into account motion constraints and produces an optimal trajectory that yields fast convergence of estimates of the self-calibration states and other user-chosen states. In the second part of my talk, I will present a deep reinforcement learning framework that learns manipulation skills on a real robot in a reasonable amount of time. The method handles contact and discontinuities in dynamics by combining the efficiency of model-based techniques with the generality of model-free reinforcement learning techniques.
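
For the first part, the sketch below gives a flavor of trajectory selection for self-calibration (my illustration, not the speaker's formulation): each candidate trajectory is summarized by hypothetical measurement Jacobians with respect to the calibration states, and we pick the trajectory whose measurements shrink the calibration uncertainty the most, scored by an A-optimality criterion (trace of the posterior covariance) under a linearized Gaussian measurement model.

```python
import numpy as np

def posterior_covariance(prior_cov, jacobians, meas_var=0.01):
    """Information-form update: accumulate H^T R^{-1} H along one trajectory."""
    info = np.linalg.inv(prior_cov)
    for H in jacobians:
        info = info + H.T @ H / meas_var
    return np.linalg.inv(info)

def best_trajectory(prior_cov, candidates):
    """Index of the candidate whose posterior covariance has the smallest trace."""
    scores = [np.trace(posterior_covariance(prior_cov, seq)) for seq in candidates]
    return int(np.argmin(scores))

if __name__ == "__main__":
    prior = np.eye(2)                                    # two calibration parameters
    straight = [np.array([[1.0, 0.0]])] * 5              # only excites parameter 0
    curved = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])] * 3   # excites both
    print(best_trajectory(prior, [straight, curved]))    # -> 1, the more exciting one
```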

Biography: Karol Hausman is a Ph.D. student at the University of Southern California in the Robotic Embedded Systems Lab under the supervision of Prof. Gaurav S. Sukhatme. His research interests lie in the fields of interactive perception, reinforcement learning, and state estimation in robotics. He received his B.E. and M.E. degrees in Mechatronics from the Warsaw University of Technology, Poland, in 2010 and 2012, respectively. In 2013 he graduated with an M.Sc. degree in Robotics, Cognition and Intelligence from the Technical University of Munich. During his Ph.D., he has interned with the Bosch Research Center in Palo Alto, NASA JPL, and Qualcomm Research, and he will join Google DeepMind for an internship this summer.

Neuromorphic Planning and Control of Insect-scale Robots
Silvia Ferrari (Cornell University) 05/26/2017

Abstract: Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are making a significant impact in the neuroscience field by delivering optical firing control with the precision and resolution required for investigating information processing and plasticity in biological brains. This talk presents a spike-based training approach that is realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and can be utilized to control synaptic plasticity in live neuronal circuits as well as in neuromorphic circuits such as CMOS memristors. The approach is demonstrated by training a computational spiking neural network (SNN) to control the locomotion of a virtual cockroach in an unknown environment, and to stabilize the flight of the RoboBee in the presence of partially unmodeled dynamics.
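
To fix ideas, here is a generic toy sketch of spike-based, reward-modulated learning (not the speaker's training procedure): a leaky integrate-and-fire neuron is driven by input spike trains, an eligibility trace records which synapses were active shortly before output spikes, and after the trial each weight is nudged by eligibility times a scalar reward, a so-called three-factor rule. All parameter values and the network size are placeholders.

```python
import numpy as np

def lif_trial(weights, input_spikes, tau=20.0, v_th=1.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron for one trial.
    input_spikes: array of shape (n_synapses, n_steps) with 0/1 entries.
    Returns the output spike count and a per-synapse eligibility trace."""
    decay = np.exp(-dt / tau)
    v, n_out = 0.0, 0
    elig = np.zeros_like(weights)
    trace = np.zeros_like(weights)            # low-pass filtered presynaptic activity
    for t in range(input_spikes.shape[1]):
        trace = trace * decay + input_spikes[:, t]
        v = v * decay + weights @ input_spikes[:, t]
        if v >= v_th:                         # output spike
            n_out += 1
            elig += trace                     # credit recently active synapses
            v = 0.0                           # reset the membrane potential
    return n_out, elig

def reward_modulated_update(weights, elig, reward, lr=0.01):
    """Three-factor rule: delta_w = lr * reward * eligibility."""
    return weights + lr * reward * elig

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.uniform(0.0, 0.2, size=8)
    spikes = (rng.random((8, 100)) < 0.1).astype(float)
    n_out, elig = lif_trial(w, spikes)
    w = reward_modulated_update(w, elig, reward=1.0)   # e.g., the trial succeeded
    print(n_out, np.round(w, 3))
```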

Biography: Silvia Ferrari is a Professor of Mechanical and Aerospace Engineering at Cornell University. Prior to that, she was Professor of Engineering and Computer Science at Duke University, where she founded and directed the NSF Integrative Graduate Education and Research Traineeship (IGERT) and Fellowship program on Wireless Intelligent Sensor Networks (WISeNet). She is the Director of the Laboratory for Intelligent Systems and Controls (LISC), and her principal research interests include robust adaptive control of aircraft, learning and approximate dynamic programming, and optimal control of mobile sensor networks. She received the B.S. degree from Embry-Riddle Aeronautical University and the M.A. and Ph.D. degrees from Princeton University. She is a senior member of the IEEE, and a member of ASME, SPIE, and AIAA. She is the recipient of the ONR Young Investigator Award (2004), the NSF CAREER Award (2005), and the Presidential Early Career Award for Scientists and Engineers (PECASE, 2006).