Spring 2016 Colloquium

Organizers: Justin Huang, Leah Perlmutter, Dieter Fox, Maya Cakmak

Integrated task and motion planning in belief space
Tomás Lozano-Pérez (MIT) 04/01/2016

Abstract: This talk describes an integrated strategy for planning, perception, state estimation, and action in complex mobile manipulation domains, based on planning in the belief space of probability distributions over states using hierarchical goal regression (pre-image back-chaining). We develop a vocabulary of logical expressions that describe sets of belief states, which serve as goals and subgoals in the planning process. We show that a relatively small set of symbolic operators can give rise to task-oriented perception in support of the manipulation goals. An implementation of this method is demonstrated in simulation and on a real PR2 robot, showing robust, flexible solution of mobile manipulation problems with multiple objects and substantial uncertainty. This is joint work with Leslie Pack Kaelbling.
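
To illustrate the belief-space idea in the abstract, here is a minimal sketch (all names, locations, and probabilities are hypothetical, not from the speakers' system) of a symbolic goal predicate over a discrete belief state, and a Bayes update from a noisy "look" action that can make the predicate true:

```python
def bloc_holds(belief, region, p_min):
    """True if the belief assigns >= p_min probability mass to `region`."""
    return sum(p for loc, p in belief.items() if loc in region) >= p_min

def look_update(belief, observed_loc, p_true=0.9, p_false=0.05):
    """Bayes update of the belief after a positive detection at `observed_loc`."""
    posterior = {}
    for loc, p in belief.items():
        likelihood = p_true if loc == observed_loc else p_false
        posterior[loc] = likelihood * p
    z = sum(posterior.values())
    return {loc: p / z for loc, p in posterior.items()}

# Goal "BLoc(obj, {table}, 0.8)" fails under the prior, so a planner working
# backward from it would insert a perception (look) action as a subgoal.
belief = {"table": 0.4, "shelf": 0.3, "counter": 0.3}
assert not bloc_holds(belief, {"table"}, 0.8)
belief = look_update(belief, "table")
assert bloc_holds(belief, {"table"}, 0.8)
```

This captures only the flavor of goals as sets of belief states; the talk's operators additionally support hierarchical regression over such conditions.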

Biography: Tomás Lozano-Pérez is currently the School of Engineering Professor in Teaching Excellence at MIT, where he is a member of the Computer Science and Artificial Intelligence Laboratory. His research has been in robot motion planning, computer vision, machine learning, medical imaging and computational chemistry. He received his degrees from MIT (SB 73, SM 77, PhD 80). He was a recipient of a 1985 Presidential Young Investigator Award and of the 2011 IEEE Robotics Pioneer Award. He is a Fellow of the AAAI and of the IEEE.

Recognizing Human Intent for Assistive Robotics
Henny Admoni (CMU / Yale) 04/08/2016

Abstract: Assistive robots provide direct, personal help to people to address specific human needs. The mechanisms by which assistive robots provide help can vary widely. Socially assistive robots act as tutors, coaches, or therapy aides to shape human behavior through social interaction. In contrast, physically assistive robots help people through direct manipulation of their environment. While these different types of assistance involve different robot functions, there exist underlying principles that remain constant across all assistive human-robot interactions. For example, robots must be able to recognize people’s goals and intentions in order to assist them, whether that assistance is social or physical. Identifying human intentions can be challenging, because the mapping from observed human behavior back to the underlying goals and beliefs which generated that behavior is often unclear. However, we can take advantage of findings from psychology, which show that people actually project their intentions in natural and often subconscious ways through their nonverbal behavior, such as eye gaze and gestures. In this talk, I describe how we can extract human intent from behavior so that robots can assist people in accomplishing their goals. I discuss research across the socially and physically assistive domains, from autonomous robots designed to teach and collaborate with humans on a building task, to a robot arm operated through shared control that helps people with mobility impairments manipulate their environment. Throughout the talk, I show how nonverbal behavior can be incorporated into these systems to improve their understanding of human intentions, which leads to more effective assistance.
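
As a toy illustration of inferring intent from nonverbal cues (illustrative only, not the speaker's model; all objects and probabilities are hypothetical), a simple Bayes filter can invert the gaze-to-goal mapping the abstract mentions:

```python
def update_goal_belief(belief, gazed_at, p_attend=0.7):
    """One Bayes update, assuming people tend to gaze at their intended goal."""
    n = len(belief)
    posterior = {}
    for goal, p in belief.items():
        # Likelihood of the observed gaze given this goal: high if they match,
        # otherwise spread uniformly over the remaining objects.
        lik = p_attend if goal == gazed_at else (1 - p_attend) / (n - 1)
        posterior[goal] = lik * p
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# Three consecutive glances at the cup make it the most probable goal.
belief = {"cup": 1/3, "bowl": 1/3, "spoon": 1/3}
for _ in range(3):
    belief = update_goal_belief(belief, "cup")
assert max(belief, key=belief.get) == "cup"
```

A shared-control arm could use such a posterior to blend its motion toward the most likely goal while the person retains command of the overall trajectory.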

Biography: Henny Admoni is a postdoctoral fellow at the Robotics Institute at Carnegie Mellon University, where she works on assistive robotics and human-robot interaction with Siddhartha Srinivasa in the Personal Robotics Lab. Henny develops and studies intelligent robots that improve people's lives by providing assistance through social and physical interactions. Henny completed her PhD in Computer Science at Yale University with Brian Scassellati. Her PhD dissertation was about modeling the complex dynamics of nonverbal behavior for socially assistive human-robot interaction. Henny also holds an MS in Computer Science from Yale University, and a BA/MA joint degree in Computer Science from Wesleyan University. Henny's scholarship has been recognized with awards such as the NSF Graduate Research Fellowship, the Google Anita Borg Memorial Scholarship, and the Palantir Women in Technology Scholarship.

Deep Learning for Robot Navigation and Perception
Wolfram Burgard (University of Freiburg) 04/11/2016

Abstract: Autonomous robots are faced with a series of learning problems to optimize their behavior. In this presentation I will describe recent approaches developed in my group, based on deep learning architectures, for object recognition and body-part segmentation from RGB(-D) images. In addition, I will present a terrain classification approach that utilizes sound. For all approaches I will describe extensive experiments quantifying how the corresponding algorithms extend the state of the art.

Biography: Wolfram Burgard is a professor of computer science at the University of Freiburg and head of the research lab for Autonomous Intelligent Systems. His areas of interest lie in artificial intelligence and mobile robots. His research mainly focuses on the development of robust and adaptive techniques for state estimation and control. Over the past years, he and his group have developed a series of innovative probabilistic techniques for robot navigation and control, covering aspects such as localization, map-building, SLAM, path planning, and exploration. Wolfram Burgard has coauthored two books and more than 300 scientific papers. In 2009, he received the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award. In 2010, he received an Advanced Grant from the European Research Council. Since 2012, he has been the coordinator of the Cluster of Excellence BrainLinks-BrainTools funded by the German Research Foundation. He is a Fellow of ECCAI, AAAI, and IEEE.

RFID-Enhanced Robots Enable New Applications in Healthcare, Asset Tracking, and Remote Sensing
Travis Deyle (Cobalt Robotics) 04/22/2016

Abstract: Mounting long-range RFID readers on mobile robots or drones permits them to opportunistically relocate antennas to a virtually infinite number of unique vantage points, including hard-to-reach locations. Robotics researchers are employing mobile readers in unstructured environments (offices, homes, and outdoors) to make great strides in robotics, strides that would otherwise be extremely difficult or impossible without long-range RFID tags. Examples include: taking inventory and locating tagged assets in homes in lieu of perfect visual object recognition; fetching and retrieving tagged objects for older adults; and using drones to obtain remote sensor measurements from "sensorized" tags for tasks such as soil moisture sensing, remote crop monitoring, water quality monitoring, remote sensor deployment, and infrastructure monitoring of buildings, bridges, and dams.

Biography: Travis Deyle is an expert in passive UHF RFID systems and their applications in healthcare and robotics. He earned a PhD from Professor Charlie Kemp's Healthcare Robotics Lab at Georgia Tech; he worked on "cyborg dragonflies" as a Postdoc at Duke University under Professor Matt Reynolds; and he worked on new, non-public projects within Google[x] Life Sciences (now Verily Life Sciences) alongside the team that developed the glucose-sensing "smart contact lens." He is now the co-founder and CEO of a stealthy robotics startup named Cobalt Robotics.

Robots That Teach
Brian Scassellati (Yale) 05/06/2016

Abstract: Robots have long been used to provide assistance to individual users through physical interaction, typically by supporting direct physical rehabilitation or by providing a service such as retrieving items or cleaning floors. Socially assistive robotics (SAR) is a comparatively new field of robotics that focuses on developing robots capable of assisting users through social rather than physical interaction. Just as a good coach or teacher can provide motivation, guidance, and support without making physical contact with a student, socially assistive robots attempt to provide the appropriate emotional, cognitive, and social cues to encourage development, learning, or therapy for an individual. In this talk, I will review some of the reasons why physical robots rather than virtual agents are essential to this effort, highlight some of the major research issues within this area, and describe some of our recent results building supportive robots for teaching 1st graders about nutrition, helping 2nd graders struggling to learn English as a second language, coaching 3rd graders on how to deal with bullies, and practicing social skills with children with autism spectrum disorder.

Biography: Brian Scassellati is a Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University and Director of the NSF Expedition on Socially Assistive Robotics. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills. Using computational modeling and socially interactive robots, his research evaluates models of how infants acquire social skills and assists in the diagnosis and quantification of disorders of social development (such as autism).

ICRA 2016 Practice Talk: Teaching English through Conversational Robotic Agents
Leah Perlmutter, Alex Fiannaca, Sahil Anand, Lindsey Arnold, Eric Kernfeld, Kimyen Truong, Akiva Notkin, and Maya Cakmak (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Combining Model-Based Policy Search with Online Model Learning for Control of Physical Humanoids
Igor Mordatch, Nikhil Mishra, Clemens Eppner, and Pieter Abbeel (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Optimal Control with Learned Local Models: Application to Dexterous Manipulation
Vikash Kumar, Emanuel Todorov, and Sergey Levine (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Monocular 3D Tracking of Deformable Surfaces
Luis Puig and Kostas Daniilidis (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: NEOL: Toward Never-Ending Object Learning for Robots
Yuyin Sun and Dieter Fox (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Design of a Highly Biomimetic Anthropomorphic Robotic Hand towards Artificial Limb Regeneration
Zhe Xu and Emanuel Todorov (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Hysteresis Model of Longitudinally Loaded Cable for Cable Driven Robots and Identification of the Parameters
Muneaki Miyasaka, Mohammad Haghighipanah, Yangming Li, and Blake Hannaford (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Dynamic Modeling of Cable Driven Elongated Surgical Instruments for Sensorless Grip Force Estimation
Yangming Li, Muneaki Miyasaka, Mohammad Haghighipanah, and Blake Hannaford (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Unscented Kalman Filter and 3D Vision to Improve Cable Driven Surgical Robot Joint Angle Estimation
Mohammad Haghighipanah, Muneaki Miyasaka, Yangming Li, and Blake Hannaford (University of Washington) 05/13/2016
ICRA 2016 Practice Talk: Making Objects Graspable in Confined Environments through Push and Pull Manipulation with a Tool
Sarah Elliott, Michelle Valente, and Maya Cakmak (University of Washington) 05/13/2016

Physics-based Manipulation
Sidd Srinivasa (CMU) 06/03/2016

Abstract: Humans effortlessly push, pull, and slide objects, fearlessly reconfiguring clutter and using physics and the world as a helping hand. But most robots treat the world like a game of pick-up sticks: avoiding clutter and attempting to rigidly grasp anything they want to move. I'll talk about some of our ongoing efforts at harnessing physics for nonprehensile manipulation and the challenges of deploying our algorithms on real physical systems. I'll specifically focus on whole-arm manipulation, state estimation for contact manipulation, and closing the feedback loop on nonprehensile manipulation.

Biography: Siddhartha Srinivasa is the Finmeccanica Associate Professor at The Robotics Institute at Carnegie Mellon University. He works on robotic manipulation, with the goal of enabling robots to perform complex manipulation tasks under uncertainty and clutter, with and around people. To this end, he founded and directs the Personal Robotics Lab and co-directs the Manipulation Lab. He has been a PI on the Quality of Life Technologies NSF ERC, DARPA ARM-S, and the CMU CHIMP team on the DARPA DRC. Sidd is also passionate about building end-to-end systems (HERB, ADA, HRP3, CHIMP, Andy, among others) that integrate perception, planning, and control in the real world. Understanding the interplay between system components has helped produce state-of-the-art algorithms for object recognition and pose estimation (MOPED) and dense 3D modeling (CHISEL, now used by Google Project Tango). Sidd received a B.Tech in Mechanical Engineering from the Indian Institute of Technology Madras in 1999, and an MS in 2001 and a PhD in 2005 from the Robotics Institute at Carnegie Mellon University. He played badminton and tennis for IIT Madras, captained the CMU squash team, and likes to run ultramarathons.

Planetary Scale Swarm Sensing, Planning and Control for Weather Prediction
Ashish Kapoor (Microsoft Research) 06/10/2016

Abstract: Weather forecasting is a canonical predictive challenge that relies on extensive data-gathering operations. We explore new directions, framing weather forecasting as a data-intensive challenge that involves large-scale sensing of the required information via planning and control of a swarm of aerial vehicles. First, we will demonstrate how commercial aircraft can be used to sense current weather conditions at a continental scale and help us create a Bayesian deep-hybrid predictive model for weather forecasts. Beyond making predictions, these probabilistic models can guide sensing with value-of-information analyses, where we consider the uncertainties and needs of sets of routes and maximize information value in light of the costs of acquiring data from a swarm of sensors. The methods can be used to select ideal subsets of locations to sense and to evaluate the value of flight trajectories for sensing. Finally, we will discuss how to carry out such large sensing missions using novel algorithms for robot planning under uncertainty.
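
As a hedged toy illustration of the value-of-information idea above (all route names, gains, and costs are hypothetical, not from the speaker's system), a greedy selector picks sensing locations whose expected uncertainty reduction best justifies the cost of acquiring data there:

```python
def greedy_voi_selection(candidates, budget):
    """candidates: {location: (expected_variance_reduction, cost)}.
    Greedily select locations by net value (gain minus cost) within budget."""
    chosen, spent = [], 0.0
    # Rank once by net value; a real system would recompute gains after each
    # pick, since information from nearby sensing locations overlaps.
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][0] - kv[1][1], reverse=True)
    for loc, (gain, cost) in ranked:
        if gain > cost and spent + cost <= budget:
            chosen.append(loc)
            spent += cost
    return chosen, spent

routes = {"SEA-ORD": (5.0, 1.0), "LAX-JFK": (3.0, 2.5),
          "DEN-ATL": (0.5, 1.0), "SFO-BOS": (2.0, 1.5)}
picked, spent = greedy_voi_selection(routes, budget=4.0)
```

Greedy selection is a common baseline here because many information-gain objectives are submodular, which gives greedy picks a constant-factor approximation guarantee.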

Biography: Ashish Kapoor is a senior researcher at Microsoft Research, Redmond. His recent research focuses on machine learning with applications to control and planning of aerial vehicles. In the past he has worked in many different areas, including quantum machine learning, computer vision, affective computing, and human-computer interaction. Ashish received his PhD from the MIT Media Lab in 2006 and prior to that graduated from the Indian Institute of Technology, Delhi.