Autumn 2018 Colloquium

Organizers: Tapomayukh Bhattacharjee, Maya Cakmak, Dieter Fox, Siddhartha S. Srinivasa

Drones in Public: distancing and communication with general users
Brittany Duncan (University of Nebraska-Lincoln) 09/28/2018

Abstract: This talk will focus on the role of human-robot interaction with drones in public spaces, concentrating on two research areas: proximal interactions in shared spaces and improved communication with both end-users and bystanders. Prior work on human interaction with aerial robots has focused on communication from the users or about the intended direction of flight, but has not considered how to maintain distance from, and communicate with, novice users in unconstrained environments. In this presentation, it will be argued that the diverse users and open-ended nature of public interactions offer a rich exploration space for foundational interaction research with aerial robots. Findings will be presented from both lab-based and design studies, with context provided by the field-based research that is central to the NIMBUS Lab. This presentation will be of interest to researchers and practitioners in the robotics community, as well as those in the fields of human factors, artificial intelligence, and the social sciences.

Biography: Dr. Brittany Duncan is an Assistant Professor in Computer Science and Engineering and a co-Director of the NIMBUS Lab at the University of Nebraska-Lincoln. Her research is at the nexus of behavior-based robotics, human factors, and unmanned vehicles; specifically, she focuses on how humans can more naturally interact with robots, individually or as part of ad hoc teams, in field-based domains such as agriculture, disaster response, and engineering applications. She is a PI on an NSF Faculty Early Career Development (CAREER) award, a co-PI on an NSF National Robotics Initiative (NRI) grant, and was awarded an NSF Graduate Research Fellowship in 2010. Dr. Duncan received her Ph.D. from Texas A&M University and her B.S. in Computer Science from the Georgia Institute of Technology. For more information, please see: cse.unl.edu/~bduncan or nimbus.unl.edu.

Leveraging Proprioceptive Feedback for Mobile Manipulation
Sisir Karumanchi (NASA Jet Propulsion Laboratory) 10/05/2018

Abstract: This talk highlights proprioceptive feedback as a means to do more with less sensing, less task specification, and less a priori information. The motivating application is mobile manipulation in harsh environments with fieldable robots. Mainstream R&D in robotics has focused on better representations to consolidate contextual information (deep nets, scene classifiers, world models). Such contextual understanding does lead to intelligent behaviors with better generalization. In contrast, this talk is about basic competence by way of simple behaviors (“does one thing and does it well”) and sequential composition of mixed feedback behaviors (exteroceptive interleaved with proprioceptive) that can complement each other. This talk builds on practical lessons learned from the speaker’s past experience in creating fieldable systems, where one has to work with imperfect sensors, imperfect controllers, imperfect motion planners, and imperfect hardware. A key lesson is that simple behaviors generalize better in the field. This talk postulates that proprioceptive feedback is effective because it is (i) ego-centric (it does not rely on localization) and (ii) often correlated with both task performance and control inputs. Specifically, we highlight force feedback behaviors and intermediate staging behaviors (e.g., bracing with one arm and lifting with the other, or moving a neck/torso for better camera alignment).
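
To make the sequential composition of mixed feedback behaviors concrete, here is a minimal Python sketch, assuming a hypothetical robot interface (wrist_force_along, move_relative, pose_error_to, brace_arm, and so on are illustrative names, not APIs from the talk):

def guarded_move_until_contact(robot, direction, force_threshold=5.0, step=0.005):
    # Proprioceptive behavior: advance in small steps until the wrist force along the
    # motion direction exceeds a threshold. Ego-centric: it relies only on on-board
    # force sensing, not on localization.
    while abs(robot.wrist_force_along(direction)) < force_threshold:
        robot.move_relative(direction, step)
    return "contact"

def visual_servo_to_pregrasp(robot, target):
    # Exteroceptive behavior: coarse alignment driven by camera-based pose estimates.
    while robot.pose_error_to(target) > 0.02:
        robot.step_toward(target)
    return "aligned"

def pick_with_bracing(robot, target):
    # Sequential composition: exteroception for the coarse approach, proprioception
    # for the contact-rich final phase, plus intermediate staging behaviors.
    visual_servo_to_pregrasp(robot, target)
    guarded_move_until_contact(robot, direction="-z")
    robot.brace_arm("left")   # brace with one arm ...
    robot.close_gripper()
    robot.lift(0.10)          # ... and lift with the other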

Biography: Sisir is a Robotics Technologist at NASA’s Jet Propulsion Laboratory, Caltech. He is a member of the Manipulation and Sampling Group, which focuses on adaptive sampling strategies for rovers. He was the software lead for the JPL team at the DARPA Robotics Challenge Finals. Before joining JPL, Sisir was the manipulation lead on the MIT team for the VRC and the DRC Trials phases of the DARPA Robotics Challenge program. Team RoboSimian finished fifth out of 23 teams at the DRC Finals; Team MIT finished fourth at the DRC Trials and third during the VRC phase. Sisir completed his Ph.D. with the Australian Centre for Field Robotics at the University of Sydney in 2010. From 2011 to 2014, he was a postdoc at the Massachusetts Institute of Technology, where he worked with Dr. Karl Iagnemma on semi-autonomous control of ground vehicles and on mobile manipulation with Prof. Seth Teller and Prof. Russ Tedrake.

Long Duration Autonomy and Constraint-Based Coordination of Multi-Robot Systems
Magnus Egerstedt (Georgia Institute of Technology) 10/12/2018

Abstract: By now, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives in a distributed manner, such as assembling shapes or covering areas. But the mapping from high-level tasks to these objectives is not particularly well understood. In this talk, we investigate this topic in the context of long duration autonomy, i.e., we consider teams of robots, deployed in an environment over a sustained period of time, that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding the tasks and safety constraints, as well as a detour into ecology as a way of understanding how persistent environmental monitoring, as a special instantiation of the long duration autonomy concept, can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth.
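
As a rough illustration of composing multiple barrier certificates, the sketch below solves a quadratic program that minimally modifies a nominal controller, with one constraint per certificate; it assumes single-integrator robot dynamics and hand-written barrier functions, and is not the formulation used in the talk.

import numpy as np
import cvxpy as cp

def safe_input(x, u_nominal, barriers, gamma=1.0):
    # x: state; u_nominal: desired input; barriers: list of (h, grad_h) pairs with
    # h(x) >= 0 encoding each task or safety constraint.
    u = cp.Variable(len(u_nominal))
    # For single-integrator dynamics x_dot = u, each certificate contributes
    # grad_h(x) . u >= -gamma * h(x), keeping the set {x : h(x) >= 0} forward invariant.
    constraints = [grad_h(x) @ u >= -gamma * h(x) for h, grad_h in barriers]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)), constraints).solve()
    return u.value

# Example: stay inside a disk of radius 2 while keeping at least 0.5 m from an obstacle.
obstacle = np.array([1.0, 0.0])
keep_in = (lambda x: 4.0 - x @ x, lambda x: -2.0 * x)
avoid = (lambda x: (x - obstacle) @ (x - obstacle) - 0.25, lambda x: 2.0 * (x - obstacle))
u = safe_input(np.array([0.5, 0.2]), np.array([1.0, 0.0]), [keep_in, avoid])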

Biography: Dr. Magnus Egerstedt is the Steve W. Chaddick School Chair and Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He received the M.S. degree in Engineering Physics and the Ph.D. degree in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, the B.A. degree in Philosophy from Stockholm University, and was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. Magnus Egerstedt is a Fellow of the IEEE and has received a number of teaching and research awards, including the Ragazzini Award from the American Automatic Control Council, the Outstanding Doctoral Advisor Award and the HKN Outstanding Teacher Award from Georgia Tech, and the Alumnus of the Year Award from the Royal Institute of Technology.

Building unsupervised, versatile agents with meta-learning
Chelsea Finn (Google Brain/Stanford University) 10/19/2018

Abstract: Machine learning excels primarily in settings where an engineer can first reduce the problem to a particular function and collect a substantial amount of labeled input-output pairs for that function. In stark contrast, humans are capable of learning a range of versatile behaviors from streams of raw sensory data with minimal external instruction. How can we develop machines that learn more like the latter? In this talk, I will discuss recent work on enabling ML systems and robots to be versatile, learning behaviors and concepts from raw pixel observations with minimal supervision. In particular, I will show how we can use meta-learning to infer the objective for a new task from only a few positive examples, how algorithms can use large unlabeled datasets to learn representations that allow efficient learning of downstream tasks, and how we can apply meta-reinforcement learning on a real robot to enable online adaptation in the face of novel environments.
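
As an illustration of the meta-learning machinery behind this line of work, here is a minimal MAML-style inner/outer loop in JAX; the linear regression model and single inner gradient step are simplifications for brevity, not the models or objectives used in the talk.

import jax
import jax.numpy as jnp

def predict(params, x):
    w, b = params
    return x @ w + b

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_adapt(params, x, y, lr=0.1):
    # Task-specific adaptation: one gradient step on a small support set.
    grads = jax.grad(loss)(params, x, y)
    return [p - lr * g for p, g in zip(params, grads)]

def meta_loss(params, support, query):
    # Outer objective: loss on the query set *after* adapting on the support set.
    adapted = inner_adapt(params, *support)
    return loss(adapted, *query)

# Differentiating through the inner update gives the meta-gradient used to update
# the initialization so that it adapts quickly to new tasks.
meta_grad = jax.grad(meta_loss)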

Virtual and Mixed Reality Interfaces for Human-Robot Interaction
Samir Gadre (Microsoft) 10/26/2018

Abstract: Virtual Reality (VR) and Mixed Reality (MR) are promising interfaces to facilitate productive human-robot interactions. We present recent VR and MR interfaces that allow users to naturally visualize and control robot motion. This talk focuses on the key technologies and architectures we use to build VR/MR interfaces, and how we use these technologies to create collaborative experiences. We discuss our application of VR/MR interfaces to active areas in robotics research such as robot programming, learning from demonstration, and symbol grounding.

Biography: Samir Gadre is a recent graduate of Brown University, where he earned a B.S. in Computer Science. He completed his senior thesis on Mixed Reality interfaces for collecting training data for learning-from-demonstration algorithms. Samir is interested in the intersections between computer vision, robotics, and human-robot interaction. He is passionate about the democratization of robotics.

Symbol Grounding through Behavioral Exploration and Multisensory Perception: Solutions and Open Problems
Jivko Sinapov (Tufts University) 11/02/2018

Abstract: Solving the symbol grounding problem in a robotics setting requires the robot to connect internal representations of symbolic information to real world data from its sensory experience. The problem is especially important for language learning, as a robot must have the means to represent symbols such as “red”, “soft”, “bigger than”, etc., not only in terms of other symbols but also in terms of its own perception of objects for which these symbols may be true or false. In this talk, I will present a general framework for symbol grounding in which a robot connects semantic descriptors of objects and their relationships to its multisensory experience produced when interacting with objects. The framework is inspired by research in cognitive and developmental psychology that studies how behavioral object exploration in infanthood is used by humans to learn grounded representations of objects and their affordances. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a “question” to the object, which is subsequently “answered” by the sensory stimuli produced during the execution of the behavior. In the proposed framework, the robot interacts with objects using a diverse set of behaviors (e.g., grasping, lifting, looking) coupled with a variety of sensory modalities (e.g., vision, audio, haptics). I will present results from several large-scale experiments involving human-robot and robot-object interaction, which show that the framework enables robots to learn multisensory object models, as well as to ground the meaning of linguistic descriptors extracted through human-robot dialogue. For example, the word “heavy” is automatically grounded in the robot’s haptic sensations when lifting an object, while the word “red” is grounded in the robot’s visual input, without the need for a human expert to specify which sensory input is necessary for learning a particular word. The proposed framework is also evaluated in a service robotics object delivery setting, where the robot must efficiently identify whether a set of linguistic descriptors (e.g., “a red empty bottle”) applies to an object. Finally, I will conclude with a discussion of open problems in multisensory symbol grounding which, if solved, could result in the large-scale deployment of such systems in real-world domains.
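
One way to picture the grounding step is to train, for each word, a separate classifier in every (behavior, modality) context and keep only the contexts in which the word is reliably recognizable. The sketch below uses scikit-learn and a hypothetical data layout; it is an illustration, not the exact pipeline from the talk.

from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ground_word(word, interactions, labels, min_accuracy=0.7):
    # interactions: dict mapping (behavior, modality) -> feature matrix with one row per
    # explored object; labels: binary array, 1 if the word applies to that object.
    grounded = {}
    for context, features in interactions.items():
        acc = cross_val_score(SVC(), features, labels, cv=5).mean()
        if acc >= min_accuracy:
            # e.g. "heavy" tends to survive only in a (lift, haptics) context,
            # while "red" survives in a (look, vision) context.
            grounded[context] = SVC().fit(features, labels)
    return grounded   # contexts in which the word is grounded, with their models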

Biography: Jivko Sinapov received his Ph.D. in computer science and human-computer interaction from Iowa State University (ISU) in the Fall of 2013. While working toward his Ph.D. at ISU's Developmental Robotics Lab, he developed novel methods for behavioral object exploration and multi-modal perception. He went on to be a clinical assistant professor with the Texas Institute for Discovery, Education, and Science at UT Austin and a postdoctoral associate working with Peter Stone at the UTCS Artificial Intelligence lab. Jivko Sinapov joined Tufts University in the Fall of 2017 as the James Schmolze Assistant Professor in Computer Science. Jivko's research interests include cognitive and developmental robotics, computational perception, human-robot interaction, and reinforcement learning.

Towards Generalizable Autonomy in Robotics
Animesh Garg (Nvidia AI Research Lab / Stanford AI Lab) 11/09/2018

Abstract: Robotics and AI are experiencing radical growth, fueled by innovations in data-driven learning paradigms coupled with novel device design, in applications such as healthcare, manufacturing, and service robotics. Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, but they lack guarantees and often incur a massive computational expense: training these models frequently takes weeks, in addition to months of task-specific data collection on physical systems. Further, such ab initio methods often do not scale to complex sequential tasks. In contrast, biological agents can often learn faster, not only through self-supervision but also through imitation. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that can capture semantic task structures that promote sample efficiency and can generalize to new task instances across visual, dynamical, or semantic variations. This involves designing algorithms that unify reinforcement learning, control-theoretic planning, semantic scene and video understanding, and design. In this talk, I will discuss two aspects of generalizable imitation: task imitation, and generalization in both visual and kinematic spaces. First, I will describe how we can move away from hand-designed finite state machines through unsupervised structure learning for complex multi-step sequential tasks. Then I will discuss techniques for robust policy learning to handle generalization across unseen dynamics. I will revisit task structure learning to show how task-level understanding generalizes across visual semantics. Lastly, I will present a method for generalization across task semantics from a single example with unseen task structure, topology, or length. The algorithms and techniques introduced are applicable across domains in robotics; in this talk, I will exemplify these ideas through my work on medical and personal robotics.

Biography: Animesh Garg is a Senior Research Scientist at the Nvidia AI Research Lab and a Research Scientist at the Stanford AI Lab. Animesh received his Ph.D. from the University of California, Berkeley, where he was part of the Berkeley AI Research group, and spent two years as a Postdoctoral Researcher at the Stanford AI Lab. Animesh works in the area of robot skill learning, and his work sits at the interface of optimal control, machine learning, and computer vision methods for robotics applications. He has worked on data-driven learning for autonomy and human-skill augmentation in surgical robotics and personal robots. His research has been recognized with the Best Applications Paper Award at IEEE CASE, the Best Video Award at the Hamlyn Symposium on Surgical Robotics, and a Best Paper Nomination at IEEE ICRA 2015. His work has also been featured in press outlets such as The New York Times, UC Health, UC CITRIS News, and BBC Click.

Robots, Disney, and Touch - Can we get closer to our robots?
Günter Niemeyer (Disney Research) 11/16/2018

Abstract: Robotics obviously has a long history, including at Disney, but touch has been one of the more challenging aspects. From peg-in-hole tasks and force control to grasping and shaking hands, enabling our robots to interact is hard but critical: we need to endow them with a better sense (and act) of touch. I would like to review some of the systems at Disney and some related work, both inside and outside Disney, covering both telerobotics, where the robot interacts with a human operator, and direct interactions with a human partner. Indeed, simultaneously controlling interaction forces and motion leads to the classic stability problems and performance trade-offs. Impedance control and passivity are standard, robust tools that rely on minimal assumptions, but they can lead to conservative solutions that often feel roboticy. So we ask ourselves: how should we build robots, what assumptions should we make, what controls and models are appropriate, and how do we create behaviors that make robots act and feel more natural? Can we get robots ready for up-close human interactions?
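
For reference, the classic Cartesian impedance law mentioned above renders the end-effector as a virtual spring-damper about a desired motion; the sketch below uses illustrative gains and is not a description of Disney's controllers.

import numpy as np

def impedance_force(x, x_dot, x_des, x_des_dot,
                    K=np.diag([200.0, 200.0, 200.0]),
                    D=np.diag([30.0, 30.0, 30.0])):
    # Commanded end-effector force for a Cartesian impedance behavior:
    # a spring K pulls toward the desired pose, a damper D tracks the desired velocity.
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

# Softer gains (lower K and D) feel more compliant and natural in contact, at the cost
# of tracking accuracy; this is the stability/performance trade-off mentioned above.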

Biography: Günter Niemeyer is a senior research scientist at Disney Research, Los Angeles. His research examines physical human-robot interactions and interaction dynamics, force sensitivity and feedback, teleoperation with and without communication delays, and haptic interfaces. He received M.S. and Ph.D. degrees from the Massachusetts Institute of Technology (MIT) in the areas of adaptive robot control and bilateral teleoperation, introducing the concept of wave variables. He also held a postdoctoral research position at MIT developing surgical robotics. In 1997, he joined Intuitive Surgical Inc., where he helped create the da Vinci Minimally Invasive Surgical System. He was a member of the Stanford faculty from 2001 to 2009, directing the Telerobotics Lab. From 2009 to 2012 he worked with the PR2 personal robot at Willow Garage. He joined Disney Research in 2012.

Doing for our robots what evolution did for us
Leslie Kaelbling (MIT) 11/30/2018

Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot. Joint work with: Tomas Lozano-Perez, Zi Wang, Caelan Garrett and a fearless group of summer robot students.

Biography: Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founder of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.

Search-based Planning for High-dimensional Robotic Systems Using Ensembles of Solutions to Their Low-dimensional Abstractions
Maxim Likhachev (CMU) 12/07/2018

Abstract: Search-based Planning refers to planning by constructing a graph from a systematic discretization of the state- and action-space of a robot and then employing a heuristic search to find an optimal path from the start to the goal vertex in this graph. This paradigm works well for low-dimensional robotic systems such as mobile robots and provides rigorous guarantees on solution quality. However, when it comes to planning for higher-dimensional robotic systems such as mobile manipulators, humanoids, and vehicles driving at high speed, Search-based Planning has typically been thought of as infeasible. In this talk, I will describe some of the research that my group has done to change this thinking. In particular, I will focus on our recent findings on how Search-based Planning can be made feasible when planning for high-dimensional systems, based on the idea that we can construct multiple lower-dimensional abstractions of such systems, solutions to which can effectively guide the overall planning process. To this end, I will describe Multi-Heuristic A*, an algorithm recently developed by my group, some of its extensions, and its applications to a variety of high-dimensional planning and complex decision-making problems in Robotics.
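
To give a flavor of the multi-queue idea behind Multi-Heuristic A*, here is a simplified Python sketch in which several heuristics (for example, costs-to-go computed from low-dimensional abstractions) each drive their own priority queue over a shared search and are expanded round-robin. It omits the anchor-queue bookkeeping that gives the full algorithm its suboptimality guarantees.

import heapq

def multi_heuristic_search(start, goal, successors, heuristics):
    # successors(s) -> iterable of (neighbor, edge_cost); heuristics: list of h(s, goal).
    # States must be hashable and comparable (for tie-breaking inside the heaps).
    g = {start: 0.0}
    parent = {start: None}
    queues = [[(h(start, goal), start)] for h in heuristics]
    closed = set()
    while any(queues):
        for q in queues:                    # round-robin over the heuristic queues
            if not q:
                continue
            _, s = heapq.heappop(q)
            if s == goal:                   # reconstruct and return the path
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            if s in closed:
                continue
            closed.add(s)
            for nbr, cost in successors(s):
                if g[s] + cost < g.get(nbr, float("inf")):
                    g[nbr] = g[s] + cost
                    parent[nbr] = s
                    for q2, h in zip(queues, heuristics):
                        heapq.heappush(q2, (g[nbr] + h(nbr, goal), nbr))
    return None                             # no path found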

Biography: Maxim Likhachev is an Associate Professor at Carnegie Mellon University, directing the Search-based Planning Laboratory (SBPL). His group researches heuristic search, decision-making, and planning algorithms, with applications to the control of robotic systems including unmanned ground and aerial vehicles, mobile manipulation platforms, humanoids, and multi-robot systems. Maxim obtained his Ph.D. in Computer Science from Carnegie Mellon University with a thesis titled “Search-based Planning for Large Dynamic Environments.” He has over 120 publications in top journals and conferences on AI and robotics and numerous awards. His work on the Anytime D* algorithm, an anytime planning algorithm for dynamic environments, was awarded the title of Influential 10-Year Paper at the International Conference on Automated Planning and Scheduling (ICAPS) 2017, the top venue for research on planning and scheduling. Other awards include selection for the 2010 DARPA Computer Science Study Panel, which recognizes promising faculty in computer science, a Best Paper Award at RSS, membership on a team that won the 2007 DARPA Urban Challenge and on a team that won the Gold Edison Award in 2013, and a number of others.