Spring 2018 Colloquium

Organizers: Dieter Fox, Maya Cakmak, Siddhartha S. Srinivasa

Everyday Activity Science and Engineering (EASE)
Michael Beetz (University of Bremen, IAI) 03/23/2018

Abstract: Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions that are widely considered a necessity for dealing with the enormous challenges our aging society is facing. We propose Everyday Activity Science and Engineering (EASE), a fundamental research endeavour to investigate the cognitive information processing principles employed by humans to master everyday activities and to transfer the obtained insights to models for the autonomous control of robotic agents. The aim of EASE is to boost the robustness, efficiency, and flexibility of the various information processing subtasks necessary to master everyday activities by uncovering and exploiting the structures within these tasks. Everyday activities are by definition mundane, mostly stereotypical, and performed regularly. The core research hypothesis of EASE is that robots can achieve mastery by exploiting this nature of everyday activities. We intend to investigate this hypothesis by focusing on two core principles. The first principle is narrative-enabled episodic memories (NEEMs), data structures that enable robotic agents to draw knowledge from a large body of observations, experiences, or descriptions of activities. NEEMs are used to find representations that exploit the structure of activities by transferring tasks into problem spaces that are computationally easier to handle than the original spaces. These representations are termed pragmatic everyday activity manifolds (PEAMs), by analogy with manifolds as low-dimensional local representations in mathematics. Exploiting PEAMs should enable agents to achieve the desired task performance while preserving computational feasibility. The vision behind EASE is a cognition-enabled robot capable of performing human-scale everyday manipulation tasks in the open world, based on high-level instructions, and of mastering them.
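
To make the notion of a NEEM slightly more concrete, here is a minimal sketch of what such an episodic record might look like as a data structure. This is purely illustrative, with invented field names; it is not the EASE project's actual representation, which stores far richer, ontology-grounded activity logs.

    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class Episode:
        """Hypothetical NEEM-like record: a symbolic narrative of an activity
        paired with the raw observations logged while it was performed."""
        task: str                                   # e.g. "set the table"
        narrative: List[str]                        # symbolic action sequence
        observations: Dict[str, Any] = field(default_factory=dict)
        outcome: str = "unknown"                    # "success" or "failure"

    def find_precedents(memory: List[Episode], task: str) -> List[Episode]:
        """Retrieve successful past episodes of the same task."""
        return [e for e in memory if e.task == task and e.outcome == "success"]

    memory = [Episode("set the table",
                      ["grasp(cup)", "carry(cup, table)", "place(cup)"],
                      outcome="success")]
    print(len(find_precedents(memory, "set the table")))  # -> 1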

Biography: Michael Beetz is a professor of Computer Science at the Faculty of Mathematics & Informatics of the University of Bremen and head of the Institute for Artificial Intelligence (IAI). IAI investigates AI-based control methods for robotic agents, with a focus on human-scale everyday manipulation tasks. With openEASE, a web-based knowledge service providing robot and human activity data, Michael Beetz aims at improving interoperability in robotics and lowering the barriers to robot programming. To this end, the IAI group provides most of its results as open-source software, primarily in the ROS ecosystem. Michael Beetz received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. Michael Beetz is currently the coordinator of the collaborative research center EASE – Everyday Activity Science and Engineering, and was a member of the steering committee of the European network of excellence in AI planning (PLANET), where he coordinated the research area “robot planning”. He is an associate editor of the Artificial Intelligence Journal. His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognitive perception.

Building a Force-Controlled Actuator (Company)
David Rollinson (Hebi Robotics) 04/06/2018

Abstract: In 2014, I became one of five people to found HEBI Robotics, with the dream of eventually making the task of building custom robots as easy as building with Lego. A few years later we are now nine people, and our first product, a series of modular force-controlled actuators, is rapidly being adopted for research and development. In this talk I will discuss the technical aspects of developing force-controlled actuators and the software tools for controlling them, why we chose series-elastic actuation, and various challenges that we encountered during development. I will also talk about what it’s like to be an engineer who is increasingly involved with the business aspects of a growing startup.
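
Series-elastic actuation is worth unpacking briefly: a compliant spring sits between the motor and the output, so output torque can be estimated from the measured spring deflection, and force control reduces to regulating that deflection. The sketch below illustrates only this principle, with made-up stiffness and gain values; it is not HEBI's control code.

    # Minimal series-elastic torque-control loop (illustrative only).
    # Torque through the spring is k * (motor angle - output angle), so
    # commanding a torque reduces to regulating the spring deflection.

    K_SPRING = 50.0   # spring stiffness [Nm/rad], assumed
    KP = 8.0          # proportional gain on torque error, assumed

    def torque_estimate(theta_motor: float, theta_output: float) -> float:
        return K_SPRING * (theta_motor - theta_output)

    def motor_velocity_command(tau_desired: float, theta_motor: float,
                               theta_output: float) -> float:
        """P-control on torque error, expressed as a motor velocity command."""
        tau_measured = torque_estimate(theta_motor, theta_output)
        return KP * (tau_desired - tau_measured) / K_SPRING

    # Example: output link held fixed; the motor winds the spring up to 1 Nm.
    theta_m, theta_o, dt = 0.0, 0.0, 0.01
    for _ in range(100):
        theta_m += dt * motor_velocity_command(1.0, theta_m, theta_o)
    print(round(torque_estimate(theta_m, theta_o), 3))  # -> approaches 1.0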

Biography: Dave Rollinson is a co-founder and mechanical/controls engineer at HEBI Robotics. He lives and works in Pittsburgh, PA. He received a PhD in Robotics from Carnegie Mellon University in 2014, as well as a B.S. in Mechanical Engineering in 2006, also from Carnegie Mellon University. His thesis research focused on the control and design of modular snake robots, aimed at real-world applications like urban search and rescue and industrial inspection. From 2006 to 2009, he worked as a robotics engineer for RedZone Robotics, designing, building, and deploying systems to inspect large-diameter sewers in the U.S., Canada, and Singapore. In 2006, he did a solo bicycle trip across the continental United States.

Toward Human Interaction with Bio-Inspired Robot Swarms
Michael A. Goodrich (Brigham Young University) 04/13/2018

Abstract: Bio-inspired robot swarms are being designed and studied for many problems, including search, pollution monitoring and control, and security. These swarms have some important advantages over traditional multi-agent AI approaches, including resilience to robot attrition, robustness to communication failures, the ability to explore multiple solutions to a single problem, and the ability to appropriately (re)distribute resources when problems arise. These advantages come from how the decentralized computation and sensing of the robots lead to robust emergent collective behaviors. A fundamental challenge is figuring out how to allow humans to influence and manage swarms without imposing the human as a single point of failure, which would defeat the advantage of decentralized/emergent behaviors. In this talk, I will discuss our approach to enabling a human to manage and influence swarms.
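
As a concrete, and deliberately generic, illustration of decentralized computation producing emergent collective behavior, consider a textbook alignment rule in the Boids/consensus family (not Goodrich's model): each robot adjusts its heading using only locally sensed neighbors, yet the group converges on a common direction with no central controller. A human influence could then be modeled as gently biasing a few robots' headings rather than commanding the whole swarm.

    import math

    def alignment_step(headings, positions, radius=1.0, rate=0.2):
        """Each robot nudges its heading toward the mean heading of neighbors
        within `radius`. Purely local: no robot ever sees global state."""
        new_headings = []
        for i, (h_i, p_i) in enumerate(zip(headings, positions)):
            nbrs = [h_j for j, (h_j, p_j) in enumerate(zip(headings, positions))
                    if j != i and math.dist(p_i, p_j) <= radius]
            if nbrs:
                avg = math.atan2(sum(math.sin(h) for h in nbrs),
                                 sum(math.cos(h) for h in nbrs))
                h_i += rate * math.atan2(math.sin(avg - h_i), math.cos(avg - h_i))
            new_headings.append(h_i)
        return new_headings

    headings = [0.0, 1.0, 2.0]
    positions = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
    for _ in range(50):
        headings = alignment_step(headings, positions)
    print([round(h, 2) for h in headings])  # near-identical: emergent consensus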

Biography: Mike Goodrich is a professor and the chair of the Computer Science Department at Brigham Young University. He's published numerous peer-reviewed papers in areas including human-robot interaction, decision theory, artificial intelligence, intelligent vehicles, and multi-agent systems. He's grateful to have received funding for students and research from ONR, ARL, NASA, NSF, DARPA, Honda, INL, and Nissan Motor Company. He helped create and organize the ACM/IEEE International Conference on Human-Robot Interaction and the open-access Journal of Human-Robot Interaction. He likes to run to blow off steam and to enable him to eat high-calorie peanut M&Ms.

What Matters for Deformable Object Manipulation
Dmitry Berenson (University of Michigan) 04/20/2018

Abstract: Deformable objects such as cables and clothes are ubiquitous in factories, hospitals, and homes. While a great deal of work has investigated the manipulation of rigid objects in these settings, manipulation of deformable objects remains under-explored. The problem is indeed challenging, as these objects are not straightforward to model and have infinite-dimensional configuration spaces, making it difficult to apply established approaches for motion planning and control. One of the key challenges in manipulating deformable objects is selecting a model which is efficient to use in a control loop, especially when an accurate model is not available. Our approach to control uses a set of simple models of the object, determining which model to use at the current time step via a novel Multi-Armed Bandit algorithm that reasons over estimates of model utility. I will also present our work on interleaving planning and control for deformable object manipulation in cluttered environments, again without an accurate model of the object. Our method predicts when a controller will be trapped (e.g., by obstacles) and invokes a planner to bring the object near its goal. The key to making the planning tractable is to avoid simulating the motion of the object, instead only forward-propagating the constraint on overstretching. This approach takes advantage of the object’s compliance, which allows it to conform to the environment as long as stretching constraints are satisfied. Our method is able to quickly plan paths in environments with complex obstacle arrangements and then switch to the controller to achieve a desired object configuration.
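
The model-selection idea in the first part of the abstract can be illustrated with a standard UCB1 bandit; this is a generic sketch, not Berenson's exact algorithm, which reasons over model-utility estimates in a more refined way. Each arm is one candidate deformation model, and the reward is how well that model's prediction matched the observed object motion at the last control step.

    import math
    import random

    class UCB1ModelSelector:
        """Treat each candidate object model as a bandit arm (textbook UCB1)."""
        def __init__(self, n_models):
            self.counts = [0] * n_models
            self.values = [0.0] * n_models   # running mean reward per model
            self.t = 0

        def choose(self):
            self.t += 1
            for m, c in enumerate(self.counts):   # try every model once first
                if c == 0:
                    return m
            return max(range(len(self.counts)),
                       key=lambda m: self.values[m]
                       + math.sqrt(2 * math.log(self.t) / self.counts[m]))

        def update(self, m, reward):
            self.counts[m] += 1
            self.values[m] += (reward - self.values[m]) / self.counts[m]

    # Reward = agreement between the model's predicted and observed motion.
    true_quality = [0.3, 0.8, 0.5]            # hidden per-model accuracy
    sel = UCB1ModelSelector(3)
    for _ in range(500):
        m = sel.choose()
        sel.update(m, random.gauss(true_quality[m], 0.1))
    print(max(range(3), key=lambda m: sel.counts[m]))  # usually prints 1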

Biography: Dmitry Berenson received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He completed a post-doc at UC Berkeley in 2012 and was an Assistant Professor at WPI from 2012 to 2016. He started as an Assistant Professor in the EECS Department and Robotics Institute at the University of Michigan in 2016. He has received the IEEE RAS Early Career Award and the NSF CAREER award.

The Dexterity Network: Deep Learning to Plan Robust Robot Grasps using Datasets of Synthetic Point Clouds, Analytic Grasp Metrics, and 3D Object Models
Jeff Mahler (University of California, Berkeley) 04/25/2018

Abstract: Reliable robot grasping across a wide variety of objects is challenging due to imprecision in sensing, which leads to uncertainty about properties such as object shape, pose, mass, and friction. Recent results suggest that deep learning from millions of labeled grasps and images can be used to rapidly plan successful grasps across a diverse set of objects without explicit inference of physical properties, but training typically requires tedious hand-labeling or months of execution time. In this talk I present the Dexterity Network (Dex-Net), a framework to automatically synthesize training datasets containing millions of point clouds and robot grasps labeled with robustness to perturbations by analyzing contact models across thousands of 3D object CAD models. I will describe generative models for datasets of both parallel-jaw and suction-cup grasps. Experiments suggest that Convolutional Neural Networks trained from scratch on Dex-Net datasets can be used to plan grasps for novel objects in clutter with high precision on a physical robot.
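
The runtime structure of the pipeline described above (sample grasp candidates from a depth image, then rank them with a learned robustness estimator) can be sketched as follows. This is a hypothetical outline with stand-in functions, not the released Dex-Net or GQ-CNN code.

    import math
    import random

    def sample_grasp_candidates(depth_image, n=100):
        """Stand-in for antipodal grasp sampling from a depth image."""
        return [(random.random(), random.random(), random.uniform(0, math.pi))
                for _ in range(n)]                  # (x, y, gripper angle)

    def grasp_robustness(depth_image, grasp):
        """Stand-in for the learned estimate Q(image, grasp) in [0, 1]. In the
        real pipeline this is a CNN trained on millions of synthetic grasps
        labeled by analytic contact and wrench analysis; here we simply favor
        grasps near the image center to keep the sketch runnable."""
        x, y, _ = grasp
        return 1.0 - math.hypot(x - 0.5, y - 0.5)

    def plan_grasp(depth_image):
        """Rank sampled candidates by predicted robustness; return the best."""
        candidates = sample_grasp_candidates(depth_image)
        return max(candidates, key=lambda g: grasp_robustness(depth_image, g))

    print(plan_grasp(depth_image=None))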

Biography: Jeff Mahler is a Ph.D. student at the University of California, Berkeley, advised by Prof. Ken Goldberg, and a member of the AUTOLAB and the Berkeley Artificial Intelligence Research Lab. His current research is on the Dexterity Network (Dex-Net), a project that aims to train robot grasping policies from massive synthetic datasets of labeled point clouds and grasps generated using stochastic contact analysis across thousands of 3D object CAD models. He has also studied deep learning from demonstration and control for surgical robots. He received the National Defense Science and Engineering Fellowship in 2015, and in 2012, as an undergraduate at the University of Texas at Austin, he cofounded the 3D scanning startup Lynx Laboratories, which was acquired by Occipital in 2015.

Towards a Generative Model of Natural Motion
Karen Liu (Georgia Tech) 04/27/2018

Abstract: Animals adapt their movements to interact with the world in ways natural to their anatomical structures. Their motor patterns are efficient, robust, and nearly universal across individuals of the same species. My research aims to understand the dynamics and control of natural animal movements by recreating them with minimal engineering effort. Further, I seek to develop effective techniques for transferring these generative models from physical simulation to real-world robots. In this talk, I will discuss our recent endeavors in these two research areas. To date, lifelike natural motions are typically generated by highly engineered techniques that demand high-quality motion examples. At the other extreme, minimalist approaches such as deep reinforcement learning require little engineering effort but are unable to generate realistic natural motion. In contrast, we show that natural legged locomotion can emerge from two simple and well-known biomechanics principles, minimal energy and gait symmetry, without using motion examples or complicating the reward function with morphology-specific information. The second topic of this talk is the problem of sim-to-real transfer. In theory, the ability to simulate an infinite number of scenarios, actions, and physical designs should provide a compelling environment for developing effective real-world control policies for motion. In practice, the “reality gap” between virtual simulations and the physical world renders control policies developed primarily in simulation ineffective in real-life scenarios. One of our innovations for bridging the gap is a customizable contact model that combines an analytical solution with empirical data collected for a particular scenario, so that the simulated results better match the observed phenomena.
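
The two biomechanics principles named above map naturally onto terms of a reinforcement-learning reward. The sketch below shows one plausible formulation, with assumed weights and a crude half-cycle mirror test for symmetry; it is not Liu's exact objective.

    def locomotion_reward(forward_velocity, joint_torques,
                          left_joint_traj, right_joint_traj,
                          w_energy=1e-3, w_sym=0.5):
        """Reward = progress - energy cost - gait-asymmetry penalty.

        Gait symmetry is approximated by comparing the left-side joint
        trajectory against the right side shifted by half a gait cycle."""
        energy = sum(t * t for t in joint_torques)
        half = len(right_joint_traj) // 2
        mirrored = right_joint_traj[half:] + right_joint_traj[:half]
        asymmetry = sum((l - r) ** 2 for l, r in zip(left_joint_traj, mirrored))
        return forward_velocity - w_energy * energy - w_sym * asymmetry

    # Perfectly symmetric toy gait: only the energy term subtracts.
    print(locomotion_reward(1.2, [0.5, -0.3],
                            [0.1, 0.2, 0.1, 0.0], [0.1, 0.0, 0.1, 0.2]))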

Biography: C. Karen Liu is an associate professor in the School of Interactive Computing at Georgia Tech. She received her Ph.D. in Computer Science from the University of Washington. Liu's research interests are in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. She has developed computational approaches to modeling realistic and natural animal movements, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms. The algorithms and software developed in her lab have fostered interdisciplinary collaborations with researchers in robotics, computer graphics, mechanical engineering, biomechanics, neuroscience, and biology. Liu received a National Science Foundation CAREER Award and an Alfred P. Sloan Fellowship, and was named one of Technology Review's Young Innovators Under 35. In 2012, Liu received the ACM SIGGRAPH Significant New Researcher Award for her contributions to the field of computer graphics.

Recent Advances in Representation Learning for Dynamical Systems
Jung-Su Ha (KAIST) 05/04/2018

Abstract: In most cases we cannot know the exact dynamics of the system of interest, and the information available to identify the system is limited in both quantity and quality; often only raw data, which might be high-dimensional, is available. The dynamics of such sequential raw data can be efficiently represented and understood by learning a latent variable model, in which the observed raw data is assumed to emerge from a low-dimensional latent dynamical system. With recent advances in deep learning, there have been many attempts to use deep neural networks to construct latent variable models. In this talk, I will introduce and discuss some recent advances in representation learning for dynamical systems. The first part of the talk will cover the idea of amortized inference, especially the Variational Autoencoder (VAE) and the Importance Weighted Autoencoder (IWAE). In the second part, I will introduce extensions of VAEs and IWAEs to dynamical systems based on popular inference techniques such as Kalman filtering/smoothing and particle filtering, e.g., the Deep Kalman Smoother (DKS), the Kalman VAE (KVAE), Filtering Variational Objectives (FIVOs), Auto-Encoding Sequential Monte Carlo (AESMC), and Variational Sequential Monte Carlo (VSMC). Finally, I will present our work on a new type of representation learning for dynamical systems based on optimal control methods. The proposed method, the Adaptive Path-Integral Autoencoder (APIAE), takes advantage of the duality between control and inference to approximately solve the intractable inference problem using the path integral control approach, and can therefore be naturally applied to high-dimensional motion planning problems.
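
For orientation, the relationship between the VAE's evidence lower bound (ELBO) and the IWAE bound is easy to state: with K samples z_k ~ q(z|x), the IWAE objective is log[(1/K) Σ_k p(x, z_k)/q(z_k|x)], which equals the standard ELBO at K = 1 and tightens as K grows. A small NumPy illustration on a toy linear-Gaussian model follows; the densities and encoder are assumptions chosen for the example, not the talk's models.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_gauss(x, mu, sigma):
        return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

    # Toy model: z ~ N(0,1), x|z ~ N(2z, 0.5); encoder q(z|x) = N(0.4x, 0.6).
    def iwae_bound(x, K):
        z = 0.4 * x + 0.6 * rng.standard_normal(K)        # z_k ~ q(z|x)
        log_w = (log_gauss(z, 0.0, 1.0)                   # log p(z)
                 + log_gauss(x, 2.0 * z, 0.5)             # log p(x|z)
                 - log_gauss(z, 0.4 * x, 0.6))            # minus log q(z|x)
        m = log_w.max()
        return m + np.log(np.mean(np.exp(log_w - m)))     # stable log-mean-exp

    x = 1.5
    for K in (1, 5, 50):   # K = 1 recovers the ELBO; the bound tightens with K
        est = np.mean([iwae_bound(x, K) for _ in range(2000)])
        print(K, round(float(est), 3))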

Biography: Jung-Su Ha is a postdoctoral researcher in the Department of Aerospace Engineering at KAIST (Korea Advanced Institute of Science and Technology). He received his M.S. degree in Electrical Engineering from KAIST in 2013, and his Ph.D. in Aerospace Engineering from KAIST in 2018. His research interests include developing efficient algorithms for high-dimensional robotic motion planning and control problems based on optimal control and machine learning methods. He has presented his work at numerous conferences and workshops in control theory, robotics, and machine learning, including CDC, ICRA, NIPS, and ICLR.

Designing Robots for Fluent Collaboration and Companionship
Guy Hoffman (Cornell University) 05/11/2018

Abstract: Designing robots for human interaction is a multifaceted challenge involving the robot's intelligent behavior, physical form, mechanical structure, and interaction aspects. In our lab, we develop and study interactive robotic systems, combining methods from AI, Mechanical and User-Centered Design, and Human-Computer Interaction. First, I will present AI systems to support human-robot fluency, including computational cognitive architectures rooted in timing, joint action, and embodied cognition. These systems led to the development of an interactive robotic improvisation system that uses embodied gestures for simultaneous, yet responsive, joint musicianship. We are now investigating how these methods can be used for a wearable robotic arm. When it comes to the robot's physical form, I draw on the fact that the expressive movement of the robot is at the core of its function, and argue for a movement-centric design approach. The robot’s movement is not added on after the robot is designed, but factored in from the onset, in conversation with both the visual and the pragmatic requirements of the robot. I will exemplify the use of techniques from 3D character animation, sculpture, and industrial and interaction design through the design process of five socially expressive robots: Shimon, Travis, Kip, Vyo, and Blossom. The third pillar of our work is the experimental study of people interacting with robots. Our lab developed a series of low-cost smartphone-based robots, which we use in situations of disclosure, conflict, compliance, and joint experiences. Our studies investigate the role of movement, timing, and nonverbal behavior in the social relationship and companionship between humans and robots, in an effort to design robots that better reflect the values we aspire to.

Biography: Guy Hoffman is an Assistant Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Prior to that he was an Assistant Professor at IDC Herzliya and co-director of the IDC Media Innovation Lab. Hoffman holds a Ph.D. from MIT in the field of human-robot interaction. He heads the Human-Robot Collaboration and Companionship (HRC2) group, studying the algorithms, interaction schemas, and designs that enable close interactions between people and personal robots in the workplace and at home. Among other projects, Hoffman developed the world's first human-robot joint theater performance and a real-time improvising human-robot jazz duet. His research papers have won several top academic awards, including Best Paper awards at HRI and robotics conferences in 2004, 2006, 2008, 2010, 2013, and 2015. In both 2010 and 2012, he was selected as one of Israel's most promising researchers under forty. His TEDx talk is one of the most viewed online talks on robotics, watched more than 2.9 million times. Hoffman received his M.Sc. in Computer Science from Tel Aviv University as part of the Adi Lautman interdisciplinary excellence scholarship program.

Economy of Motion
Devin Balkcom (Dartmouth College) 05/18/2018

Abstract: How can robots do the most work with the fewest resources? Computer scientists are often concerned with bounds on the minimum computational time or memory required to solve a problem. In robotics, we would also like to minimize device complexity, the time required for action, and error. This talk explores the minimum capabilities required to solve a few problems in robot motion planning, manipulation of cloth and string, and assembly.

Biography: Devin Balkcom is an Associate Professor of Computer Science at Dartmouth. He studies problems of robotic manipulation, including knot tying and laundry folding, assembly and disassembly of deployable structures, and robot motion planning and control. Balkcom was awarded an NSF CAREER grant for his early work on robotic origami folding and time-optimal motion for mobile robots.

Toward a future society with Curious Minded Machines
Soshi Iba (Honda Research Institute) 07/20/2018

Abstract: Robotics researchers at Honda Research Institute (HRI) envision a future society where humans and robots coexist and work together, empowering us and providing unique value. Our aim is to endow robots with intelligent and cooperative behavior that will allow them to learn, reason, and be proactive in response to complex goals in challenging real-world environments. We believe that one of the important keys to developing such intelligent systems is curiosity: it is critically linked to information-seeking, decision-making, and intrinsically motivated learning. However, the biological functions and mechanisms regulating human curiosity are not widely understood. In this talk, we explore our research activities on humanoid robots with intelligent and cooperative behavior, then present our vision for a new research initiative that we call the Curious Minded Machine – a robot or intelligent system that learns continuously in a human-like, curiosity-driven way.
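
One standard way to formalize curiosity, offered here as a generic illustration rather than HRI's actual method, is to reward an agent in proportion to its own prediction error: transitions the agent cannot yet predict are "interesting", and the bonus fades as its world model improves. A minimal sketch with an assumed linear world model:

    import numpy as np

    rng = np.random.default_rng(1)

    class ForwardModel:
        """Linear forward model s' ~ W [s; a], trained online; the squared
        prediction error doubles as an intrinsic 'curiosity' reward."""
        def __init__(self, s_dim, a_dim, lr=0.02):
            self.W = np.zeros((s_dim, s_dim + a_dim))
            self.lr = lr

        def intrinsic_reward(self, s, a, s_next):
            x = np.concatenate([s, a])
            err = s_next - self.W @ x
            self.W += self.lr * np.outer(err, x)   # learn from the surprise
            return float(err @ err)                # bonus = prediction error

    # Toy dynamics the model has to discover: s' = A s + B a.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    model = ForwardModel(s_dim=2, a_dim=1)
    s = rng.standard_normal(2)
    for t in range(201):
        a = rng.standard_normal(1)
        s_next = A @ s + B @ a
        r_int = model.intrinsic_reward(s, a, s_next)
        if t % 50 == 0:
            print(t, round(r_int, 4))   # shrinks as the world becomes predictable
        s = s_next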

Biography: Soshi Iba joined Honda Research Institute USA (HRI-US) in Mountain View, CA, as a Principal Scientist in 2017, and leads the robotics group at HRI-US. Prior to HRI-US, he was a Chief Engineer at the Honda R&D Fundamental Technology Research Center in Saitama, Japan, with a research emphasis on humanoid robots and human-robot interaction. He completed his Ph.D. in Robotics at Carnegie Mellon University in 2004. He received his M.S. degree in 1996 and B.S. degree in 1995, both in electrical and computer engineering from Carnegie Mellon University. During 1999 he was a visiting research scholar at the University of Tokyo. His research interests include stochastic modeling, human-robot interaction, and robot navigation.

Learning with Clusters: A cardinal machine learning sin and how to correct for it
Matt Barnes (Carnegie Mellon University) 08/06/2018

Abstract: As machine learning systems become increasingly complex, clustering has evolved from an exploratory data analysis tool into an integrated component of computer vision, robotics, medical, and census data pipelines. Currently, as with many machine learning systems, the output of the clustering algorithm is taken as ground truth at the next pipeline step. We show that this false assumption causes subtle and dangerous behavior in even the simplest systems, sometimes biasing results by upwards of 25%. We provide the first empirical and theoretical study of this phenomenon, which we term dependency leakage. Further, we introduce fixes in the form of estimators and methods to both quantify and correct for clustering errors' impacts on downstream learners. Our work is agnostic to the downstream learners and requires few assumptions on the clustering algorithm. Empirical results demonstrate that our approach improves these machine learning systems compared to naive approaches that do not account for clustering errors. Along these lines, we also develop several new clustering algorithms and prove bounds for existing methods. Not surprisingly, we find that learning on clusters of data is easier as the number of clustering errors decreases. Thus, our work is two-fold: we attempt both to provide the best clustering possible and to learn on inevitably noisy clusters.
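
A toy simulation makes dependency leakage concrete; this is a generic illustration of the phenomenon, not the paper's estimators. Cluster overlapping data, then naively compute per-cluster statistics as if the labels were ground truth, and the downstream estimates come out biased even though each individual step looks reasonable.

    import random
    from statistics import mean

    random.seed(0)
    mu_a, mu_b = 0.0, 2.0                     # true cluster means
    data = ([random.gauss(mu_a, 1.0) for _ in range(5000)]
            + [random.gauss(mu_b, 1.0) for _ in range(5000)])

    # Clustering step: assign each point to the nearest mean. Even with the
    # true means given, points in the overlap region get misassigned.
    cluster_a = [x for x in data if abs(x - mu_a) < abs(x - mu_b)]
    cluster_b = [x for x in data if abs(x - mu_a) >= abs(x - mu_b)]

    # Downstream step naively treats the cluster labels as ground truth:
    print(round(mean(cluster_a), 2), round(mean(cluster_b), 2))
    # Prints roughly -0.17 and 2.17: each per-cluster mean is biased outward,
    # because each cluster loses its own overlap tail and gains the other's.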

Biography: Matt received his MS in Robotics and is currently pursuing his PhD at Carnegie Mellon University, where he studies foundational machine learning, including clustering and detecting bias in groups of data. The primary application of his work is accurately finding cases of human trafficking in billions of online escort advertisements; more generally, he is interested in theoretical and applied machine learning problems with meaningful real-world benefit. He received his BS in mechanical engineering from Penn State University, where he researched robotics for wheelchair users and energy modeling for fuel-efficient transportation. He has previously worked at Argonne National Laboratory and Uber Advanced Technologies Group, and is an NSF graduate research fellow.