Autumn 2014 Colloquium

Organizers: Vikash Kumar, Maya Cakmak, Dieter Fox

Geometric Algorithms for Computing Frictionally Contacting Systems
Danny Kaufman (Adobe Creative Technologies Lab, Seattle) 10/03/2014

Abstract: Algorithms that accurately capture the combined effects of dissipation and contact processes are essential for the physical modeling of many poorly understood phenomena. These range from prosaic domestic phenomena, such as the chattering of a chair dragged across the floor, to emergent pattern formation in driven granular assemblies. Yet the fundamental features of contact mechanics pose significant computational challenges, including strong nonlinearity, nonsmoothness, nonconvexity, and nonuniqueness, compounded by the difficulty of scaling to the high-dimensional systems and interactive rates required by modern research, entertainment, and industrial applications. In this talk I will discuss how these fundamental challenges can be successfully addressed by geometric algorithms that respect core properties of the modeled physical systems. I will explain how these critical geometric features are identified and incorporated as fundamental algorithmic building blocks so that predictive and convincing simulations follow by construction. I will present examples of how algorithms I have developed with such "baked-in" geometry have enabled the efficient and scalable computation of highly difficult and, in some cases, previously intractable simulation problems in contact modeling, animation, interactive design, and haptic rendering. Moving forward, I will argue that building geometry into our computations is key to developing the next generation of physical simulation and design algorithms that are both simple, easing adoption and code maintenance, and efficiently predictive, producing reliable and visually compelling results.
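For readers unfamiliar with the computational setting, a standard baseline treats contact at the velocity level as a complementarity problem, which is where the nonsmoothness and nonconvexity mentioned above enter. The toy Python sketch below (a single 2D particle over a floor, solved with a projected Gauss-Seidel impulse loop) illustrates that structure; it is an illustrative stand-in under assumed parameters, not Kaufman's algorithms.

```python
# Toy sketch of the standard computational setting for frictional contact:
# a velocity-level time step where impulses solve a complementarity problem.
# A 2D particle of mass m falls onto a floor; a projected Gauss-Seidel loop
# finds a normal impulse (non-penetration) and a tangential impulse clamped
# to the Coulomb friction cone. This is an illustrative baseline, not
# Kaufman's algorithms; every parameter below is an assumption.
import numpy as np

def contact_step(q, v, m=1.0, g=9.81, mu=0.5, dt=1e-2, iters=20):
    """Advance position q and velocity v of a 2D particle by one time step."""
    v = v + dt * np.array([0.0, -g])               # unconstrained update
    if q[1] + dt * v[1] < 0.0:                     # floor contact predicted
        lam_n = lam_t = 0.0                        # normal/tangent impulses
        for _ in range(iters):                     # projected Gauss-Seidel
            # Non-penetration: post-impulse normal velocity >= 0, lam_n >= 0.
            lam_n = max(0.0, lam_n - m * (v[1] + lam_n / m))
            # Coulomb friction: oppose sliding, clamped to the friction cone.
            lam_t = min(max(lam_t - m * (v[0] + lam_t / m), -mu * lam_n),
                        mu * lam_n)
        v = v + np.array([lam_t, lam_n]) / m       # apply contact impulses
    return q + dt * v, v

q, v = np.array([0.0, 1.0]), np.array([2.0, 0.0])  # drop with sideways slide
for _ in range(300):
    q, v = contact_step(q, v)
print(q, v)  # the particle impacts, friction stops the slide, and it rests
```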

Biography: Danny Kaufman is a research scientist at Adobe Creative Technologies Lab in Seattle. His research focuses on developing geometric algorithms and frameworks to obtain predictive, expressive, and efficient simulations of physical systems for applications in computer animation, interactive design, robotics, and computational physics. His work on physical simulation algorithms has led to ongoing collaborations with industrial partners including Weta Digital, Disney, and Thunderlily. He completed his PhD at Rutgers University in 2009, was a visiting scholar in the Imager Lab at The University of British Columbia from 2006 through 2010, and was a postdoc in the Computer Science department at Columbia University from 2011 to 2013.

VR, the future, and you
Dubi Katz & Michael Abrash (Oculus VR) 10/10/2014

Abstract: In the surprisingly near future, VR is very likely to transform how we interact with information, computers, and each other. This talk will discuss why VR is likely to be a key part of our future, why it's different from anything that's come before, and what that implies for researchers and developers.

Biography: Over the last 30 years, Michael has worked at companies that made graphics hardware, computer-based instrumentation, and rendering software, been the GDI lead for the first couple of versions of Windows NT, worked with John Carmack on Quake, worked on Xbox and Xbox 360, written or co-written at least four software rasterizers (the last one of which, written at RAD Game Tools, turned into Intel’s late, lamented Larrabee project), and worked on VR at Valve. Along the way he wrote a bunch of magazine articles and columns for Dr. Dobb’s Journal, PC Techniques, PC Tech Journal, and Programmer’s Journal, as well as several books. He’s been lucky enough to have more opportunities to work on interesting stuff than he could ever have imagined when he almost failed sixth grade because he spent all his time reading science fiction. He thinks VR is going to be the most interesting project of all.

What happens if I push this button? Learning planning operators from experience
Kira Mourao (University of Edinburgh) 10/17/2014

Abstract: When a robot, dialog manager, or other agent operates autonomously in a real-world domain, it uses a model of the dynamics of its domain to plan its actions. Typically, pre-specified domain models are used by AI planners to generate plans. However, creating these domain models is notoriously difficult. Furthermore, to be truly autonomous, agents must be able to learn their own models of world dynamics. An alternative, therefore, is to learn domain models from observations, either via known successful plans or through exploration of the world. This route is also challenging, as agents often do not operate in a perfect world: both actions and observations may be unreliable. In this talk I will present a method which, unlike other approaches, can learn both from observed successful plans and from action traces generated by exploration. Importantly, the method is robust in a variety of settings, able to learn useful domain models when observations are noisy and incomplete, or when action effects are noisy or non-deterministic. The approach first builds a classification model to predict the effects of actions, and then derives explicit planning operators from the classifiers. Through a range of experiments using International Planning Competition domains and a real robot domain, I will show that this approach learns accurate domain models suitable for use by standard planners. I will also demonstrate that, where settings are comparable, the results equal or surpass the performance of state-of-the-art methods.
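To make the classify-then-extract pipeline concrete, here is a toy sketch of the operator-extraction step under strong simplifying assumptions: noise-free, fully observed transitions and invented example predicates. Mourao's actual method instead trains classifiers that tolerate noisy, incomplete observations and derives operators from them.

```python
# Toy sketch of the operator-extraction step: from observed transitions,
# recover STRIPS-style operators by intersecting preconditions and keeping
# only consistent add/delete effects. This assumes noise-free, fully observed
# transitions and invented example predicates; Mourao's method instead trains
# classifiers that tolerate noisy, incomplete observations.
from collections import defaultdict

def learn_operators(transitions):
    """transitions: (action, pre_state, post_state) triples; states are
    frozensets of ground propositions."""
    by_action = defaultdict(list)
    for a, pre, post in transitions:
        by_action[a].append((pre, post))
    ops = {}
    for a, obs in by_action.items():
        pre0, post0 = obs[0]
        precond = set(pre0)
        adds, dels = set(post0 - pre0), set(pre0 - post0)
        for pre, post in obs[1:]:
            precond &= pre          # keep propositions true in every pre-state
            adds &= (post - pre)    # keep effects observed in every transition
            dels &= (pre - post)
        ops[a] = {"pre": precond, "add": adds, "del": dels}
    return ops

demo = [
    ("push_button", frozenset({"at_door", "door_closed"}),
                    frozenset({"at_door", "door_open"})),
    ("push_button", frozenset({"at_door", "door_closed", "light_on"}),
                    frozenset({"at_door", "door_open", "light_on"})),
]
print(learn_operators(demo))
# -> push_button: pre {at_door, door_closed}, add {door_open}, del {door_closed}
```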

Biography: Kira Mourao is a Postdoctoral Research Associate in the Institute for Language, Cognition and Computation (ILCC) in the School of Informatics at the University of Edinburgh. She currently works on the EU project Xperience, developing new methods for grounding action representations for robots. Her broad research interests are both in using cognitive robotics to inform theories of grounded cognition and in applying theories of grounded cognition to develop cognitive robots.

Hybrid Models for Dynamic and Dexterous Robots
Sam Burden (University of California, Berkeley) 10/24/2014

Abstract: To move through and interact with the world, a robot must intermittently contact its environment. When contacts are established or broken, the equations of motion change abruptly. Models of these piecewise-defined (or "hybrid") dynamics exhibit discontinuities and inconsistencies that generally limit their utility. In this talk I will present techniques that exploit intrinsic properties of the mechanics of locomotion and manipulation to circumvent these pathologies. By topologically quotienting and smoothing the hybrid state space, I remove discontinuities that arise when limbs impact terrain. By restricting the class of impact restitution laws, I resolve inconsistencies that emerge when several limbs touch down nearly simultaneously (as with a quadruped's trot or a hexapod's alternating-tripod gait). In addition to broadening the applicability of hybrid models for the development of dynamic and dexterous robots, these results provide novel mechanisms for stabilization of rhythmic behaviors and aperiodic maneuvers.
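For readers new to the formalism, a hybrid system alternates continuous flow with discrete jumps: integrate a mode's vector field until a guard function crosses zero, then apply a reset map. The minimal, illustrative sketch below (a 1D bouncing mass with assumed parameters) exhibits exactly the non-smooth behavior the talk addresses, including the accumulation of impact times near rest.

```python
# Minimal sketch of a hybrid (piecewise-defined) system: integrate a mode's
# vector field until a guard function crosses zero, then apply a reset map.
# The example, a 1D point mass bouncing on the ground, and its parameters are
# illustrative; it exhibits the non-smooth behavior (state jumps at impact,
# accumulating impact times near rest) that the talk's constructions tame.
import numpy as np

def simulate(x0, flow, guard, reset, dt=1e-3, T=3.0):
    """Event-driven Euler simulation of one-mode hybrid dynamics."""
    x, traj = np.asarray(x0, float), []
    for _ in range(int(T / dt)):
        x = x + dt * flow(x)            # continuous flow (Euler step)
        if guard(x) < 0.0:              # guard crossed: discrete transition
            x = reset(x)                # apply the impact reset map
        traj.append(x.copy())
    return np.array(traj)

g, e = 9.81, 0.8                        # gravity, coefficient of restitution
traj = simulate(
    x0=[1.0, 0.0],                      # initial height and velocity
    flow=lambda x: np.array([x[1], -g]),
    guard=lambda x: x[0],               # the ground sits at height zero
    reset=lambda x: np.array([0.0, -e * x[1]]),  # reverse and damp velocity
)
print(traj[-1])  # near [0, 0]: impacts accumulate as the mass comes to rest
```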

Biography: Sam Burden earned his BS with Honors in Electrical Engineering from the University of Washington in Seattle. He earned his PhD in Electrical Engineering and Computer Sciences at the University of California, Berkeley, where he is currently a postdoctoral researcher. In Fall 2015, Sam will return to UW EE as an Assistant Professor. He focuses on discovering and formalizing principles that enable dynamic locomotion and dexterous manipulation in robotics, biomechanics, and human motor control. Broadly, he is interested in developing a sensorimotor control theory for neuromechanical and cyberphysical systems. In his spare time, he enjoys teaching robotics to students of all ages in K-12 classrooms, at Maker Faires, and at campus events.

Human-Centered Principles and Methods for Designing Robotic Technologies
Bilge Mutlu (University of Wisconsin, Madison) 10/29/2014

Learning to Move: Machine Learning for Robotics and Animation
Sergey Levine (University of California, Berkeley) 10/31/2014

Abstract: Being able to acquire new motion skills autonomously could help robots build rich motion repertoires suitable for tackling complex, varied environments. I will discuss my work on motion skill learning for robotics, including methods for learning from demonstration and reinforcement learning. In particular, I will describe a class of "guided" policy search algorithms, which combine reinforcement learning and learning from demonstration to acquire multiple simple, trajectory-centric policies, then distill them through a supervised learning phase into a single complex, high-dimensional policy that generalizes to new situations. I will show applications of this method to simulated bipedal locomotion, as well as a range of robotic manipulation tasks, including putting together two parts of a plastic toy and screwing bottle caps onto bottles. I will also discuss how such techniques can be applied to character animation in computer graphics, and how that field can inform research in robotics.
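The two-phase structure described above can be sketched schematically. In the hypothetical example below, the trajectory-centric phase is a finite-horizon LQR on a known double-integrator and the global policy is linear; both are stand-ins chosen for brevity, whereas guided policy search uses trajectory optimization or local RL under unknown dynamics and trains expressive (e.g., neural network) policies.

```python
# Schematic sketch of the two phases of guided policy search. Phase 1 obtains
# simple trajectory-centric controllers; here a finite-horizon LQR on a known
# double-integrator serves as a stand-in for trajectory optimization or local
# RL. Phase 2 fits one global policy by supervised regression on state-action
# pairs sampled from the local controllers. The dynamics, costs, and linear
# policy class are illustrative assumptions, not Levine's exact formulation.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])       # quadratic state/control costs

def lqr_gains(horizon=50):
    """Phase 1: backward Riccati recursion -> time-varying feedback gains."""
    P, Ks = Q.copy(), []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    return Ks[::-1]

def rollout(x0, Ks):
    """Execute a local controller from x0, logging state-action pairs."""
    xs, us, x = [], [], np.asarray(x0, float)
    for K in Ks:
        u = -K @ x
        xs.append(x.copy()); us.append(u)
        x = A @ x + B @ u
    return np.array(xs), np.array(us)

# Phase 2: aggregate samples from several local controllers and fit a single
# global policy u = W x by least squares (the supervised learning step).
Ks = lqr_gains()
X, U = [], []
for x0 in ([1.0, 0.0], [-2.0, 1.0], [0.5, -1.0]):
    xs, us = rollout(x0, Ks)
    X.append(xs); U.append(us)
X, U = np.vstack(X), np.vstack(U)
W = np.linalg.lstsq(X, U, rcond=None)[0]
print("global policy gains:", W.ravel())
```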

Biography: Sergey Levine is a postdoctoral researcher working with Professor Pieter Abbeel at the University of California at Berkeley. He previously completed his PhD with Professor Vladlen Koltun at Stanford University. His research areas include robotics, reinforcement learning and optimal control, machine learning, and computer graphics. His work includes the development of new algorithms for learning motor skills, methods for learning behaviors from human demonstration, and applications in robotics and computer graphics, ranging from robotic manipulation to animation of martial arts and conversational hand gestures.

Coping with Uncertainty in Robotic Navigation and Manipulation
Sachin Patil (University of California, Berkeley) 11/14/2014

Abstract: A key challenge in robotics is to robustly complete navigation, exploration, and manipulation tasks when the state of the world is uncertain. This is a fundamental problem in several application areas, such as logistics, personal robotics, and healthcare, where robots with imprecise actuation and sensing are being deployed in unstructured environments. In such a setting, it is necessary to reason about the acquisition of perceptual knowledge and to perform information-gathering actions as needed. In this talk, I will present an approach to motion planning under motion and sensing uncertainty, called "belief space" planning, where the objective is to trade off exploration (gathering information) against exploitation (performing actions) in the context of performing a task. In particular, I will present how we can use trajectory optimization to compute locally-optimal solutions to a determinized version of this problem in Gaussian belief spaces. I will show that it is possible to obtain significant computational speedups, without explicitly optimizing over the covariances, by considering a partial collocation approach. I will also address the problem of computing such trajectories when measurements may not be obtained during execution due to factors such as occlusions and the limited field of view of sensors. I will demonstrate this approach in the context of robotic grasping in unknown environments, where the robot must simultaneously explore the environment and grasp occluded objects whose geometry and positions are initially unknown.
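A minimal sketch of the Gaussian belief dynamics at the heart of this formulation follows: a candidate control sequence is scored by propagating a Kalman-filter variance along the trajectory under the maximum-likelihood-observation assumption, so the planner trades off control effort against residual uncertainty. The 1D system, sensing model, and weights below are all illustrative assumptions, and the random-search "optimizer" merely stands in for trajectory optimization.

```python
# Sketch of the Gaussian belief dynamics used in belief-space planning: a
# candidate control sequence is scored by propagating a Kalman-filter
# variance along the trajectory and penalizing residual uncertainty plus
# control effort. The 1D linear system, the sensing model (noisier far from
# a "sensor zone" at x = 1), and all weights are illustrative assumptions,
# as is the maximum-likelihood-observation simplification.
import numpy as np

def belief_cost(us, x0=0.0, s0=1.0, q=0.01, dt=0.1):
    """Score a control sequence us by effort plus accumulated uncertainty."""
    x, s = x0, s0                          # belief mean and variance
    cost = 0.0
    for u in us:
        x, s = x + dt * u, s + q           # prediction step (process noise q)
        r = 0.1 + (x - 1.0) ** 2           # measurement noise grows off-zone
        k = s / (s + r)                    # Kalman gain
        s = (1.0 - k) * s                  # covariance update (ML observation)
        cost += 0.1 * u ** 2 + s           # effort + uncertainty penalty
    return cost + 10.0 * x ** 2            # terminal cost: end near the origin

# Naive random search stands in for the trajectory optimizer: good plans
# detour toward the informative region before returning to the goal.
rng = np.random.default_rng(0)
best = min((rng.uniform(-2.0, 2.0, size=20) for _ in range(2000)),
           key=belief_cost)
print("best cost:", round(float(belief_cost(best)), 3))
```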

Biography: Sachin Patil is a postdoctoral researcher working with Prof. Pieter Abbeel and Prof. Ken Goldberg at the University of California at Berkeley. He previously completed his PhD with Prof. Ron Alterovitz at the University of North Carolina at Chapel Hill. His research focuses on developing rigorous motion planning algorithms to enable new, minimally invasive medical procedures and to facilitate reliable operation of robots in unstructured environments.

HRI Mini Symposium
11/21/2014

Representing Objects in Robotics from Visual, Depth and Tactile Sensing
Marianna Madry (Royal Institute of Technology (KTH), Sweden) 12/05/2014

Abstract: Being able to localize, identify, and manipulate objects is of key importance for a large range of tasks in robotics. Recent developments in depth cameras and haptic sensors have made visual, 3D, and tactile data widely available, creating a need for representations that enable detection, recognition, and manipulation of objects. In this talk, I will discuss the desired characteristics of an object representation and the main challenges in real applications. I will begin by demonstrating how simultaneously encoding object appearance and affordance can enable transfer of grasp information to a novel object and facilitate object manipulation by a humanoid robot. Then, I will present our 3D data descriptor, the Global Structure Histogram (GSH), which encodes the global structure of local surface properties and thereby generalizes robustly over different object poses, scales, and data incompleteness, outperforming state-of-the-art global descriptors in real-world conditions. Finally, I will introduce our new descriptor, Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP), which captures properties of a time series of tactile sensor measurements. ST-HMP is based on the concept of unsupervised hierarchical feature learning, realized using sparse coding.
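The flavor of ST-HMP can be conveyed with a highly simplified sketch: encode each tactile frame's patches as sparse codes over a dictionary, then max-pool the codes over a spatio-temporal pyramid so the descriptor reflects both where and when pressure occurred. The version below uses a random dictionary and 1-sparse matching pursuit purely for brevity; the real method learns its dictionary and stacks hierarchical layers, and all sizes here are assumptions.

```python
# Highly simplified sketch of the ST-HMP idea: encode each tactile frame's
# patches with sparse codes over a dictionary, then max-pool the codes over
# a spatio-temporal pyramid so the final descriptor reflects both where and
# when pressure occurred. A random dictionary and 1-sparse matching pursuit
# are used purely for brevity; the real method learns the dictionary and
# stacks hierarchical layers. All sizes and pooling levels are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))             # 64 atoms for 4x4 patches
D /= np.linalg.norm(D, axis=1, keepdims=True)

def encode_frame(frame):
    """8x8 tactile frame -> 1-sparse codes for its four 4x4 patches."""
    patches = [frame[i:i + 4, j:j + 4].ravel() for i in (0, 4) for j in (0, 4)]
    codes = np.zeros((4, 64))
    for p, patch in enumerate(patches):
        scores = D @ patch
        k = int(np.argmax(np.abs(scores)))    # best-matching atom
        codes[p, k] = abs(scores[k])
    return codes

def st_hmp(frames):
    """Pool per-frame codes over the whole clip and over two temporal halves."""
    codes = np.stack([encode_frame(f) for f in frames])   # (T, 4, 64)
    whole = codes.max(axis=(0, 1))                        # all time, all space
    half = len(frames) // 2
    first, second = codes[:half].max(axis=(0, 1)), codes[half:].max(axis=(0, 1))
    return np.concatenate([whole, first, second])         # 192-dim descriptor

frames = rng.random((10, 8, 8))               # fake 10-frame tactile sequence
print(st_hmp(frames).shape)                   # -> (192,)
```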

Biography: Marianna Madry is a Ph.D. candidate at the Computer Vision and Active Perception (CVAP) Lab at the Royal Institute of Technology (KTH) in Sweden, advised by Professor Danica Kragic. Her research spans the areas of robotics and computer vision. She is interested in developing a representation of household objects that serves a wide range of robotics applications, such as object detection and classification, inferring object affordances, and object grasping and manipulation. Recently, she has been visiting the RSE Lab at the University of Washington in the USA, working with Dieter Fox and Liefeng Bo. She was also involved in the EU GRASP project, directed towards the development of cognitive robots capable of performing grasping and manipulation tasks. The project involved collaboration with the High Performance Humanoid Technologies Lab at the Karlsruhe Institute of Technology (KIT), Germany, and the Vision4Robotics Lab at the Vienna University of Technology (TUW), Austria.

Structure Discovery in Robotics with Demonstrations and Active Learning
Scott Niekum (Carnegie Mellon University) 12/18/2014

Abstract: Future co-robots in the home and workplace will require the ability to quickly characterize new tasks and environments without the intervention of expert engineers. Human demonstrations and active learning can play complementary roles when learning complex, multi-step tasks in novel environments—demonstrations are a fast, natural way to broadly provide human insight into task structure and environmental dynamics, while active learning can fine-tune models by exploiting the robot’s knowledge of its own internal representations and uncertainties. Using these complementary data sources, I will focus on three types of structure discovery that can help robots quickly produce robust control strategies for novel tasks: 1) learning high-level task descriptions from unstructured demonstrations, 2) inferring physically-grounded models of task goals and environmental dynamics, and 3) interactive perception for refinement of physically-grounded models. These techniques draw from Bayesian nonparametrics, time series analysis, filtering, and control theory to characterize complex tasks like IKEA furniture assembly that challenge the state of the art in manipulation.
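As a toy illustration of the first kind of structure discovery, the sketch below segments a single 1D trajectory by recursive changepoint detection, splitting wherever a two-piece linear fit beats a one-piece fit by a margin. This is only a stand-in: the Bayesian nonparametric machinery referenced in the abstract (e.g., the BP-AR-HMM used in Niekum's work) jointly segments many multi-dimensional demonstrations and shares segment models across them. The thresholds and the synthetic trajectory are illustrative assumptions.

```python
# Toy sketch of carving a demonstration into segments. Niekum's approach uses
# Bayesian nonparametric time-series models to jointly segment many
# demonstrations; this stand-in recursively splits a 1D trajectory wherever a
# two-piece linear fit beats a one-piece fit by a margin. The thresholds and
# the synthetic reach/pause/retract trajectory are illustrative assumptions.
import numpy as np

def fit_cost(y):
    """Sum-of-squares error of the best single linear fit to y."""
    t = np.arange(len(y))
    resid = np.polyval(np.polyfit(t, y, 1), t) - y
    return float(resid @ resid)

def segment(y, start=0, min_len=10, margin=0.5):
    """Return changepoint indices (relative to the full trajectory)."""
    if len(y) <= 2 * min_len:
        return []
    k = min(range(min_len, len(y) - min_len),
            key=lambda s: fit_cost(y[:s]) + fit_cost(y[s:]))
    gain = fit_cost(y) - (fit_cost(y[:k]) + fit_cost(y[k:]))
    if gain > margin * fit_cost(y):       # split only on a large improvement
        return segment(y[:k], start) + [start + k] + segment(y[k:], start + k)
    return []

# Synthetic demonstration: reach (ramp up), pause (plateau), retract (ramp down).
noise = 0.01 * np.random.default_rng(1).standard_normal(120)
y = np.concatenate([np.linspace(0.0, 1.0, 40),
                    np.full(40, 1.0),
                    np.linspace(1.0, 0.2, 40)]) + noise
print(segment(y))  # changepoints expected near samples 40 and 80
```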

Biography: Scott Niekum is a postdoctoral fellow at the Carnegie Mellon Robotics Institute, working with Chris Atkeson. He received his Ph.D. in Computer Science from the University of Massachusetts Amherst in 2013 under the supervision of Andrew Barto, and his B.S. from Carnegie Mellon University in 2005. His research interests include learning from demonstration, robotic manipulation, time-series analysis, and reinforcement learning.