Organizers: Connor Schenck, Maya Cakmak, Dieter Fox
Abstract: We have developed DART, a general framework for tracking articulated objects, such as human bodies, human hands, and robots, with RGB-D sensors. We take a generative-model approach, in which the model is an extension of the recently popular signed distance function representation to articulated objects. Articulated poses are estimated via gradient descent on an error function that combines a standard articulated ICP formulation with additional terms penalizing violations of apparent free space and model self-intersection. Importantly, all error terms are trivially parallelizable and are optimized on a GPU, allowing real-time performance while tracking many degrees of freedom. The practical applicability of the fast and accurate tracking provided by DART has been demonstrated in a robotics application in which live estimates of robot hands and of a target object are used to plan and execute grasps.
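For illustration, an objective of this general form (the weights and the exact shape of each penalty term here are assumptions of this sketch, not the precise DART formulation) could be written as

    E(\theta) = \sum_i \mathrm{SDF}\big(W(x_i;\theta)\big)^2
              + \lambda_{\mathrm{fs}} \sum_j \phi_{\mathrm{free}}(y_j;\theta)
              + \lambda_{\mathrm{si}} \sum_k \phi_{\mathrm{int}}(c_k;\theta),
    \qquad \theta \leftarrow \theta - \alpha \, \nabla_\theta E(\theta),

where W(x_i;\theta) warps observed depth points into the model frame for the articulated-ICP term, \phi_{\mathrm{free}} penalizes model surface points that fall into observed free space, and \phi_{\mathrm{int}} penalizes interpenetration between the model's parts; the pose \theta is refined by gradient descent.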
Biography: Tanner Schmidt is a graduate student in Computer Science and Engineering at the University of Washington, working with Dieter Fox in the Robotics and State Estimation lab. His primary interests are robotics, computer vision, and artificial intelligence. He received his bachelor's degree in Electrical and Computer Engineering and Computer Science from Duke University in 2012, and began at UW in the fall of 2012.
Abstract: Although imitation learning is a powerful technique for robot learning and knowledge acquisition from naïve human users, it often suffers from the need for expensive human demonstrations. In some cases the robot has an insufficient number of useful demonstrations, while in others its learning ability is limited by the number of users it directly interacts with. We propose an approach that overcomes these shortcomings by using crowdsourcing to collect a wider variety of examples from a large pool of human demonstrators online. We present a new goal-based imitation learning framework which utilizes crowdsourcing as a major source of human demonstration data. We demonstrate the effectiveness of our approach experimentally on a scenario where the robot learns to build 2D object models on a table from basic building blocks using knowledge gained from local demonstrators and online crowd workers. In addition, we show how the robot can use this knowledge to support human-robot collaboration tasks such as goal inference through object-part classification and missing-part prediction. We report results from a user study involving fourteen local demonstrators and hundreds of crowd workers on 16 different model building tasks.
Biography: Mike Chung is a third year graduate student at UW in CSE. His research interests are human-robot interaction and machine learning. His advisors are Rajesh Rao and Maya Cakmak, and he has collaborated with Dieter Fox, Su-In Lee and Jeff Bilmes.
Abstract: According to the cortical homunculus, our hand function requires over one quarter of the brain power allocated to the whole body's motor/sensory activities. The evolutionary role of the human hand is more than just being the manipulation tool that allows us to physically interact with the world. Recent studies show that our hands can also affect the mirror neuron system that enables us to cognitively learn and imitate the actions of others. However, state-of-the-art technologies only allow us to make cosmetically true-to-life prosthetic hands with cadaver-like stiff joints made of mechanical substitutes, and very few research groups know how to design robotic hands that closely mimic the salient biological features of the human hand. The goal of our project is to reduce the cognitive and physical discrepancy in cases where we need a pair of hands to interact with a different environment remotely. Our project will try to answer the following questions: With the great advances in 3D-printing technologies and promising new materials for artificial muscles and ligaments, can we design a personalized anthropomorphic robotic hand that possesses all the favorable functions of our very own hand? With such a robotic hand, can we reduce the control space and establish an easy mapping for the human user to control it effectively? Is it possible to teleoperate the robotic hand to perform amazingly dexterous tasks without force feedback, as surgical robots have demonstrated? To answer these questions, we will investigate the design and control of our proposed anthropomorphic robotic hand.
Biography: Zhe (Joseph) Xu is a graduate student at the University of Washington's Movement Control Laboratory, working under the supervision of Emanuel Todorov and Joshua Smith. His research interests are in the areas of biomimetics, soft robotics, rehabilitation robotics, control systems, and robotic surgery. He holds degrees in three different fields: mechanical engineering, bioengineering, and computer science & engineering. His current research focuses on designing and analyzing highly biomimetic robots with biological “soft” artificial joints through rapid prototyping technologies like 3D scanning and printing.
Abstract: Dexterous hand manipulation is one of the most complex types of biological movement, and has proven very difficult to replicate in robots. The usual approaches to robotic control – following predefined trajectories or planning online with reduced models – are not applicable here. Dexterous manipulation is so sensitive to small variations in contact forces and object location that it seems to require online planning without any simplifications. This entails searching in high-dimensional spaces full of discontinuities (due to contacts and constraints) and dynamic phenomena (such as rolling, sliding, and deformation). This talk will introduce ‘Dimensionality Augmentation’ as a primary tool for synthesizing complex and expressive behaviors in high-dimensional, non-smooth search spaces. Although somewhat counterintuitive, these methods involve smartly augmenting the dimensionality of an already high-dimensional search space in order to make the problem more amenable to optimization despite the curse of dimensionality. Optimizers make quick progress along these augmented dimensions first, and then search the neighborhood exhaustively. Unlike other methods, which hinder dexterity by constraining the search space, the faster convergence and improved search over the full search space result in more expressive and dynamic behaviors. Dimensionality Augmentation, in association with other tools, enabled us to demonstrate for the first time online planning (i.e., model-predictive control) with a full physics model of a humanoid hand with 28 degrees of freedom and 48 pneumatic actuators. Results include full-hand behaviors like prehensile and non-prehensile object manipulation and finger-focused behaviors like typing. In both cases the input to the system is a high-level task description, while all details of the hand movement emerge from fully automated online numerical optimization.
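As a minimal sketch of the receding-horizon idea mentioned above (not the authors' system: the rollout interface, cost terms, and the simple stochastic improvement step are assumptions of this illustration), one planning step of model-predictive control could look like:

    import numpy as np

    def rollout_cost(x0, controls, simulate, task_cost):
        """Simulate a candidate control sequence and accumulate the task cost."""
        x, total = x0, 0.0
        for u in controls:
            x = simulate(x, u)        # one step of the full physics model
            total += task_cost(x, u)  # e.g. object-to-target distance plus effort
        return total

    def mpc_step(x0, u_init, simulate, task_cost, iters=20, sigma=0.05):
        """One receding-horizon step: improve the control sequence, return the first action."""
        u, best = u_init.copy(), rollout_cost(x0, u_init, simulate, task_cost)
        for _ in range(iters):                         # simple stochastic local improvement
            cand = u + sigma * np.random.randn(*u.shape)
            cost = rollout_cost(x0, cand, simulate, task_cost)
            if cost < best:
                u, best = cand, cost
        return u[0], np.roll(u, -1, axis=0)            # execute u[0], warm-start the rest

At each control cycle the first action is executed on the hand and the remaining sequence is reused to warm-start the next optimization.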
Biography: Vikash Kumar is a graduate student at the University of Washington's Movement Control Lab, working under the supervision of Prof. Emanuel Todorov. He previously completed his undergraduate and master's degrees in Mathematics and Scientific Computing at the Indian Institute of Technology, Kharagpur. His research interests lie in developing bio-mimetic systems and behavior synthesis for dexterous hand manipulation.
Abstract: General-purpose robots can perform a range of useful tasks in human environments. However, programming them requires many hours of expert work, and programming a robot to perform robustly in any possible environment is infeasible. We describe a system that allows non-expert users to program the robot for their specific environment. We show that the system, implemented for the PR2 mobile manipulator, is intuitive and can be used by users unfamiliar with robotics. We further extend the system into a visual programming language, RoboFlow, that allows looping, branching, and nesting of programs. We demonstrate the generalizability and error handling properties of RoboFlow programs on everyday mobile manipulation tasks.
Biography: Sofia Alexandrova is a third-year graduate student in Computer Science and Engineering at the University of Washington. She is part of the Human-Centered Robotics lab, working with Maya Cakmak. Her main interests in robotics are human-robot interaction and programming by demonstration. She received a Masters degree in Software Engineering from St. Petersburg Academic University in 2011, and her bachelor's degree in Physics from St. Petersburg Polytechnic University in 2009.
Abstract: We present a method for automatic synthesis of interactive real-time controllers, applicable to complex three-dimensional characters. The same method is able to generate stable and realistic behaviors in a range of diverse tasks -- swimming, flying, biped and quadruped walking. It does not require motion capture or task-specific features or state machines. Instead, our method creates controllers de novo just from the physical model of the character and the definition of the control objectives. The controller is a neural network, having a large number of feed-forward units that learn elaborate state-action mappings, and a small number of recurrent units that implement memory states beyond the physical state of the character. The action generated by the network is defined as velocity. Thus the network is not learning a control policy, but rather the physics of the character under an implicit policy. Learning relies on a combination of supervised neural network training and trajectory optimization. Essential features include noise injected during training, training for unexpected changes in the task specification, and using the trajectory optimizer to obtain optimal feedback gains in addition to optimal actions. Although training is computationally expensive and relies on cloud and GPU computing, the interactive animation can run in real-time on any processor once the network parameters are learned. This is joint work with Kendall Lowrey, Galen Andrew, Zoran Popovic, and Emanuel Todorov.
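As a schematic illustration of the supervised part of such a pipeline (the recurrent memory units, feedback-gain targets, and cloud-scale training described above are omitted; the network size, noise level, and optimizer settings are assumptions of this sketch, not the actual system):

    import torch
    import torch.nn as nn

    def train_policy_network(states, targets, noise_std=0.01, epochs=100):
        """states, targets: float tensors of (state, action) pairs produced by trajectory optimization."""
        net = nn.Sequential(nn.Linear(states.shape[1], 256), nn.Tanh(),
                            nn.Linear(256, 256), nn.Tanh(),
                            nn.Linear(256, targets.shape[1]))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(epochs):
            noisy = states + noise_std * torch.randn_like(states)  # noise injected during training
            loss = ((net(noisy) - targets) ** 2).mean()            # supervised regression to optimized actions
            opt.zero_grad()
            loss.backward()
            opt.step()
        return net

The trained network can then be evaluated at interactive rates, since a forward pass is cheap compared to the offline trajectory optimization that generated the training data.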
Biography: Igor Mordatch is a graduate student at the University of Washington's Graphics and Imaging Laboratory, working under the supervision of Emanuel Todorov and Zoran Popovic. He previously completed his undergraduate and master's degrees in Computer Science and Mathematics at the University of Toronto. His research interests lie in the use of physics-based methods, optimization, and machine learning techniques for graphics content creation, robotics, and biomechanics.
Abstract: We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGB-D scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the updated model in real time. Because we do not require a template or other prior scene model, the approach is applicable to a wide range of moving objects and scenes. In this talk I will outline how DynamicFusion works and motivate some of the things we hope to do with it soon.
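As a schematic illustration of what a dense warp field can look like in practice, the sketch below blends per-node rigid transforms to move canonical-frame points into the live frame; the node parameterization, Gaussian weighting, and simple linear blending here are assumptions of the sketch, not the actual DynamicFusion machinery:

    import numpy as np

    def warp_points(points, node_centers, node_rotations, node_translations, sigma=0.05):
        """points: (N, 3) canonical-frame points; each warp node carries a rotation matrix and translation."""
        warped = np.zeros_like(points)
        for i, p in enumerate(points):
            d2 = np.sum((node_centers - p) ** 2, axis=1)
            w = np.exp(-d2 / (2 * sigma ** 2))
            w /= w.sum() + 1e-12                       # normalized blending weights
            warped[i] = sum(wk * (Rk @ p + tk)         # blend per-node rigid transforms
                            for wk, Rk, tk in zip(w, node_rotations, node_translations))
        return warped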
Biography: Richard Newcombe is a Postdoctoral research associate at the University of Washington, working on computer vision with Steve Seitz and Dieter Fox. He researched Robot Vision for his Ph.D. with Andrew Davison and Murray Shanahan at Imperial College, London, and before that he studied with Owen Holland at the University of Essex where he received his BSc. and MSc. in robotics, machine learning and embedded systems.
Abstract: End users expect appropriate robot actions, interventions, and requests for human assistance. As with most technologies, robots that behave in unexpected and inappropriate ways face misuse, abandonment, and sabotage. Complicating this challenge are human misperceptions of robot capability, intelligence, and performance. This talk will summarize work from several projects focused on this human-robot interaction challenge. Findings and examples will be shown from work on human trust in robots, deceptive robot behavior, robot motion, and robot characteristics. It is also important to examine the human-robot system, rather than just the robot. To this end, it is possible to draw lessons learned from related work in crowdsourcing (e.g., Tiramisu Transit) to help inform methods for enabling and supporting contributions by end users and local experts.
Biography: Aaron Steinfeld is an Associate Research Professor in the Robotics Institute (RI) at Carnegie Mellon University. He received his BSE, MSE, and Ph.D. degrees in Industrial and Operations Engineering from the University of Michigan and completed a Post Doc at U.C. Berkeley. He is the Co-Director of the Rehabilitation Engineering Research Center on Accessible Public Transportation (RERC-APT), Director of the DRRP on Inclusive Cloud and Web Computing, and the area lead for transportation related projects in the Quality of Life Technology Center (QoLT). His research focuses on operator assistance under constraints, i.e., how to enable timely and appropriate interaction when technology use is restricted through design, tasks, the environment, time pressures, and/or user abilities. His work includes intelligent transportation systems, crowdsourcing, human-robot interaction, rehabilitation, and universal design.
Abstract: We will present an overview of omnidirectional vision, whose major advantage over conventional systems is its wide field of view. In particular, we will discuss catadioptric systems, which combine conic mirrors with conventional cameras. As with any other vision system, the ultimate goal is to provide useful 3D information about the environment. In order to achieve this goal, several hierarchical steps are performed. In this talk we will cover several of these steps, from camera calibration to the two-view geometry of such systems and their combination with conventional cameras. Moreover, we will show higher-level applications using this type of system, such as robot localization, image stabilization, and SLAM, and their advantages over conventional systems.
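The abstract does not commit to a particular camera model; as one common choice for central catadioptric systems, the sketch below shows the unified (sphere) projection model, with illustrative parameter names:

    import numpy as np

    def catadioptric_project(X, xi, K):
        """X: (3,) point in the camera frame; xi: mirror parameter; K: (3, 3) intrinsic matrix."""
        Xs = X / np.linalg.norm(X)                         # project the point onto the unit sphere
        x, y, z = Xs
        m = np.array([x / (z + xi), y / (z + xi), 1.0])    # re-project from a point shifted by xi along the axis
        u = K @ m                                          # apply conventional pinhole intrinsics
        return u[:2] / u[2]

Setting xi = 0 recovers an ordinary perspective camera, which is one reason this model is convenient when combining catadioptric and conventional views.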
Biography: Luis Puig is a post-doctoral researcher in the Department of Computer Science & Engineering at the University of Washington under the supervision of Prof. Dieter Fox. He obtained his PhD degree from the University of Zaragoza, in the Robotics, Perception and Real Time group. He is interested in omnidirectional vision, visual odometry, SLAM, object recognition and Structure from Motion.