The Robotics Colloquium features talks by invited and local researchers on all aspects of robotics, including control, perception, machine learning, mechanical design, and interaction. The colloquium is held on Fridays from 1:30 to 2:30pm. Special seminars outside this schedule are indicated below. Refreshments are served.
If you would like to give a talk in upcoming Robotics Colloquia, please contact Maya Cakmak. If you would like to get regular email announcements and reminders about the robotics colloquium speakers, please sign up for the Robotics@UW mailing list.
Autumn 2019 Organizers: Tapomayukh Bhattacharjee, Maya Cakmak, Dieter Fox, Siddhartha S. Srinivasa
Abstract: Everyday tasks combine discrete and geometric decision-making. The robotics, AI, and formal methods communities have concurrently explored different planning approaches, producing techniques with different capabilities and trade-offs. We identify the combinatorial and geometric challenges of planning for everyday tasks, develop a hybrid planning algorithm, and implement an extensible planning framework. In ongoing work, we are extending this task-motion framework to uncertain and open-world planning.
Biography: Neil T. Dantam is an Assistant Professor of Computer Science at the Colorado School of Mines. His research focuses on robot planning and manipulation, covering task and motion planning, quaternion kinematics, discrete policies, and real-time software design. Previously, Neil was a Postdoctoral Research Associate in Computer Science at Rice University working with Prof. Lydia Kavraki and Prof. Swarat Chaudhuri. Neil received a Ph.D. in Robotics from Georgia Tech, advised by Prof. Mike Stilman, and B.S. degrees in Computer Science and Mechanical Engineering from Purdue University. He has worked at iRobot Research, MIT Lincoln Laboratory, and Raytheon. Neil received the Georgia Tech President's Fellowship, the Georgia Tech/SAIC paper award, an American Control Conference '12 presentation award, and was a Best Paper and Mike Stilman Award finalist at HUMANOIDS '14.
Abstract: There is an essential tension between model-based and model-free approaches to robot system design. Over the years, robotics research has produced many powerful models and algorithms for robot perception, state estimation, planning, and control. At the same time, model-free deep learning has recently brought unprecedented success in domains such as visual perception and object manipulation, where model-based approaches struggle despite decades of research. In this talk, we will look at several ideas aimed at unifying model-based and model-free approaches to robot system construction. We embed well-known robot models and algorithms -- filters, planners, controllers -- in neural networks and train the networks end-to-end from data; as a result, we (i) improve the robustness of a model-based algorithm by learning a model optimized specifically for the algorithm and (ii) improve the data efficiency of learning by incorporating the algorithm as a structural prior. Further, the uniform network representation enables us to compose multiple system modules in a convenient and scalable manner through learning.
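To make the end-to-end idea concrete, here is a minimal, hypothetical sketch in the spirit of a differentiable histogram filter: the Bayes predict/correct recursion is fixed, while the motion and observation models are learned, so gradients flow through the unrolled filter and the models are optimized for the filtering task itself. The class name, layer sizes, and training data below are illustrative assumptions, not the speaker's architecture.

    import torch
    import torch.nn as nn

    class DifferentiableHistogramFilter(nn.Module):
        """Bayes filter over a discretized 1-D state space. The filter
        recursion is fixed; the motion and observation models are learned."""

        def __init__(self, n_states, obs_dim):
            super().__init__()
            # Learned transition logits act as the motion model.
            self.transition_logits = nn.Parameter(torch.zeros(n_states, n_states))
            # A small MLP maps raw observations to per-state likelihoods.
            self.obs_model = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_states))

        def forward(self, observations):
            # observations: (T, obs_dim) -> beliefs: (T, n_states)
            n_states = self.transition_logits.shape[0]
            belief = torch.full((n_states,), 1.0 / n_states)  # uniform prior
            transition = torch.softmax(self.transition_logits, dim=1)
            beliefs = []
            for obs in observations:
                belief = transition.t() @ belief                          # predict
                belief = belief * torch.softmax(self.obs_model(obs), -1)  # correct
                belief = belief / belief.sum()                            # normalize
                beliefs.append(belief)
            return torch.stack(beliefs)

    # End-to-end training: supervising the belief with the true discrete state
    # optimizes the learned models specifically for the filtering algorithm.
    filt = DifferentiableHistogramFilter(n_states=50, obs_dim=16)
    optimizer = torch.optim.Adam(filt.parameters(), lr=1e-3)
    obs_seq = torch.randn(100, 16)               # placeholder sensor sequence
    true_states = torch.randint(0, 50, (100,))   # placeholder ground truth
    loss = nn.functional.nll_loss(torch.log(filt(obs_seq) + 1e-9), true_states)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because the filter structure is built in, far less data is needed than for an unstructured network, while the learned models can absorb errors an analytical model would make.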
Biography: David Hsu is a professor of computer science at the National University of Singapore (NUS) and a member of the NUS Graduate School for Integrative Sciences & Engineering. He is an IEEE Fellow. His research spans robotics and AI. In recent years, he has been working on robot planning and learning under uncertainty for human-centered robots. He received a BSc in Computer Science & Mathematics from the University of British Columbia and a PhD in Computer Science from Stanford University. At NUS, he co-founded the NUS Advanced Robotics Center and has since served as its Deputy Director. He has held visiting positions at the MIT Aeronautics & Astronautics Department and the CMU Robotics Institute. He has chaired or co-chaired several major international robotics conferences, including the International Workshop on the Algorithmic Foundations of Robotics (WAFR) in 2004 and 2010, Robotics: Science & Systems (RSS) 2015, and the IEEE International Conference on Robotics & Automation (ICRA) 2016.
Biography: Bill Townsend is President & CEO of Barrett Technology, which he founded in 1988 to advance the state of human-machine interaction. He holds a dozen issued patents in the US, Europe, and Japan and won the prestigious Robotic Industries Association Joseph Engelberger Award in Technology for his 1987 design of the first haptics-capable robot. This device (the “WAM®” arm) was also chosen as the world’s most advanced robot in the Millennium Edition of the Guinness Book of Records. He earned his PhD and MS degrees in engineering at the Massachusetts Institute of Technology Artificial Intelligence Laboratory (now CSAIL) and a BS in mechanical engineering at Northeastern University.
Abstract: Dieter Fox will provide a short introduction to the work going on at the NVIDIA robotics lab, followed by talks on specific projects. In this first part, we'll cover three projects.

Clemens Eppner: A Billion Ways to Grasp: An Evaluation of Grasp Sampling Schemes on a Dense, Physics-based Grasp Data Set
With the increasing speed and quality of physics simulations, generating large-scale grasping data sets that feed learning algorithms is becoming more and more popular. An often overlooked question is how to generate the grasps that make up these data sets. We review, classify, and compare different grasp sampling strategies based on a fine-grained discretization of SE(3) and physics-based simulation of the corresponding parallel-jaw grasps.

Yu Xiang: PoseRBPF: A Rao-Blackwellized Particle Filter for 6D Object Pose Tracking
Tracking the 6D poses of objects in videos provides rich information to a robot performing manipulation tasks. In this work, we formulate the 6D object pose tracking problem in the Rao-Blackwellized particle filtering framework, where the 3D rotation and the 3D translation of an object are decoupled. This factorization allows our approach, called PoseRBPF, to efficiently estimate the 3D translation of an object along with the full distribution over the 3D rotation. This is achieved by discretizing the rotation space in a fine-grained manner and training an auto-encoder network to construct a codebook of feature embeddings for the discretized rotations. As a result, PoseRBPF can track objects with arbitrary symmetries while still maintaining adequate posterior distributions. Our approach achieves state-of-the-art results on two 6D pose estimation benchmarks.

Ankur Handa and Karl Van Wyk: DexPilot: Depth-Based Teleoperation of Dexterous Robotic Hand-Arm System
Teleoperation imbues lifeless robotic systems with sophisticated reasoning skills, intuition, and creativity. However, current teleoperation solutions for high degree-of-actuation (DoA), multi-fingered robots are generally cost-prohibitive, while low-cost offerings usually offer reduced degrees of control. Herein, a low-cost, depth-based teleoperation system, DexPilot, is developed that allows for complete control over the full 23-DoA robotic system by merely observing the bare human hand. DexPilot enabled operators to solve a variety of complex manipulation tasks that go beyond simple pick-and-place operations, and performance was measured through speed and reliability metrics. It cost-effectively enables the production of high-dimensional, multi-modality state-action data that can be leveraged in the future to learn sensorimotor policies for challenging manipulation tasks.
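For a flavor of the Rao-Blackwellized factorization behind PoseRBPF, the hypothetical sketch below keeps particles over the 3D translation while each particle carries a full discrete distribution over pre-discretized rotations, scored against a codebook of embeddings. The embed() function, the random codebook, and all shapes and constants are placeholders standing in for the trained auto-encoder, not the paper's implementation.

    import numpy as np

    N_PARTICLES, N_ROT, EMB = 100, 192, 128
    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((N_ROT, EMB))  # stand-in for the trained
    codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # codebook

    def embed(image, translation):
        """Placeholder for cropping the image around the projected
        translation hypothesis and encoding it with the auto-encoder."""
        v = rng.standard_normal(EMB)
        return v / np.linalg.norm(v)

    def poserbpf_step(translations, weights, rot_dists, image):
        # Propagate translation particles with a random-walk motion model.
        translations = translations + 0.01 * rng.standard_normal(translations.shape)
        for i, t in enumerate(translations):
            feat = embed(image, t)
            # Likelihood over ALL discretized rotations at once, via
            # temperature-scaled codebook similarity.
            rot_lik = np.exp(codebook @ feat / 0.1)
            # The particle weight marginalizes the observation over rotation,
            # which is what lets symmetric objects keep multi-modal
            # rotation posteriors.
            weights[i] *= float(rot_lik @ rot_dists[i])
            # Conditional (Rao-Blackwellized) rotation update per particle.
            rot_dists[i] = rot_dists[i] * rot_lik
            rot_dists[i] /= rot_dists[i].sum()
        weights /= weights.sum()
        return translations, weights, rot_dists   # resampling step elided

    translations = 0.1 * rng.standard_normal((N_PARTICLES, 3))
    weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
    rot_dists = np.full((N_PARTICLES, N_ROT), 1.0 / N_ROT)
    translations, weights, rot_dists = poserbpf_step(
        translations, weights, rot_dists, image=None)

Decoupling rotation from translation this way keeps the particle count small: translation is tracked by sampling, while the rotation posterior is computed exactly over the discretization.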
Abstract: In this second part, we'll cover two projects.

Arsalan Mousavian: 6DoF GraspNet: Variational Grasp Generation for Object Manipulation
Generating grasp poses is a crucial component of any robot object manipulation task. In this talk, I will present our latest work on grasping unknown objects from point clouds. We formulate grasp generation as sampling a set of grasps with a variational autoencoder, then assessing and refining the sampled grasps with a grasp evaluator model. Both the grasp sampler and the grasp refinement networks take as input 3D point clouds observed by a depth camera. Our model is trained purely in simulation and works in the real world without any extra steps. Extensions of the work to other manipulation tasks will be briefly discussed.

Nathan Ratliff: Riemannian Motion Policies for Fast and Reactive Motion Generation
In the modern era of collaborative robots, fast reaction and adaptation to the uncertainties of human interaction are critical. I'll present our framework for quickly generating adaptive, collision-free behavior using what we call Riemannian Motion Policies (RMPs). Rather than relying on computationally intensive search processes to generate behaviors, policies are encoded compactly as part of the geometry of a curved space, so that they arise naturally as geodesics (generalized straight lines). For instance, RMPs encode obstacle avoidance by modeling how obstacles warp their surroundings, resulting in massive speedups over standard search- or optimization-based planning. I'll present the framework and demonstrate a number of real-world deployments on a variety of manipulation platforms. I'll also present some recent work on incorporating the mathematical structure of RMPs into robot learning techniques to encourage generalization.
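As a rough illustration of the combination rule at the heart of the RMP framework, the sketch below resolves several policies, each contributing a desired acceleration a_i and a Riemannian metric M_i, into a single motion via the metric-weighted average qdd = (sum_i M_i)^+ (sum_i M_i a_i). The attractor and repulsor forms and their gains are made-up toy policies, not those from the talk, and the task-space Jacobian pullback of the full framework is omitted for brevity.

    import numpy as np

    def goal_attractor(q, qd, goal, kp=10.0, kd=4.0):
        a = kp * (goal - q) - kd * qd          # PD pull toward the goal
        M = np.eye(len(q))                     # uniform priority everywhere
        return a, M

    def obstacle_repulsor(q, qd, obs, radius=0.5):
        d = q - obs
        dist = max(np.linalg.norm(d), 1e-9)
        u = d / dist
        a = 100.0 * max(0.0, radius - dist) * u   # push radially away
        # The metric grows near the obstacle, so this policy dominates
        # there and fades to zero influence outside its radius.
        w = max(0.0, 1.0 - dist / radius)
        M = w * np.outer(u, u)
        return a, M

    def resolve(rmps):
        """Metric-weighted combination of (acceleration, metric) pairs."""
        M_sum = sum(M for _, M in rmps)
        f_sum = sum(M @ a for a, M in rmps)
        return np.linalg.pinv(M_sum) @ f_sum   # pseudoinverse handles rank loss

    # Simple Euler rollout combining the two policies.
    q, qd = np.array([0.0, 0.0]), np.array([0.0, 0.0])
    goal, obs = np.array([2.0, 2.0]), np.array([1.0, 0.6])
    for _ in range(200):
        qdd = resolve([goal_attractor(q, qd, goal),
                       obstacle_repulsor(q, qd, obs)])
        qd += 0.01 * qdd
        q += 0.01 * qd

Each step is a closed-form weighted average rather than a search, which is where the speedups over search- or optimization-based planning come from.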