Spring 2015 Colloquium

Organizers: Connor Schenck, Maya Cakmak, Dieter Fox

Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor
Neil Lebeck and Natalie Brace (UW CSE) 04/17/2015

Abstract: This article provides a tutorial introduction to modeling, estimation, and control for multirotor aerial vehicles, including the common four-rotor or quadrotor case. Aerial robotics is a fast-growing field, and multirotor aircraft such as the quadrotor are rapidly gaining popularity. In fact, quadrotor aerial robotic vehicles have become a standard platform for robotics research worldwide. They already have sufficient payload and flight endurance to support a number of indoor and outdoor applications, and improvements in battery and other technologies are rapidly expanding the scope for commercial opportunities. They are highly maneuverable and enable safe and low-cost experimentation in mapping, navigation, and control strategies for robots that move in three-dimensional (3-D) space. This ability to move in 3-D space brings new research challenges compared with the wheeled mobile robots that have driven mobile robotics research over the last decade. Small quadrotors have been demonstrated for exploring and mapping 3-D environments; transporting, manipulating, and assembling objects; and acrobatic tricks such as juggling, balancing, and flips. Additional rotors can be added, leading to generalized N-rotor vehicles, to improve payload and reliability.
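
To make the modeling/control discussion concrete, here is a minimal sketch (my own illustration, not from the tutorial) of the standard quadrotor "mixer" for a plus-shaped configuration: collective thrust and the three body torques map linearly to the four rotor thrusts, and the map is inverted to command individual rotors. The arm length L and drag-to-thrust ratio k are assumed placeholder values.

```python
import numpy as np

L, k = 0.25, 0.016  # assumed arm length [m] and rotor drag/thrust coefficient

# Rows: total thrust, roll torque, pitch torque, yaw torque; columns: rotors 1-4.
MIX = np.array([
    [ 1,  1,  1,  1],
    [ 0, -L,  0,  L],
    [ L,  0, -L,  0],
    [-k,  k, -k,  k],
])

def rotor_thrusts(total_thrust, tau_roll, tau_pitch, tau_yaw):
    """Invert the mixer to obtain the four individual rotor thrusts."""
    wrench = np.array([total_thrust, tau_roll, tau_pitch, tau_yaw])
    return np.linalg.solve(MIX, wrench)

print(rotor_thrusts(total_thrust=20.0, tau_roll=0.0, tau_pitch=0.1, tau_yaw=0.0))
```

The same structure generalizes to N-rotor vehicles, where the mixer becomes a wider matrix and the inversion is replaced by a least-squares allocation.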

LSD-SLAM: Large-Scale Direct Monocular SLAM
Peter Henry (UW CSE) 04/24/2015

Abstract: We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art direct methods, allows building large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real time as a pose graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift-aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on sim(3), thereby explicitly detecting scale drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values in tracking. The resulting direct monocular SLAM system runs in real time on a CPU.
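
As a rough illustration of the "direct image alignment" ingredient, the sketch below (my own, not the LSD-SLAM code) computes the photometric residuals that such alignment minimizes: semi-dense reference pixels are warped into the current frame using their inverse depth and a candidate pose, and intensities are compared. For simplicity the pose is an SE(3) rotation/translation pair here, whereas LSD-SLAM tracks on sim(3); the pinhole intrinsics fx, fy, cx, cy are assumed inputs.

```python
import numpy as np

def photometric_residuals(I_ref, I_cur, pixels, inv_depth, R, t, fx, fy, cx, cy):
    """Return I_ref(p) - I_cur(warp(p)) for each semi-dense reference pixel p."""
    residuals = []
    for (u, v), rho in zip(pixels, inv_depth):
        # Back-project the reference pixel to a 3-D point (depth = 1 / rho).
        p = np.array([(u - cx) / fx, (v - cy) / fy, 1.0]) / rho
        q = R @ p + t                      # transform into the current frame
        if q[2] <= 0:                      # point behind the camera: skip
            continue
        u2 = fx * q[0] / q[2] + cx         # project into the current image
        v2 = fy * q[1] / q[2] + cy
        if 0 <= int(v2) < I_cur.shape[0] and 0 <= int(u2) < I_cur.shape[1]:
            residuals.append(I_ref[v, u] - I_cur[int(v2), int(u2)])
    return np.array(residuals)
```

In the full system these residuals are weighted by the per-pixel depth uncertainty and minimized iteratively over the pose parameters.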

Probabilistic Segmentation and Targeted Exploration of Objects in Cluttered Environments
Dan Butler (UW CSE) 05/01/2015

Abstract: Creating robots that can act autonomously in dynamic, unstructured environments requires dealing with novel objects. Thus, an off-line learning phase is not sufficient for recognizing and manipulating such objects. Rather, an autonomous robot needs to acquire knowledge through its own interaction with its environment, without using heuristics encoding human insights about the domain. Interaction also allows information that is not present in static images of a scene to be elicited. Out of a potentially large set of possible interactions, a robot must select actions that are expected to have the most informative outcomes to learn efficiently. In the proposed bottom-up, probabilistic approach, the robot achieves this goal by quantifying the expected informativeness of its own actions in information-theoretic terms. We use this approach to segment a scene into its constituent objects. We retain a probability distribution over segmentations. We show that this approach is robust in the presence of noise and uncertainty in real-world experiments. Evaluations show that the proposed information-theoretic approach allows a robot to efficiently determine the composite structure of its environment. We also show that our probabilistic model allows straightforward integration of multiple modalities, such as movement data and static scene features. Learned static scene features allow for experience from similar environments to speed up learning for new scenes.
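
To illustrate the information-theoretic action selection the abstract describes, here is a minimal sketch (my own illustration, not the paper's code): each candidate action is scored by the expected reduction in entropy of a distribution over segmentation hypotheses. The predictive function passed as predict(action) is a hypothetical placeholder for the robot's forward model of interaction outcomes.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(prior, outcome_probs, posteriors):
    """prior: P(hypothesis); outcome_probs: P(outcome | action);
    posteriors: P(hypothesis | action, outcome) for each outcome."""
    h_prior = entropy(prior)
    h_expected = sum(po * entropy(post) for po, post in zip(outcome_probs, posteriors))
    return h_prior - h_expected

def best_action(prior, actions, predict):
    """Pick the action with maximal expected information gain; predict(action)
    returns (outcome_probs, posteriors) from a hypothetical forward model."""
    return max(actions, key=lambda a: expected_information_gain(prior, *predict(a)))
```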

Statistical Machine Learning for Autonomous Systems and Robots
Marc Deisenroth (Imperial College, London) 05/07/2015

Abstract: Statistical machine learning has been a promising direction in control and robotics for more than a decade, since learning models and controllers from data allows us to reduce the amount of engineering knowledge that is otherwise required. In real systems, such as robots, the many experiments often required by machine learning and reinforcement learning methods can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, pre-shaped policies, or the underlying dynamics. In the first part of the talk, I follow a different approach and speed up learning by efficiently extracting information from sparse data. In particular, I propose to learn a probabilistic, non-parametric Gaussian process dynamics model. By explicitly incorporating model uncertainty in long-term planning and controller learning, my approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art reinforcement learning, my model-based policy search method achieves an unprecedented speed of learning. I demonstrate its applicability to autonomous learning from scratch in real robot and control tasks. In the second part of my talk, I will discuss an alternative method for learning controllers for bipedal locomotion based on Bayesian optimization, where it is hard to learn models of the underlying dynamics due to ground contacts. Using Bayesian optimization, we sidestep this modeling issue and directly optimize the controller parameters without the need to model the robot's dynamics. In the third part of my talk, I will discuss state estimation in dynamical systems (filtering and smoothing) from a machine learning perspective. I will present a unifying view on Bayesian latent-state estimation, which allows us both to re-derive common filters (e.g., the Kalman filter) and to devise novel smoothing algorithms for dynamical systems. I will demonstrate the applicability of this approach to intention inference in robot table tennis.
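
For readers unfamiliar with the probabilistic, non-parametric dynamics models mentioned in the first part of the talk, here is a minimal sketch (not the speaker's code) of Gaussian process regression with an RBF kernel; the hyperparameters are fixed arbitrary values for illustration, whereas in practice they would be optimized.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, signal_var=1.0):
    """RBF (squared-exponential) kernel matrix between row-wise inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, X_star, noise_var=1e-2):
    """Posterior mean and variance of the GP at test inputs X_star."""
    K = rbf(X, X) + noise_var * np.eye(len(X))
    K_s = rbf(X, X_star)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha
    v = np.linalg.solve(K, K_s)
    var = np.diag(rbf(X_star, X_star)) - np.sum(K_s * v, axis=0)
    return mean, var

# Toy example: learn a 1-D "dynamics" mapping x_t -> x_{t+1} from a few transitions.
X = np.linspace(-2, 2, 8).reshape(-1, 1)
y = np.sin(X).ravel()
mean, var = gp_posterior(X, y, np.array([[0.5]]))
```

The predictive variance is what makes the approach data-efficient: long-term plans and controllers can be penalized for visiting regions where the model is uncertain.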

Biography: Dr Marc Deisenroth is an Imperial College Junior Research Fellow and head of the Statistical Machine Learning Group in the Department of Computing at Imperial College London (UK). From December 2011 to August 2013, he was a Senior Research Scientist at TU Darmstadt (Germany). From February 2010 to December 2011, he was a full-time Research Associate at the University of Washington (Seattle). He completed his PhD at the Karlsruhe Institute of Technology (Germany). Marc conducted his PhD research at the Max Planck Institute for Biological Cybernetics (2006-2007) and at the University of Cambridge (2007-2009). Marc was Program Chair of the European Workshop on Reinforcement Learning (EWRL) in 2012 and Workshops Chair of Robotics: Science & Systems (RSS) in 2013. His interdisciplinary research expertise centers on machine learning, control, robotics, and signal processing.

Reinforcement Learning in Robotics: A Survey
Arunkumar Byravan and Kendall Lowrey (UW CSE) 05/15/2015

Abstract: Reinforcement learning offers a framework and set of tools for the design of sophisticated and hard-to-engineer behaviours. In the general reinforcement learning setting, an agent tries to autonomously discover an optimal behaviour through trial-and-error interactions with its environment. Instead of explicitly detailing the solution to a problem, in reinforcement learning the designer of a control task provides feedback in terms of a scalar objective function that measures the one-step performance of the agent. Robotics as a reinforcement learning domain differs considerably from most well-studied reinforcement learning benchmark problems. Problems in robotics are often high-dimensional, have continuous state and action spaces, and are only partially observable. Additionally, experience on robots is tedious to obtain, expensive, and often hard to reproduce. In spite of these difficulties, there have been many successful applications of reinforcement learning to robotics. In this talk, we attempt to give a high-level overview of reinforcement learning for robotics. In the first part of the talk, we go over the formulation of the reinforcement learning problem and its inherent challenges compared to other machine learning problems. We discuss two approaches to solving the reinforcement learning problem (value-function methods and policy search) and their applicability to the robotics domain. In the second part of the talk, we discuss a few recent applications of reinforcement learning to learning robot tasks. We conclude by highlighting some open questions and practical difficulties in applying reinforcement learning to robotics.
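
As a minimal sketch (my illustration, not from the survey) of the two solution families the talk contrasts, the snippet below shows a tabular Q-learning update (value-function methods) and a finite-difference policy-gradient estimate (policy search); rollout_return is a hypothetical function that runs the parameterized policy and reports its empirical return.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One temporal-difference update of the action-value table Q[s, a]."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def finite_difference_gradient(theta, rollout_return, eps=1e-2):
    """Estimate the policy gradient by perturbing each policy parameter."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (rollout_return(theta + d) - rollout_return(theta - d)) / (2 * eps)
    return grad
```

The contrast matters for robotics: tabular value functions do not scale to continuous, high-dimensional state spaces, while policy search works directly in a low-dimensional parameter space at the cost of many rollouts.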

ICRA 2015 Practice Talk: RoboFlow: A Flow-based Visual Programming Language for Mobile Manipulation Tasks
Sonya Alexandrova, Zach Tatlock, Maya Cakmak 05/22/2015

Abstract: General-purpose robots can perform a range of useful tasks in human environments; however, programming them to robustly function in all possible environments that they might encounter is infeasible. Instead, our research aims to develop robots that can be programmed by their end-users in their context of use, so that the robot needs to robustly function in only one particular environment. This requires intuitive ways in which end-users can program their robot. To that end, this paper contributes a flow-based visual programming language, called RoboFlow, that allows programming of generalizable mobile manipulation tasks. RoboFlow is designed to (i) ensure a robust low-level implementation of program procedures on a mobile manipulator, and (ii) restrict the high-level programming as much as possible to avoid user errors while enabling expressive programs that involve branching, looping, and nesting. We present an implementation of RoboFlow on a PR2 mobile manipulator and demonstrate the generalizability and error-handling properties of RoboFlow programs on everyday mobile manipulation tasks in human environments.
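
To give a feel for what "flow-based" means here, the sketch below is a hypothetical, stripped-down program representation of my own (not the RoboFlow implementation): each node wraps a robot procedure and names successor nodes for its possible outcomes, which naturally expresses branching and looping.

```python
def run_flow(nodes, start):
    """nodes: {name: (procedure, {outcome: next_node_name})}; each procedure
    returns an outcome string. Execution ends at a node with no successor
    registered for its outcome."""
    current = start
    while current is not None:
        procedure, successors = nodes[current]
        outcome = procedure()
        current = successors.get(outcome)

# Example: retry a grasp until it succeeds, then place the object.
program = {
    "grasp": (lambda: "success", {"success": "place", "failure": "grasp"}),
    "place": (lambda: "success", {}),
}
run_flow(program, "grasp")
```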

ICRA 2015 Practice Talk: Robot Programming by Demonstration with Situated Spatial Language Understanding
Maxwell Forbes, Rajesh Rao, Luke Zettlemoyer, Maya Cakmak 05/22/2015

Abstract: Robot Programming by Demonstration (PbD) allows users to program a robot by demonstrating the desired behavior. Providing these demonstrations typically involves moving the robot through a sequence of states, often by physically manipulating it. This requires users to be co-located with the robot and have the physical ability to manipulate it. In this paper, we present a natural language based interface for PbD that removes these requirements and enables hands-free programming. We focus on programming object manipulation actions—our key insight is that such actions can be decomposed into known types of manipulator movements that are naturally described using spatial language; e.g., object reference expressions and prepositions. Our method takes a natural language command and the current world state to infer the intended movement command and its parametrization. We implement this method on a two-armed mobile manipulator and demonstrate the different types of manipulation actions that can be programmed with it. We compare it to a kinesthetic PbD interface and we demonstrate our method’s ability to deal with incomplete language.
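
As a loose, hypothetical illustration (not the paper's model) of grounding spatial language in the world state, the sketch below scores each object against the words of a command and picks the best match, a simplified stand-in for inferring the intended movement command and its parametrization.

```python
def ground_object(command_words, objects):
    """objects: list of dicts with a 'name' and a set of 'attributes'."""
    def score(obj):
        terms = {obj["name"], *obj["attributes"]}
        return sum(1 for w in command_words if w in terms)
    return max(objects, key=score)

world = [
    {"name": "cup", "attributes": {"red", "small"}},
    {"name": "box", "attributes": {"blue", "large"}},
]
target = ground_object("move above the red cup".split(), world)
```

The paper's actual method jointly reasons about the object reference, the preposition, and the manipulator movement type rather than scoring word overlap.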

ICRA 2015 Practice Talk: Semi-autonomous Simulated Brain Tumor Ablation with RavenII Surgical Robot using Behavior Tree
Danying Hu, Yuanzheng Gong, Blake Hannaford, Eric J. Seibel 05/22/2015

Abstract: Medical robots are widely used to assist surgeons in carrying out dexterous surgical tasks in various ways. Most of these tasks require the surgeon's direct or indirect operation. A certain level of autonomy in robotic surgery could not only free the surgeon from tedious, repetitive tasks but also exploit the robot's advantages: high dexterity and accuracy. This paper presents a semi-autonomous neurosurgical procedure of brain tumor ablation using the RAVEN Surgical Robot and stereo visual feedback. Using the behavior tree framework, the whole surgical task is modeled flexibly and intelligently as the nodes and leaves of a behavior tree. The paper makes three main contributions: (1) describing brain tumor ablation as an ideal candidate for autonomous robotic surgery, (2) modeling and implementing the semi-autonomous surgical task with the behavior tree framework, and (3) designing an experimental simulated ablation task for a feasibility study and robot performance analysis.
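
For readers unfamiliar with behavior trees, here is a minimal sketch (my illustration, not the RAVEN code) of the abstraction: leaves return a status on each tick, and composite nodes such as sequences and selectors combine their children's statuses. The example task decomposition is hypothetical.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Leaf:
    def __init__(self, action):
        self.action = action            # callable returning a status
    def tick(self):
        return self.action()

class Sequence:
    """Succeeds only if every child succeeds, in order."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Selector:
    """Returns the first non-failure child status; fails only if all children fail."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Hypothetical decomposition: locate the simulated tumor, then ablate;
# fall back to surgeon teleoperation if the autonomous branch fails.
tree = Selector([
    Sequence([Leaf(lambda: SUCCESS),    # locate target via stereo feedback
              Leaf(lambda: SUCCESS)]),  # move tool and ablate
    Leaf(lambda: RUNNING),              # fall back to surgeon teleoperation
])
print(tree.tick())
```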

ICRA 2015 Practice Talk: Sensor-Aided Teleoperated Grasp of Transparent Objects
Kevin Huang, Liang-Ting Jiang, Joshua R. Smith, Howard Jay Chizeck 05/22/2015

Abstract: This paper presents a method of augmenting streaming point cloud data with pretouch proximity sensor information for the purposes of teleoperated grasping of transparent targets. When using commercial RGB-Depth (RGB-D) cameras, material properties can significantly affect depth measurements. In particular, transparent objects are difficult to perceive with RGB images and commercially available depth sensors. Geometric information about such objects needs to be gathered with additional sensors, and in many scenarios it is of interest to gather this information without physical contact. In this work, a non-contact pretouch sensor fixed to the robot end effector is used to sense and explore physical geometries previously unobserved. Thus, the point cloud representation of an unknown, transparent grasp target can be enhanced through telerobotic exploration in real time. Furthermore, real-time haptic rendering algorithms and haptic virtual fixtures used in combination with the augmented streaming point clouds assist the teleoperator in collision avoidance during exploration. Theoretical analyses are performed to design virtual fixtures suitable for pretouch sensing, and experiments show the effectiveness of this method to gather geometry data without collision and eventually to successfully grasp a transparent object.
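
A minimal sketch (hypothetical, not the paper's pipeline) of the augmentation step: points detected by the end-effector pretouch sensor are transformed into the world frame and appended to the streaming RGB-D cloud where the depth sensor reported nothing, e.g., on a transparent surface.

```python
import numpy as np

def augment_cloud(depth_cloud, pretouch_hits, T_world_from_sensor):
    """depth_cloud: (N, 3) points in the world frame; pretouch_hits: (M, 3)
    points in the pretouch sensor frame; T_world_from_sensor: 4x4 transform."""
    hits_h = np.hstack([pretouch_hits, np.ones((len(pretouch_hits), 1))])
    hits_world = (T_world_from_sensor @ hits_h.T).T[:, :3]
    return np.vstack([depth_cloud, hits_world])
```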

ICRA 2015 Practice Talk: Depth-Based Tracking with Physical Constraints for Robot Manipulation
Tanner Schmidt, Katharina Hertkorn, Richard Newcombe, Zoltan Marton, Michael Suppa, Dieter Fox 05/22/2015

Abstract: This work integrates visual and physical constraints to perform real-time depth-only tracking of articulated objects, with a focus on tracking a robot’s manipulators and manipulation targets in realistic scenarios. As such, we extend DART, an existing visual articulated object tracker, to additionally avoid interpenetration of multiple interacting objects, and to make use of contact information collected via torque sensors or touch sensors. To achieve greater stability, the tracker uses a switching model to detect when an object is stationary relative to the table or relative to the palm and then uses information from multiple frames to converge to an accurate and stable estimate. Deviation from stable states is detected in order to remain robust to failed grasps and dropped objects. The tracker is integrated into a shared autonomy system in which it provides state estimates used by a grasp planner and the controller of two anthropomorphic hands. We demonstrate the advantages and performance of the tracking system in simulation and on a real robot. Qualitative results are also provided for a number of challenging manipulations that are made possible by the speed, accuracy, and stability of the tracking system.
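
To illustrate the switching idea described above, here is a minimal sketch of my own (not the DART code): when an object's estimated pose barely changes relative to a reference frame such as the table or palm, it is declared stationary and recent estimates are averaged for stability; a large deviation resets the window so tracking remains robust to dropped objects. The threshold and window size are assumed values.

```python
import numpy as np

class StabilitySwitch:
    def __init__(self, motion_threshold=0.005, window=10):
        self.threshold = motion_threshold   # metres of motion per frame
        self.window = window
        self.history = []

    def update(self, position):
        """position: 3-vector estimate of the object's position this frame."""
        self.history.append(np.asarray(position))
        self.history = self.history[-self.window:]
        if len(self.history) < 2:
            return position, False
        motion = np.linalg.norm(self.history[-1] - self.history[-2])
        stationary = motion < self.threshold
        if not stationary:
            self.history = [self.history[-1]]   # reset on large deviation
        estimate = np.mean(self.history, axis=0)
        return estimate, stationary
```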

ICRA 2015 Practice Talk: Efficient Leader Selection for Translation and Scale of a Bearing-Compass Formation
Eric Schoof, Airlie Chapman, Mehran Mesbahi 05/22/2015

Abstract: The paper considers the efficient selection of leader agents in a swarm running a distributed bearing-compass formation controller. The leaders apply external control which induces translation and scaling of the formation, providing manipulation methods useful to a human operator. The selection algorithm for maximizing translation and scale draws from modularity and submodularity theory. Consequently, the algorithms exhibit guaranteed optimal and suboptimal performance, respectively. For more restricted human-swarm interaction requiring pure translation and scale, a relaxed integer programming algorithm is described to reduce the combinatorial optimization problem to a computationally tractable semidefinite program. The leader selection strategies are supported through demonstration on a swarm testbed.
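
As background for the suboptimality guarantee mentioned above, the sketch below (my illustration, not the paper's specific algorithm) shows generic greedy leader selection for a monotone submodular set objective, the standard setting in which such bounds hold; objective(S) is a hypothetical placeholder for the formation's translation/scale metric.

```python
def greedy_leaders(agents, k, objective):
    """Greedily pick k leaders, each time adding the agent with the largest
    marginal gain in the (assumed monotone submodular) objective."""
    selected = []
    for _ in range(k):
        best = max((a for a in agents if a not in selected),
                   key=lambda a: objective(selected + [a]) - objective(selected))
        selected.append(best)
    return selected

# Toy example with a coverage-style objective over assumed agent "influence" sets.
influence = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
coverage = lambda S: len(set().union(*[influence[a] for a in S])) if S else 0
print(greedy_leaders([1, 2, 3], 2, coverage))
```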

A Strictly Convex Hull for Computing Proximity Distances With Continuous Gradients
Jim Youngquist (UW CSE) 06/05/2015

Abstract: We propose a new bounding volume that achieves a tunable strict convexity of a given convex hull. This geometric operator is named the sphere-tori-patches bounding volume (STP-BV), the acronym for a bounding volume made of patches of spheres and tori. The strict convexity of STP-BV guarantees a unique pair of witness points and at least C1 continuity of the distance function resulting from a proximity query with another convex shape. Consequently, the gradient of the distance function is continuous. This is useful for integrating distance as a constraint in robotic motion planners or controllers that use smooth optimization techniques. For the sake of completeness, we compare performance in smooth and nonsmooth optimization on examples of growing complexity involving distance queries between pairs of convex shapes.
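
To show why a C1 distance function matters for smooth optimization, here is a minimal sketch of my own (not STP-BV itself) using two spheres as stand-ins for strictly convex bodies: the pairwise clearance and its gradient are well defined, so a minimum-clearance requirement can be folded into a gradient-based motion optimizer as a smooth penalty. The margin value is an assumption for illustration.

```python
import numpy as np

def sphere_distance(c1, r1, c2, r2):
    """Clearance between two spheres and its gradient w.r.t. the first center.
    Assumes the centers do not coincide, so the direction is unique."""
    diff = c1 - c2
    d = np.linalg.norm(diff)
    clearance = d - (r1 + r2)
    grad_c1 = diff / d                  # unique direction: distance is C1 here
    return clearance, grad_c1

def clearance_penalty_grad(c1, r1, c2, r2, margin=0.05):
    """Gradient of a quadratic penalty that activates when clearance < margin."""
    clearance, grad = sphere_distance(c1, r1, c2, r2)
    if clearance >= margin:
        return np.zeros(3)
    return 2.0 * (clearance - margin) * grad
```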