Organizers: Dieter Fox, Maya Cakmak, Siddhartha S. Srinivasa, Kat Steele, Sam Burden
Abstract: Underwater gliders, propeller-driven submersibles, and other marine robots are increasingly being tasked with gathering information (e.g., in environmental monitoring, offshore inspection, and coastal surveillance scenarios). However, in most of these scenarios, human operators must carefully plan the mission to ensure completion of the task. Strict human oversight not only makes such deployments expensive and time-consuming but also makes some tasks impossible, due to the heavy cognitive load on the operator or the need for reliable communication between the operator and the vehicle. We can mitigate these limitations by making the robotic information gatherers semi-autonomous: the human provides high-level input to the system, and the vehicle fills in the details of how to execute the plan. These capabilities increase the tolerance for operator neglect, reduce deployment cost, and open up new domains for information gathering. In this talk, I will show how a general framework that unifies information-theoretic optimization and physical motion planning makes semi-autonomous information gathering feasible in marine environments. I will leverage techniques from stochastic motion planning, adaptive decision making, and deep learning to provide scalable solutions in a diverse set of applications such as underwater inspection, ocean search, and ecological monitoring. The techniques discussed here make it possible for autonomous marine robots to “go where no one has gone before,” allowing for information gathering in environments previously outside the reach of human divers.
Biography: Geoffrey A. Hollinger is an Assistant Professor in the School of Mechanical, Industrial & Manufacturing Engineering at Oregon State University. His current research interests are in adaptive information gathering, distributed coordination, and learning for autonomous robotic systems. He has previously held research positions at the University of Southern California, Intel Research Pittsburgh, the University of Pennsylvania’s GRASP Laboratory, and NASA's Marshall Space Flight Center. He received his Ph.D. (2010) and M.S. (2007) in Robotics from Carnegie Mellon University and his B.S. in General Engineering along with his B.A. in Philosophy from Swarthmore College (2005). He is a recipient of the 2017 Office of Naval Research Young Investigator Program (YIP) award.
Abstract: The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll start by introducing a new high speed autonomous “rally car” platform built at Georgia Tech, and discuss an off-road racing task that requires impressive sensing, speed, and agility to complete. I will discuss two approaches to this problem, one based on model predictive control and one based on learning deep policies that directly map images to actions. Along the way I’ll introduce new tools from reinforcement learning, imitation learning, and online learning and show how theoretical insights help us to overcome some of the practical challenges involved in learning on a real-world platform. I will conclude by discussing ongoing work in my lab related to machine learning for robotics.
Biography: Byron Boots is an Assistant Professor in the College of Computing at Georgia Tech. He directs the Georgia Tech Robot Learning Lab, affiliated with the Center for Machine Learning and the Institute for Robotics and Intelligent Machines. Byron’s research focuses on the development of theory and systems that tightly integrate perception, learning, and control. He received his Ph.D. in Machine Learning from Carnegie Mellon University and held a postdoctoral research position in Computer Science and Engineering at the University of Washington. His research has won several awards, including the Best Paper award at ICML 2010, and was a finalist for Best Paper at ICRA 2017.
Abstract: For autonomous robots to act as assistants in homes, offices, and other locations of daily life, they must be able to fluently manipulate objects designed for use by humans. Multi-fingered and anthropomorphic robotic hands have been designed to be well suited for manipulating such objects. In this talk I will present results from several recent studies examining different aspects of the autonomous multi-fingered manipulation problem. The methods presented combine machine learning and motion planning, drawing on both visual and tactile perception. I will first present two methods, one learning-based and the other planning-based, for the task of in-hand manipulation, where a robot must move a grasped object to a new position using only the dexterity available in its fingers. I will then present a novel deep neural-network approach to planning multi-fingered grasps for previously unseen objects. Our novel neural-network architecture enables the robot to autonomously perform motion planning inside the neural network, as a form of probabilistic inference.
Biography: Tucker Hermans is an assistant professor in the School of Computing at the University of Utah, where he is a member of the University of Utah Robotics Center. Previously, he was a postdoctoral researcher in the Intelligent Autonomous Systems lab at TU Darmstadt in Darmstadt, Germany. There he worked with Jan Peters on tactile manipulation and robot learning, while serving as the team leader at TU Darmstadt for the European Commission project TACMAN. Professor Hermans was at Georgia Tech from 2009 to 2014 in the School of Interactive Computing. There he earned his Ph.D. in Robotics under the supervision of Aaron Bobick and Jim Rehg in the Computational Perception Laboratory. His dissertation research dealt with robots learning to discover and manipulate previously unknown objects. At Georgia Tech he also earned an M.Sc. in Computer Science with a specialization in Computational Perception and Robotics. He earned his A.B. in German and Computer Science from Bowdoin College in 2009.
Abstract: We seek the ultimate goal of having self-sufficient autonomous service mobile robots working in human environments, performing tasks accurately and robustly. Successfully deploying such robots requires addressing several challenges in mapping, localization, navigation, and autonomous exception recovery. The key to robust execution in all these sub-problems is to expect and anticipate changes in the environment, the deployment conditions, and algorithmic limitations. In this talk, I shall present our recent research along two broad themes: algorithms for robust navigation of long-term autonomous mobile robots, and algorithms to ensure that they remain autonomous over extended periods of time. In particular, I shall present several algorithms for long-term mapping, localization, joint perception and planning, and autonomous fault recovery. These algorithms have enabled our robots to autonomously traverse more than a thousand kilometers while performing tasks at multiple universities.
Biography: Joydeep Biswas is an Assistant Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. He earned his Ph.D. in Robotics from Carnegie Mellon University in 2014, and prior to that a B.Tech. in Engineering Physics from the Indian Institute of Technology, Bombay in 2008. Professor Biswas' research on autonomous service mobile robots and robot soccer has been covered in several media outlets and news articles.
Abstract: Laziness is defined as "the quality of being unwilling to work". It is a common approach used in many algorithms (and by many graduate students) where work, or computation, is delayed until absolutely necessary. In the context of motion planning, this idea has frequently been used to reduce the computational cost of testing whether a robot collides with obstacles, an operation that governs the running time of many motion-planning algorithms. In this talk, I will describe and analyze several algorithms that use this simple yet effective idea to dramatically improve over the state of the art. A by-product of lazily performing collision detection is a shift of the computational burden in motion-planning algorithms from collision detection to nearest-neighbor search or to graph search. This induces new challenges, which I will also address in my talk: Can we employ application-specific nearest-neighbor data structures tailored for lazy motion-planning algorithms? Do we need to be completely lazy (with respect to collision detection), or should we balance laziness with, say, graph operations? The talk is based on collaboration with Dan Halperin, Michal Kleinbort, Siddhartha Srinivasa, Aditya Vamsikrishna, Ariel Procaccia, Nika Haghtalab, and Simon Mackenzie.
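The abstract's core idea can be illustrated with a toy sketch of lazy edge evaluation on a roadmap graph: search first, and only collision-check the edges on the candidate shortest path, replanning when a checked edge turns out to be blocked. This is a hypothetical, self-contained illustration of the general technique, not any of the speaker's algorithms; the graph, edge costs, and the `in_collision` stub are invented for demonstration.

```python
import heapq

def dijkstra(adj, start, goal, blocked):
    """Shortest path on an undirected graph, skipping edges known to be blocked."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if frozenset((u, v)) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None

checks = 0  # counts calls to the (notionally expensive) collision checker

def in_collision(edge):
    global checks
    checks += 1
    return edge == frozenset((2, 3))  # pretend this edge crosses an obstacle

def lazy_shortest_path(adj, start, goal):
    """Lazily evaluate edges: only check collisions along candidate paths."""
    blocked, known_free = set(), set()
    while True:
        path = dijkstra(adj, start, goal, blocked)
        if path is None:
            return None
        for u, v in zip(path, path[1:]):
            e = frozenset((u, v))
            if e in known_free:
                continue
            if in_collision(e):
                blocked.add(e)  # invalidate and replan
                break
            known_free.add(e)
        else:
            return path  # every edge on the path is collision-free

# Toy roadmap: (u, v, cost); the cheapest route 0-2-3 uses the blocked edge.
edges_list = [(0, 1, 2), (0, 2, 1), (1, 3, 2), (2, 3, 1), (0, 4, 3), (4, 3, 5)]
adj = {}
for u, v, w in edges_list:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

path = lazy_shortest_path(adj, 0, 3)
```

Here the planner validates only four edges (two per candidate path) instead of all six, and the gap widens dramatically on dense roadmaps. An eager planner would check every edge before searching; this is the cost shift the abstract describes, from collision detection to repeated graph search.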
Biography: Oren Salzman completed his PhD in the School of Computer Science at Tel Aviv University, under the supervision of Prof. Dan Halperin. He is currently a postdoctoral researcher at Carnegie Mellon University working with Siddhartha Srinivasa and Maxim Likhachev. His research focuses on robot motion planning; specifically, on revisiting classical computer science algorithms, tools, and paradigms to address the computational challenges that arise when planning motions for real-world robots. Combining techniques from diverse domains such as computational geometry, surface simplification, random geometric graphs, graph theory, and machine learning, he strives to provide efficient algorithms with rigorous analysis for robot systems with many degrees of freedom moving in tight quarters. He earned his BSc with honors from the Technion and his MSc with honors from Tel Aviv University.