Organizers: Tapomayukh Bhattacharjee, Maya Cakmak, Dieter Fox, Siddhartha S. Srinivasa
Abstract: Robust information exchange and trusted coordination are both critical needs for multi-robot systems acting in the real world. While these needs are universal across platforms, the computing and sensing resources of these platforms are not – making effective coordination difficult to enable, to scale, and to secure. This talk will present new methods of security and adaptive network formation for resource-constrained, mobile multi-robot systems (applications include delivery drones, mobile IoT, and robotic vehicles). This work sits at the intersection of robotics and communication; in particular, we study ways that communication technologies can be used to make resource-constrained multi-robot systems more capable. The talk will touch upon our developed technologies in 1) position control algorithms that enable multiple robots to achieve high-data-rate networks and 2) a virtual sensor for bi-directional Synthetic Aperture Radar between two communicating agents. Building upon these technologies, we develop a theoretical and experimental framework for provably thwarting spoofing attacks using communicated wireless signals in important multi-agent tasks such as consensus, coverage, and drone delivery. The talk will place particular focus on our most recent results in securing multi-agent consensus.
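As background for the consensus results highlighted above, the sketch below shows a standard weighted-average consensus update in Python. The `trust` matrix is a hypothetical stand-in for the kind of per-neighbor confidence scores the talk derives from wireless signals; this is a minimal illustration of the primitive being secured, not the speaker's algorithm.

```python
import numpy as np

def consensus_step(values, neighbors, trust, alpha=0.2):
    """One round of weighted-average consensus.

    values:    (n,) current scalar state of each agent
    neighbors: dict mapping agent i -> list of neighbor indices
    trust:     (n, n) weights in [0, 1]; trust[i, j] is how much agent i
               trusts agent j (hypothetical stand-in for wireless-signal-
               derived confidence scores, not the talk's actual method)
    """
    new_values = values.copy()
    for i, nbrs in neighbors.items():
        # Move toward trusted neighbors in proportion to disagreement
        delta = sum(trust[i, j] * (values[j] - values[i]) for j in nbrs)
        new_values[i] = values[i] + alpha * delta
    return new_values

# Toy example: 4 agents on a line graph; agent 3 broadcasts an outlier
# value (a spoofer) but is assigned low trust by everyone else.
vals = np.array([1.0, 2.0, 3.0, 100.0])
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
trust = np.ones((4, 4))
trust[:, 3] = 0.01  # downweight the suspected spoofer
for _ in range(50):
    vals = consensus_step(vals, nbrs, trust)
print(vals)  # honest agents converge near their own trusted average
```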
Biography: Stephanie is currently an Assistant Professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University (since January 2018). Prior to that, she was a research scientist in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT, where she also completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT; this collaboration resulted in two recently awarded U.S. patents on adaptive heterogeneous networks for multi-robot systems and accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.
Abstract: Adaptability is an essential skill in human cognition, enabling us to draw from our extensive, life-long experiences with various objects and tasks in order to address novel problems. To date, most robots do not have this kind of adaptability, and yet, as our expectations of robots' interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments much as humans do. In this talk I will describe my approaches to the problem of task transfer, enabling a robot to transfer a known task model to scenarios that differ in the objects used, object configurations, and task constraints. The primary contribution of my work is a series of algorithms for deriving and modeling domain-specific task information from structured interaction with a human teacher. In doing so, this work enables the robot to leverage the teacher's domain knowledge of the task (such as the contextual use of an object or tool) in order to address a range of tasks without requiring extensive exploration or retraining. By enabling a robot to ask for help in addressing unfamiliar problems, my work contributes toward a future of adaptive, collaborative robots.
Biography: Tesca Fitzgerald is a Computer Science PhD candidate in the School of Interactive Computing at the Georgia Institute of Technology. In her PhD, she has been developing algorithms and knowledge representations for robots to learn, adapt, and reuse task knowledge through interaction with a human teacher. In doing so, she applies concepts of social learning and cognition to develop robots that adapt to human environments. Tesca is co-advised by Dr. Ashok Goel (director of the Design and Intelligence Lab) and Dr. Andrea Thomaz (director of the Socially Intelligent Machines Lab). Before joining Georgia Tech in 2013, she graduated from Portland State University with a B.Sc. in Computer Science. Tesca is an NSF Graduate Research Fellow (2014), Microsoft Graduate Women Scholar (2014), and IBM Ph.D. Fellow (2017).
Abstract: In the future, everything will move. As robots come out of the factory and into human-facing contexts, how will this movement change the way human-built environments feel? Will these changes make occupants feel as though they are standing on a busy street corner or sitting by a babbling brook? The Robotics, Automation, and Dance (RAD) Lab uses the theater, and live dance performance, as a place to begin to answer these questions about the effects of non-verbal motion on the human psyche. This talk will discuss activities in the performing arts and the design of robotic systems, particularly the motion of these systems. This work highlights how broadening the palette of available robotic motion may facilitate the design of spaces of the future. Critically, the talk will discuss the interrelationship between function and expression in movement, as explicated in the Laban/Bartenieff Movement System, which highlights the role of context in how humans make meaning from motion. As motivation, the talk will pull from applications including collaborative manufacturing robots, expressive bomb-defusal robots, office building energy monitors, and care-giving robots that help the elderly age in place. Wednesday's DUB talk is meant to provoke questions that will be explored further in a workshop on Thursday and then presented in a more satisfying academic frame, offering some answers, in Friday's Robotics Colloquium.
Biography: Amy LaViers is an assistant professor in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign (UIUC) and director of the Robotics, Automation, and Dance (RAD) Lab. She is a recipient of a 2015 DARPA Young Faculty Award (YFA) and a 2017 Director's Fellowship. Her teaching has been recognized on UIUC's list of Teachers Ranked as Excellent by Their Students, with Outstanding distinction. Her choreography has been presented at the Merce Cunningham Dance Studio and in the DANCE NOW Joe's Pub Festival at The Public Theater in New York City. She is a co-founder of two startup companies: AE Machines, Inc., an automation software company that won Product Design of the Year at the 4th Revolution Awards in Chicago and was a finalist for Robot of the Year at Station F in Paris, and caali, LLC, an embodied media company. She completed a two-year Certification in Movement Analysis (CMA) in 2016 at the Laban/Bartenieff Institute of Movement Studies (LIMS). Prior to UIUC, she was an assistant professor in systems and information engineering at the University of Virginia. She completed her Ph.D. in electrical and computer engineering at Georgia Tech with a dissertation that included a live performance exploring stylized motion. Her research began in her undergraduate thesis at Princeton University, where she earned a certificate in dance and a degree in mechanical and aerospace engineering.
Abstract: Construction is one of the largest industries on the planet, employing more than 10M workers in the US each year. Yet construction is also the second least-digitized industry; most of the work is still performed with manual labor, paper-based processes, and methods that haven't changed for millennia. Dusty Robotics aspires to modernize the industry by developing robot-powered tools that automate tasks on construction sites, starting with layout automation. In this talk I'll tell the story of how Dusty Robotics originated, our journey through the customer discovery process, and our vision for how robotics will change the face of construction.
Biography: Dr. Tessa Lau is an experienced entrepreneur with expertise in AI, machine learning, and robotics. She is currently Founder/CEO at Dusty Robotics, whose mission is to address construction industry productivity by introducing robotic automation on the jobsite. Prior to Dusty, she was CTO/co-founder at Savioke, where she orchestrated the deployment of 75+ delivery robots into hotels and high-rises. Previously, Dr. Lau was a Research Scientist at Willow Garage, where she developed simple interfaces for personal robots. She also spent 11 years at IBM Research working in business process automation and knowledge capture. More generally, Dr. Lau is interested in developing technology that gives people super-powers, and building businesses that bring that technology into people’s lives. Dr. Lau holds a PhD in Computer Science from the University of Washington.
Abstract: Research on human-robot interaction to date has largely focused on examining a single human interacting with a single robot. This work has led to advances in fundamental understanding of the psychology of human-robot interaction (e.g., how specific design choices affect interactions with and attitudes towards robots) and of the effective design of human-robot interaction (e.g., how novel mechanisms or computational tools can be used to improve HRI). However, the single-robot-single-human focus of this growing body of work stands in stark contrast to the complex social contexts in which robots are increasingly placed. While robots increasingly support teamwork in settings ranging from search and rescue missions and minimally invasive surgeries to space exploration and manufacturing, we have limited understanding of how groups of people will interact with robots and how robots will affect how people interact with each other in groups and teams. In this talk I present empirical findings from several studies that show how robots can shape, in direct but also subtle ways, how people interact and collaborate with each other in teams.
Biography: Malte Jung is an Assistant Professor in Information Science at Cornell University and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow. His research focuses on the intersections of teamwork, robots, and emotion. The goal of his research is to inform our basic understanding of robots in teams as well as to inform how we design technology to support teamwork across a wide range of settings. Malte Jung received his Ph.D. in Mechanical Engineering from Stanford University. Prior to joining Cornell, Malte Jung completed a postdoc at the Center for Work, Technology, and Organization at Stanford University. He holds a Diploma in Mechanical Engineering from the Technical University of Munich.
Abstract: Humans have a remarkable way of learning, adapting, and mastering new manipulation tasks. With the current advances in Machine Learning (ML), the promise of having robots with such capabilities seems to be on the cusp of reality. Transferring human-level skills to robots, however, is complicated: these skills involve a level of complexity that cannot be tackled by classical ML methods in an unsupervised way. Such complexities involve (i) automatically decomposing tasks into control-oriented encodings, (ii) extracting invariances and handling idiosyncrasies of data acquired from human demonstrations, and (iii) learning models that guarantee stability and convergence. The main goal of my research is to devise novel techniques to learn complex tasks from demonstrations, overcoming the aforementioned challenges with (i) a high level of autonomy during learning, while (ii) providing adaptability during execution. To provide such capabilities, we propose learning and control strategies that step over traditional disciplinary boundaries, seamlessly blending concepts from control theory, robotics, and machine learning. Specifically, the techniques presented in this talk leverage Bayesian non-parametrics and kernel methods together with dynamical system (DS) theory to solve challenging open problems in the Learning from Demonstration (LfD) domain. The first part of the talk will focus on learning complex sequential manipulation tasks from demonstrations. The particular challenge is learning these tasks without any prior knowledge of the number of actions and without restrictions on how the human demonstrates the task. We showcase these algorithms on two cooking case studies in which robots are taught to roll pizza dough and peel vegetables in an almost autonomous fashion. The second part of the talk will focus on the development of novel DS formulations and learning schemes that are capable of representing and executing a complex task with a single model, without the need for switching or task discretization. The types of tasks that can be learned with these new approaches go beyond previous work in DS-based LfD and are validated on production-line and household activities, as well as adaptive navigation strategies for mobile agents and locomotion and co-manipulation tasks for biped robots. Finally, we will showcase novel joint-space learning strategies that resolve kinematic singularities and provide self-collision avoidance in multi-arm robotic systems.
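As a minimal illustration of the stability guarantee that motivates DS-based learning from demonstration, the sketch below builds a linear dynamical system x_dot = A(x - x*) with a Hurwitz matrix A (all eigenvalues have negative real part), so the goal x* is a globally asymptotically stable attractor. The learned systems in the talk are nonlinear and far richer; the matrix, step size, and names here are illustrative assumptions only.

```python
import numpy as np

def make_stable_ds(x_goal):
    """Return f(x) = A (x - x_goal) with a Hurwitz A, so that x_goal
    is a globally asymptotically stable attractor of x_dot = f(x)."""
    A = np.array([[-1.0,  0.5],
                  [-0.5, -1.0]])  # eigenvalues -1 +/- 0.5i (stable)
    return lambda x: A @ (x - x_goal)

# Roll out the DS from a perturbed start: the trajectory converges to
# the goal regardless of where execution begins, which is the kind of
# adaptability property DS-based motion generators preserve.
goal = np.array([1.0, 1.0])
f = make_stable_ds(goal)
x, dt = np.array([-2.0, 3.0]), 0.01
for _ in range(2000):
    x = x + dt * f(x)  # forward-Euler integration
print(np.allclose(x, goal, atol=1e-3))  # True
```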
Biography: Nadia Figueroa is a senior Ph.D. student in the Learning Algorithms and Systems Laboratory (LASA) at the Swiss Federal Institute of Technology in Lausanne (EPFL). Prior to this, she was a Research Assistant (2012-2013) at the Engineering Department of New York University Abu Dhabi (NYU-AD) and a Student Research Assistant (2011-2012) at the Institute of Robotics and Mechatronics (RMC) of the German Aerospace Center (DLR). She holds a B.Sc. degree in Mechatronics (2007) from Monterrey Tech (ITESM-Mexico) and an M.Sc. degree in Automation and Robotics (2012) from the Technical University of Dortmund, Germany. Her research focuses on leveraging machine learning techniques with concepts from dynamical systems theory to solve salient problems in the areas of learning from demonstration, incremental/interactive learning, human-robot collaboration, multi-robot coordination, shared autonomy and control.
Abstract: Aerial cinematography is revolutionizing industries that require live and dynamic camera viewpoints such as entertainment, sports, and security. However, safely piloting a drone while filming a moving target in the presence of obstacles is immensely taxing, often requiring multiple expert human operators. Hence, there is demand for an autonomous cinematographer that can reason about both geometry and scene context in real-time. Existing approaches do not address all aspects of this problem; they either require high-precision motion-capture systems or GPS tags to localize targets, rely on prior maps of the environment, plan for small time horizons, or only follow artistic guidelines specified before flight. In this talk, I will address the problem in its entirety and describe a complete system for real-time aerial cinematography that for the first time combines: (1) vision-based target estimation; (2) 3D signed-distance mapping for occlusion estimation; (3) efficient trajectory optimization for long time-horizon camera motion; and (4) learning-based artistic shot selection. We extensively evaluate our system both in simulation and in field experiments by filming dynamic targets moving through unstructured environments. Our results indicate that our system can operate reliably in the real world without restrictive assumptions. We also provide in-depth analysis and discussions for each module, with the hope that our design tradeoffs can generalize to other robotics applications. Videos of the complete system can be found at: https://youtu.be/ookhHnqmlaU
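To make the trajectory-optimization ingredient concrete, here is a toy cost of the kind such planners minimize: a smoothness term plus an obstacle term read from a signed distance field. The function names, weights, and hinge penalty are illustrative assumptions, not the implementation described in the talk.

```python
import numpy as np

def trajectory_cost(traj, sdf, lambda_smooth=1.0, safe_dist=2.0):
    """Toy cost for obstacle-aware camera planning: penalize jerky
    motion plus proximity to obstacles as read from a signed distance
    field (positive outside obstacles). Illustrative only.

    traj: (T, 2) waypoints; sdf: callable point -> signed distance
    """
    # Smoothness: sum of squared second differences (acceleration proxy)
    accel = traj[2:] - 2 * traj[1:-1] + traj[:-2]
    smooth_cost = lambda_smooth * np.sum(accel ** 2)
    # Obstacle cost: hinge penalty when closer than safe_dist
    dists = np.array([sdf(p) for p in traj])
    obs_cost = np.sum(np.maximum(0.0, safe_dist - dists) ** 2)
    return smooth_cost + obs_cost

# Example SDF: a single circular obstacle of radius 1 at the origin.
sdf = lambda p: np.linalg.norm(p) - 1.0
traj = np.stack([np.linspace(-5, 5, 50), np.full(50, 1.5)], axis=1)
print(trajectory_cost(traj, sdf))
```

A real planner would minimize such a cost over the waypoints (subject to dynamics and occlusion terms) rather than merely evaluate it, but the decomposition into smoothness and distance-field penalties is the standard pattern.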
Biography: Rogerio Bonatti is a Ph.D. student at The Robotics Institute, Carnegie Mellon University, advised by Prof. Sebastian Scherer. Prior to CMU, Rogerio earned a degree in Mechatronics Engineering from University of São Paulo and spent one year in a study-abroad program at Cornell University. His research focuses on developing robust real-life AI for robots, combining motion planning with machine learning and computer vision. His past work includes fully autonomous drones for cinematography. Rogerio is currently interning at Microsoft Research, and before the Ph.D. he interned at McKinsey & Co.