Winter 2014 Colloquium

Organizer: Maya Cakmak

Learning Better Models of Dynamical Systems
Byron Boots (UW CSE) 01/17/2014

Abstract: The majority of sequential data in scientific and technological domains is high-dimensional, noisy, and collected in a raw and unstructured form. In order to interpret, track, predict, or control such data, we need to hypothesize a model. For this purpose, an appealing model representation is a dynamical system. Although we can sometimes use extensive domain knowledge to write down a dynamical system, specifying a model by hand can be a time-consuming process. This motivates an alternative approach: learning dynamical systems directly from sensor data. Unfortunately, this is hard. To discover the right state representation and model parameters, we must solve difficult temporal and structural credit assignment problems. In addition, popular maximum-likelihood-based approaches to learning dynamical systems often must contend with optimization landscapes plagued by bad local optima. In this talk, I will present a number of tools that we have used to learn better dynamical system models, including predictive representations, moment-based learning algorithms, and kernel methods. These tools have allowed us to design a family of learning algorithms that are computationally efficient, statistically consistent, and have no local optima; in addition, they can be simple to implement, and have state-of-the-art practical performance for some interesting learning problems.
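
To make the flavor of the moment-based approach concrete, below is a minimal, hypothetical sketch (not the speaker's actual algorithm) of spectral learning for a toy linear dynamical system: the model is recovered from empirical second moments of the observations via an SVD rather than by iterative likelihood maximization, so the procedure has no local optima to fall into. All names, dimensions, and noise levels are assumptions made for this illustration.

```python
import numpy as np

# Illustrative sketch only (assumed toy model, not the speaker's algorithm):
# spectral, moment-based recovery of a linear dynamical system
#   x_{t+1} = A x_t + noise,   y_t = C x_t + noise
# from observations alone, in the spirit of subspace identification.

rng = np.random.default_rng(0)
n, d, T = 2, 5, 5000                       # latent dim, observation dim, sequence length
A = np.array([[0.9, 0.2], [-0.2, 0.9]])    # stable latent dynamics (ground truth)
C = rng.normal(size=(d, n))                # observation matrix (ground truth)

x = np.zeros(n)
Y = np.empty((T, d))
for t in range(T):
    Y[t] = C @ x + 0.1 * rng.normal(size=d)
    x = A @ x + 0.1 * rng.normal(size=n)

# Empirical second moment of (future, past) observation pairs -- no likelihood involved.
Sigma = Y[1:].T @ Y[:-1] / (T - 1)

# An SVD of the moment matrix exposes the latent subspace up to a change of basis;
# because this is a single linear-algebra step, there are no local optima to get stuck in.
U, s, Vt = np.linalg.svd(Sigma)
U_n = U[:, :n]                             # learned observation subspace
states = Y @ U_n                           # low-dimensional predictive state sequence
A_hat = np.linalg.lstsq(states[:-1], states[1:], rcond=None)[0].T
print("Estimated latent transition (up to a similarity transform):\n", A_hat)
```

By contrast, an EM-style maximum-likelihood fit of the same model would alternate latent-state inference with parameter updates and can converge to a poor local optimum, which is the contrast the abstract highlights.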

Biography: Byron Boots is a postdoctoral research associate in the Robotics and State Estimation Lab at the University of Washington working with Dieter Fox. He received his Ph.D. in Machine Learning from Carnegie Mellon University in 2012 under Geoffrey Gordon. His research focuses on statistical machine learning, artificial intelligence, and robotics. Byron’s work won the 2010 Best Paper award at the International Conference on Machine Learning (ICML 2010).

Integrating Robots into Team-Oriented Environments
Julie Shah (MIT) 01/24/2014

Abstract: Recent advances in computation, sensing, and hardware enable robots to perform an increasing percentage of traditionally manual tasks in manufacturing. Yet the assembly mechanic often cannot be removed entirely from the process. This provides new economic motivation to explore opportunities where assembly mechanics and industrial robots may work in close physical collaboration. In this talk, I present adaptive work-sharing and scheduling algorithms for collaborating with industrial robots at two levels: one-to-one human-robot teamwork, and factory-level sequencing and scheduling of human and robotic tasks. I discuss our recent work developing adaptive control methods that incorporate high-level, person-specific planning and execution mechanisms to promote predictable, convergent team behavior. We apply human factors modeling coupled with statistical methods for planning and control to derive quantitative methods for assessing the quality and convergence of learned teaming models, and to perform risk-sensitive robot control on the production line. I also discuss computationally efficient methods for coordinating human and robotic sequencing and scheduling at the factory level. Tight integration of human workers and robotic resources involves complex dependencies. Even relatively small increases in process time variability lead to schedule inefficiencies and performance degradation. Our methods allow fast, dynamic computation of robot tasking and scheduling to respond to people working and coordinating in shared physical space, and provide real-time guarantees that schedule deadlines and other operational constraints will be met.

Biography: Julie Shah is an Assistant Professor in the Department of Aeronautics and Astronautics at MIT and leads the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory. Shah received her SB (2004) and SM (2006) from the Department of Aeronautics and Astronautics at MIT, and her PhD (2010) in Autonomous Systems from MIT. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. She has developed innovative methods for enabling fluid human-robot teamwork in time-critical, safety-critical domains, ranging from manufacturing to surgery to space exploration. Her group draws on expertise in artificial intelligence, human factors, and systems engineering to develop interactive robots that emulate the qualities of effective human team members to improve the efficiency of human-robot teamwork. This work was recognized by MIT Technology Review as one of the 10 Breakthrough Technologies of 2013, and has received international recognition in the form of best paper awards and nominations from the International Conference on Automated Planning and Scheduling, the American Institute of Aeronautics and Astronautics, the IEEE/ACM International Conference on Human-Robot Interaction, and the International Symposium on Robotics.

Robotics & The New Cyberlaw
Ryan Calo (UW Law) 01/31/2014

Abstract: The ascendance of the Internet wrought great social, cultural, and economic changes. It also launched the academic movement known as “cyberlaw.” The themes of this movement reflect the essential qualities of the Internet, i.e., the set of characteristics that distinguish the Internet from predecessor and constituent technologies. Now a new set of technologies is ascending, one with arguably different essential qualities. This project examines how the mainstreaming of robotics—for instance, drones and driverless cars—will affect legal and policy discourse, and explores whether cyberlaw is still the right home for the resulting doctrinal and academic conversation.

Biography: Professor Calo researches the intersection of law and emerging technology, with an emphasis on robotics and the Internet. His work on drones, driverless cars, privacy, and other topics has appeared in law reviews and major news outlets, including the New York Times, the Wall Street Journal, and NPR. Professor Calo has also testified before the full Judiciary Committee of the United States Senate. Professor Calo serves on numerous advisory boards, including the Electronic Privacy Information Center (EPIC), the Electronic Frontier Foundation (EFF), the Future of Privacy Forum, and National Robotics Week. Professor Calo co-chairs the Robotics and Artificial Intelligence committee of the American Bar Association and is a member of the Executive Committee of the American Association of Law Schools (AALS) Section on Internet and Computer Law. Professor Calo previously served as a director at the Stanford Law School Center for Internet and Society (CIS) where he remains an Affiliate Scholar. He also worked as an associate in the Washington, D.C. office of Covington & Burling LLP and clerked for the Honorable R. Guy Cole on the U.S. Court of Appeals for the Sixth Circuit. Prior to law school at the University of Michigan, Professor Calo investigated allegations of police misconduct in New York City.

Distributed Algorithms for Robot Recovery, Multi-Robot Triangulation, and Advanced Low-Cost Robots
James McLurkin (Rice University) 02/07/2014

Abstract: In this talk we present results from three different projects: 1. A distributed recovery algorithm to extract a multi-robot system from complex environments. The goal is to maintain network connectivity while allowing efficient recovery. Our approach uses a maximal-leaf spanning tree as a communication and navigation backbone, and routes robots along this tree to the goal. Simulation and experimental results demonstrate the efficacy of this approach. 2. Triangulation of a region is a staple technique in almost every geometric computation. When robots triangulate their workspace, they build a "physical data structure" that supports geometric and computational reasoning about the environment using the topology of the triangulated graph. We demonstrate multi-robot triangulation construction, dual-graph navigation, and patrolling using a distributed data structure. 3. We introduce the "r-one" robot, a low-cost design suitable for research, education, and outreach. We provide tales of joy and disaster from using 90 of these platforms for our research, college courses, and museum outreach exhibits.
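
As a rough illustration of the backbone-routing idea in the first project, here is a minimal, hypothetical sketch. It substitutes an ordinary BFS spanning tree for the maximal-leaf spanning tree the talk describes, and the small communication graph is an invented example, not data from the actual system.

```python
import networkx as nx

# Illustrative sketch only: route robots to a goal along a spanning tree of the
# robot communication graph. The talk uses a maximal-leaf spanning tree; this toy
# uses a BFS spanning tree instead, over an assumed example topology.

comm_graph = nx.Graph([
    ("goal", "r1"), ("r1", "r2"), ("r2", "r3"),
    ("r1", "r4"), ("r4", "r5"), ("r3", "r5"),
])

# A spanning tree rooted at the goal acts as the navigation backbone: each robot
# moves only along tree edges, so it never leaves communication range of its parent.
backbone = nx.bfs_tree(comm_graph, source="goal")

def route_to_goal(robot):
    """Walk parent pointers up the backbone from a robot to the goal."""
    path = [robot]
    while path[-1] != "goal":
        parent = next(backbone.predecessors(path[-1]))
        path.append(parent)
    return path

print(route_to_goal("r5"))   # e.g. ['r5', 'r4', 'r1', 'goal']
```

A maximal-leaf spanning tree maximizes the number of leaf nodes, so as few robots as possible must serve as interior relay nodes of the backbone, leaving more robots free to move toward the goal while connectivity is preserved.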

Biography: James McLurkin is an Assistant Professor at Rice University in the Department of Computer Science. His research focuses on developing distributed algorithms for multi-robot systems: software that produces complex group behaviors from the interactions of many simple individuals. These ideas are not new: ants, bees, wasps, and termites have been running this type of software for 120 million years. His research group has one of the largest collections of robots in the world, with over 200 different robots at last count. The new r-one robots are an advanced open-source platform to support the "Robots for Everyone" movement. McLurkin was a Lead Research Scientist at iRobot, and was the 2003 recipient of the Lemelson-MIT Student Prize for invention. He holds an S.B. in Electrical Engineering with a minor in Mechanical Engineering from M.I.T., an M.S. in Electrical Engineering from the University of California, Berkeley, and an S.M. and a Ph.D. in Computer Science from M.I.T.

Towards Ubiquitous Robots
Mihai Jalobeanu (Microsoft Research) 02/21/2014

Abstract: Despite significant advances in robotics research, commercial robots are nowhere near as pervasive as we hoped and imagined. This talk explores some of the reasons behind this apparent discrepancy and discusses the approach taken by Microsoft Robotics to bridge the gap between research and commercialization.

Biography: Mihai Jalobeanu leads the Microsoft Robotics development team. He joined Microsoft in 1998 as a software developer and worked on email servers and cloud services before joining the Robotics group in 2012. His interests include software reliability, large-scale systems, and machine learning, particularly as applied to autonomous navigation and manipulation. Mihai received a B.S. degree in computer engineering in 1996 and an M.S. degree in computer science in 1997, both from the Technical University of Cluj-Napoca, Romania.

Talking to Robots: Learning to Ground Human Language in Perception and Execution
Cynthia Matuszek (UW CSE) 02/28/2014

Abstract: Advances in computation, sensing, and hardware are enabling robots to perform an increasing variety of tasks in ever less constrained settings. It is now possible to consider near-term robots that will operate in traditionally human-centric spaces. If these robots understand language, they can take instructions and learn about tasks from nonspecialists; at the same time, a robot's real-world interactions can help it learn to better understand physically grounded language. Combining these areas is a fundamentally multidisciplinary problem, involving natural language processing, machine learning, robotics, and human-robot interaction. In this talk, I describe my work on learning natural language in a physical context; such language, learned from end users, allows a person to communicate their needs in a natural, unscripted fashion. I demonstrate that this approach can enable a robot to follow directions, learn about novel objects in the world, and perform simple tasks such as navigating an unfamiliar map or putting away objects, entirely from examples provided by users.

Biography: Cynthia Matuszek is a Ph.D. candidate in the University of Washington Computer Science and Engineering department, where she is a member of both the Robotics and State Estimation lab and the Language, Interaction, and Learning group. She earned a B.S. in Computer Science from the University of Texas at Austin, and an M.Sc. from the University of Washington in 2009. She has published in the areas of artificial intelligence, robotics, and human-robot interaction.

Social and Moral Relationships with Robots
Peter H. Kahn, Jr. (UW Psychology) 03/14/2014

Abstract: As social robots – and, more broadly, personified computational environments, as in the smart car and smart home of the future – become more prevalent, they will pose significant social and moral challenges. In this presentation, I’ll discuss some of my lab’s psychological research on this topic. My lab’s studies are in collaboration with Hiroshi Ishiguro and Takayuki Kanda, using ATR’s humanoid robot, Robovie. I’ll present video clips from three empirical studies that investigated three questions, respectively: (a) Do children and adults believe that humanoid robots can have moral standing? (b) Do adults hold humanoid robots morally accountable for causing harm to humans? and (c) Can adults form psychologically intimate relationships with humanoid robots such that they will keep a robot’s secret from a human experimenter? Then I’ll speak about a current project wherein we seek to provide a new vision for HRI: how interacting with networked social robots can enhance human creativity. I’ll suggest that in HRI, and more broadly HCI, we need to hold out a vision of creating technology so that people can flourish. In my view, that involves integrating exponential technological growth with deep, authentic connection with other humans, and with a natural world that we're destroying too quickly, and at our peril.

Biography: Peter H. Kahn, Jr. is Professor in the Department of Psychology and Director of the Human Interaction with Nature and Technological Systems (HINTS) Laboratory at the University of Washington. He is also Editor-in-Chief of the academic journal Ecopsychology. His research seeks to address two world trends that are powerfully reshaping human existence: (1) the degradation, if not destruction, of large parts of the natural world, and (2) unprecedented technological development, both in terms of its computational sophistication and pervasiveness. He received his Ph.D. from the University of California, Berkeley in 1988. His publications have appeared in such journals as Child Development, Developmental Psychology, Human-Computer Interaction, and the Journal of Systems and Software, as well as in such proceedings as CHI, HRI, and Ubicomp. His five books (all with MIT Press) include Technological Nature: Adaptation and the Future of Human Life (2011). His research projects are currently funded by the National Science Foundation (http://faculty.washington.edu/pkahn/).

Amazon Prime Air
Gur Kimchi (Amazon) 03/21/2014

Abstract: We're excited to share Prime Air — something our team has been working on in our next generation R&D lab right here in Seattle. The goal of this new delivery system is to get packages into customers' hands in 30 minutes or less using unmanned aerial vehicles. Putting Prime Air into commercial use will take some number of years as we advance the technology and work with the Federal Aviation Administration (FAA) on necessary rules and regulations. From a technology point of view, we'll be ready to enter commercial operations as soon as the necessary regulations are in place. One day, Prime Air vehicles will be as normal as seeing mail trucks on the road today.

Biography: Gur Kimchi is the VP of Profit Systems and Prime Air at Amazon.com. Gur joined Amazon’s Worldwide Retail Systems organization in 2012, building key platforms to manage Amazon’s “back office” and automating various retail processes. Gur leads the Prime Air team, a project that has garnered enormous attention since its public announcement in November 2013. The goal of Prime Air is to get packages into customers’ hands in 30 minutes or less using unmanned aerial vehicles. Prior to Amazon, Gur spent 10 years at Microsoft on the Contextual Mobile Search team, the MSN/Virtual Earth Core Platform team, and the Unified Communications team. He is a voracious reader, an avid skier, and enjoys spending time with his family.