Autumn 2013 Colloquium

Organizer: Maya Cakmak

How should a robot perceive the world?
Ashutosh Saxena (Cornell University) 10/11/2013

Abstract: In order to perform assistive tasks, a robot needs a functional understanding of its environment. This includes learning how the objects in the environment could be used (i.e., their affordances). In this talk, I will discuss what types of object representations could be useful. One challenge is to model the objects' context with respect to each other and to the (hidden) humans in the scene. In order to model such data, I will present Infinite Latent CRFs (ILCRFs), which allow modeling the data with different plausible graph structures. Unlike CRFs, where the graph structure is fixed, ILCRFs learn distributions over possible graph structures in an unsupervised manner. We then show that our idea of modeling environments using object affordances and hidden humans is not only useful for robot manipulation tasks such as arranging a disorganized house, haptic manipulation, and unloading items from a dishwasher, but also significantly improves standard robotic tasks such as scene segmentation, 3D object detection, human activity detection and anticipation, and task and path planning.
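
To make the contrast with standard CRFs concrete, here is a rough schematic in generic notation (an illustration based on the abstract, not the paper's exact formulation): a CRF commits to one graph structure, while an ILCRF maintains a learned distribution over plausible structures and marginalizes over them.

```latex
% Schematic contrast, generic notation (not the paper's exact formulation):
\begin{align*}
\text{CRF (fixed graph } G\text{):} \quad
  & P(y \mid x) = \frac{1}{Z(x)} \exp\Big( \sum_{c \in \mathcal{C}_G} w^\top f_c(x, y_c) \Big) \\
\text{ILCRF:} \quad
  & P(y \mid x) = \sum_{G} P(G)\, P(y \mid x, G)
\end{align*}
% Here the distribution P(G) over plausible graph structures is itself
% learned without supervision, rather than being specified in advance.
```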

Biography: Ashutosh Saxena is an assistant professor in the computer science department at Cornell University. His research interests include machine learning and robotic perception, especially in the domain of personal robotics. He received his MS in 2006 and his Ph.D. in 2009 from Stanford University, and his B.Tech. in 2004 from the Indian Institute of Technology (IIT) Kanpur. He is a recipient of the National Talent Scholar award in India, a Google Faculty Award, an Alfred P. Sloan Fellowship, a Microsoft Faculty Fellowship, and an NSF CAREER award. In the past, Ashutosh developed Make3D (http://make3d.cs.cornell.edu), an algorithm that converts a single photograph into a 3D model; tens of thousands of users have used this technology to convert their pictures to 3D. He has also developed algorithms that enable robots (such as STAIR and POLAR, see http://pr.cs.cornell.edu) to perform household chores such as unloading items from a dishwasher and placing items in a fridge. His work has received a substantial amount of attention in the popular press, including the front page of The New York Times, BBC, ABC, New Scientist, Discovery Science, and Wired Magazine. He has won best paper awards at 3DRR, IEEE ACE, and RSS, and was named a co-chair of the IEEE technical committee on robot learning.

UW/MSR Machine Learning Day
10/18/2013

The Ultimate Machine: Strategies for understanding and improving movement disorders
Kat Steele (University of Washington, Mechanical Engineering) 10/25/2013

Abstract: The human body is the ultimate machine. With billions of connections, hundreds of actuators, and adaptive learning, the human body provides a unique and versatile platform for us to explore the world. However, the same complexity that empowers the human body also makes it extremely difficult to treat when things go awry. For individuals with movement disorders, such as cerebral palsy and stroke, the ability to move, manipulate, and interact with the world is impaired, which negatively impacts quality of life. In this talk, I will discuss how we have been using a combination of musculoskeletal simulation, medical imaging, and device design to understand how movement is altered after brain injury, evaluate the impacts of current treatments, and design new treatment strategies.

Biography: Kat Steele is an assistant professor in mechanical engineering at the University of Washington. Her research focuses on integrating dynamic simulation, motion analysis, medical imaging, and device design to improve mobility for individuals with movement disorders. She earned her BS in Engineering from the Colorado School of Mines and her MS and PhD in Mechanical Engineering from Stanford University. To integrate engineering and medicine, she has worked extensively in hospitals including the Cleveland Clinic, Denver Children’s Hospital, Lucile Packard Children’s Hospital, and, for the past year, the Rehabilitation Institute of Chicago. She has also helped to develop a free, open-source software platform for dynamic simulation of movement (http://opensim.stanford.edu).

Towards seamless human-robot hand-overs
Maya Cakmak (University of Washington, CSE) 11/01/2013

Abstract: Handing over objects to humans is a key functionality for robots that will assist or cooperate with humans. A robot could fetch objects for elderly people living in their homes or hand tools to a worker in a factory. While there are infinitely many ways that a robot can transfer an object to a human, including very simple ones, achieving this seamlessly, the way humans hand objects to one another, is a challenge. This talk overviews two research projects that aim at characterizing robot hand-over actions that result in seamless object transfer. The first focuses on the efficiency and fluency of the hand-over and explores the notion of contrast in the hand-over action. The second focuses on the ease with which the object can be taken when presented by the robot and uses learning from demonstration to find appropriate hand-over configurations. I present empirical results from human-robot interaction studies in both projects and conclude with recommendations for designing robot hand-over behaviors.

Biography: Maya Cakmak is an Assistant Professor in Computer Science and Engineering at the University of Washington. She received her Ph.D. in Robotics from the Georgia Institute of Technology in 2012 and was afterwards a post-doctoral research fellow at Willow Garage. Maya's research aims to develop functionalities and interfaces for personal robots that can be programmed by their end-users to assist with everyday tasks. Her work has been published at major robotics and AI conferences and journals and has been featured in numerous media outlets.

Autonomous Assembly In a Human World
Ross A. Knepper (MIT) 11/07/2013

Abstract: The IkeaBot system autonomously plans and executes furniture assembly by incorporating capabilities for geometric reasoning, symbolic planning, multi-robot coordination, manipulation, and custom, modular tooling. After giving an overview of the basic system, I highlight two recent developments in IkeaBot designed to meet the challenge of operating in complex human environments. The first development, RF-Compass, is a centimeter-scale localization system that is based on inexpensive, off-the-shelf RFID technology. I explain how we overcame several of the challenges of RF-based localization to achieve this unprecedented accuracy. The second development is a system for handling the failures that inevitably occur during autonomous planning and execution. Failures take many forms, including errors in perception, reasoning, and action, as well as fundamental limitations of the robot hardware. Autonomous systems must detect, diagnose, and remedy failures when they occur. In cases where the robot cannot remedy the problem autonomously, it may ask a human to assist. It generates natural language help requests targeted at humans who may lack situational awareness of the failure or even the task. By grounding candidate requests in salient features of the environment and modeling how they would be understood by a human, we select the request that minimizes ambiguity.
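
The abstract's closing idea, scoring candidate requests by modeling how a human would understand them, can be sketched in a few lines. The listener model and the candidate phrases below are invented for illustration; they are not the authors' system.

```python
# Illustrative sketch: pick the help request a modeled human listener is most
# likely to resolve to the intended object. The candidate phrases and the
# listener-model probabilities are hypothetical, invented for this example.
candidates = {
    "Please hand me the white leg.":          {"leg1": 0.45, "leg2": 0.45, "screw": 0.10},
    "Please hand me the leg near the table.": {"leg1": 0.80, "leg2": 0.15, "screw": 0.05},
}
intended = "leg1"  # the part the robot actually needs

# Select the phrase that minimizes ambiguity, i.e., maximizes the estimated
# probability that the human grounds the request to the intended referent.
best = max(candidates, key=lambda phrase: candidates[phrase][intended])
print(best)  # the spatially grounded request wins
```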

Biography: Ross A. Knepper is a Research Scientist in the Distributed Robotics Laboratory at the Massachusetts Institute of Technology. His research focuses on the theory and algorithms of automated assembly. Taking IKEA furniture assembly as a challenge problem, he is exploring motion and task planning, manipulation, custom tooling, coordination, localization, failure handling, and human-robot interaction. Ross received his M.S. and Ph.D. degrees in Robotics from Carnegie Mellon University in 2007 and 2011, respectively. Before his graduate education, Ross worked in industry at Compaq, where he designed high-performance algorithms for scalable multiprocessor systems, and in commercialization at the National Robotics Engineering Center, where he adapted robotics technologies for customers in government and industry. Ross has served as a volunteer for Interpretation at Death Valley National Park, California.

Beyond Conditionals: Structured Prediction for Interacting Processes
Brian Ziebart (University of Illinois at Chicago) 11/08/2013

Abstract: The principle of maximum entropy provides a powerful framework for estimating joint, conditional, and marginal probability distributions. Markov random fields and conditional random fields can be viewed as the maximum entropy approach in action. However, beyond joint and conditional distributions, there are many other important distributions with elements of interaction and feedback where its applicability has not been established. In this talk, I will present the principle of maximum causal entropy—an approach based on directed information theory for estimating an unknown process based on its interactions with a known process. I will discuss applications of this approach to assistive technologies and human-robot interaction.
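
For readers unfamiliar with the framework, here is a condensed sketch of the objective, following the general form used in the maximum causal entropy literature (notation simplified):

```latex
\begin{align*}
\max_{P}\ \ & H(A^{1:T} \,\|\, S^{1:T})
  = \mathbb{E}\big[ -\log P(A^{1:T} \,\|\, S^{1:T}) \big], \\
\text{where}\ \ & P(A^{1:T} \,\|\, S^{1:T})
  = \prod_{t=1}^{T} P(a_t \mid s_{1:t},\, a_{1:t-1}), \\
\text{s.t.}\ \ & \mathbb{E}_{P}\Big[ \textstyle\sum_t f(s_t, a_t) \Big]
  = \tilde{\mathbb{E}}\Big[ \textstyle\sum_t f(s_t, a_t) \Big].
\end{align*}
% The causally conditioned probability lets each action depend only on past
% states and actions, capturing the interaction and feedback the abstract
% describes; the constraint matches observed feature expectations.
```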

Biography: Brian Ziebart is an Assistant Professor in the Department of Computer Science at the University of Illinois at Chicago. He received his PhD in Machine Learning from Carnegie Mellon University in 2010, where he was also a postdoctoral fellow. His research has been recognized with best paper awards, runners-up, and nominations at ICML (2010, 2011), ECCV (2012), and IUI (2012).

Considerations for Designing Assistive Robotics to Promote Aging-in-Place
Jenay Beer (University of South Carolina) 11/08/2013

Abstract: Many older adults wish to age-in-place, that is, to remain in their own homes as they age. However, many challenges threaten an older adult’s ability to age-in-place; in fact, even healthy, independently living older adults experience challenges in maintaining their homes. Technology, such as home assistive robots, can help compensate for these challenges. However, for home robots to be adopted by older adult users, they must be designed to meet older adults’ needs for assistance, and the older users must be amenable to robot assistance for those needs. I will discuss a range of projects (both quantitative and qualitative in nature) assessing older adults’ social interpretation, attitudes, and acceptance of assistive robotics. Study findings suggest that older adults’ assistance preferences differed across tasks, and the data offer insights as to why older adults hold such preferences. The talk will detail multidisciplinary approaches to studying human-robot interaction (HRI) and how findings from user studies can inform preliminary design recommendations for future assistive robots that support aging-in-place.

Biography: Jenay Beer is an Engineering Psychologist and an Assistant Professor with a joint appointment in the Department of Computer Science and Engineering and the College of Social Work at the University of South Carolina. She is the director of the Assistive Robotics and Technology Lab and a member of the USC SeniorSmart initiative. Her research intersects the fields of Human-Robot Interaction (HRI) and Psychology. Specifically, she studies home-based robots designed to help older adults maintain their independence and age-in-place. She has studied a variety of robotic systems and topics, such as emotion expression of agents, user acceptance of robots, healthcare robotics, and the role of robot autonomy in HRI. Jenay has published and presented at a number of major human factors- and HRI-related conferences, including Human-Robot Interaction and Human Factors and Ergonomics Society. Her work has been featured at TEDxGeorgiaTech 2012 and by CNET and WABE NPR News. She received the American Psychological Association (APA) Early Graduate Student Researcher Award in 2010 and was selected as a Georgia Tech Foley Scholar Finalist two years in a row, in 2011 and 2012. Jenay received a B.A. degree in Psychology from the University of Dayton, Ohio, in 2006. She earned her M.S. and Ph.D. in Engineering Psychology from the Georgia Institute of Technology in 2010 and 2013, respectively.

Navigation for telepresence robots and some thoughts on robot learning
Dinei Florencio (Microsoft Research) 11/15/2013

Abstract: This informal talk will be divided into three parts, corresponding to three papers (IROS’12, ICRA’13, and IROS’13) that cover work on telepresence robots and learning aspects of HRI. We first present a method for a mobile robot to autonomously follow a person while the robot and the human interact during the following. Contrary to traditional motion planning, instead of determining goal points close to the person, we introduce a task-dependent goal function which provides a map of desirable areas for the robot to be in with respect to the person. We implemented our approach on a telepresence robot and conducted a controlled user study to evaluate the experiences of users on the remote end of the telepresence robot. In the second part, we investigate “semi-autonomous driving” for a telepresence robot. Traditional aided driving is mostly based on “collision avoidance,” i.e., it limits or avoids movements that would lead to a collision. Instead, we borrow concepts from collaborative driving: we use the input from the operator as general guidance toward the target direction and couple that with a variable degree of autonomy for the robot, depending on the task and the environment. Finally, in the third part we investigate the problem of making a robot learn how to approach a person in order to increase the chance of a successful engagement. We propose the use of Gaussian Process Regression (GPR), combined with ideas from reinforcement learning, to make sure the space is properly and continuously explored. In the proposed example scenario, the robot uses this to predict the best decisions with respect to its position in the environment and its approach distance, each according to the time of day.
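
The abstract does not spell out how the GPR predictions and the exploration pressure are combined; the sketch below uses an upper-confidence-bound (UCB) style rule, one common way to trade off exploiting predicted engagement success against exploring uncertain approach parameters. The two-dimensional (position, approach distance) candidate grid and all numbers are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: a UCB-style exploration rule on top of Gaussian
# Process regression, one plausible reading of "GPR + reinforcement-learning
# ideas to keep exploring" from the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Candidate approach parameters: (position along the room in meters,
# approach distance in meters). In the paper's scenario these would also
# be conditioned on the time of day.
candidates = np.array([[x, d] for x in np.linspace(0, 10, 21)
                              for d in np.linspace(0.5, 3.0, 11)])

# Past trials: parameters tried and whether engagement succeeded (1/0).
X_seen = np.array([[2.0, 1.0], [8.0, 2.5], [5.0, 1.5]])
y_seen = np.array([1.0, 0.0, 1.0])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                              alpha=0.1, normalize_y=True)
gp.fit(X_seen, y_seen)

# UCB acquisition: prefer high predicted success, plus a bonus for
# predictive uncertainty so the space keeps being explored.
mu, sigma = gp.predict(candidates, return_std=True)
kappa = 1.0
best = candidates[np.argmax(mu + kappa * sigma)]
print("next approach to try (position, distance):", best)
```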

Biography: Dinei Florêncio received the B.S. and M.S. from the University of Brasília (Brazil) and the Ph.D. from Georgia Tech, all in Electrical Engineering. He has been a researcher with Microsoft Research since 1999, currently with the Multimedia, Interaction, and Communication group. From 1996 to 1999, he was a member of the research staff at the David Sarnoff Research Center. He was also a co-op student with the AT&T Human Interface Lab (now part of NCR) from 1994 to 1996, and a summer intern at the (now defunct) Interval Research in 1994. Dr. Florêncio’s current research focus includes signal processing and computer security. In the area of signal processing, he works on audio and video processing, with a particular focus on real-time communication. He has numerous contributions in speech enhancement, microphone arrays, image and video coding, spectral analysis, and non-linear algorithms. In the area of computer security, his interest focuses on cybercrime and problems that can be assisted by algorithmic research; topics include phishing prevention, user authentication, sender authentication, human interactive proofs, and the economics of cybercrime. Dr. Florêncio has published over 100 refereed papers and holds 50 granted US patents (with another 13 currently pending). His papers received awards at MMSP’09, ICME’10, SOUPS’10, and MMSP’12. He also received the 1998 Sarnoff Achievement Award, the 1996 SAIC best paper award, and an NCR inventor award. His research has enhanced the lives of millions of people through high-impact technology transfers to many Microsoft products, including Live Messenger, Exchange Server, RoundTable, and the MSN toolbar. Dr. Florêncio was general co-chair of CBSP’08, MMSP’09, Hot3D’10, and WIFS’11, and technical chair of WIFS’10, ICME’11, and MMSP’13. He is a senior member of the IEEE, an elected member of the IEEE Information Forensics and Security Technical Committee and of the IEEE Multimedia and Signal Processing Technical Committee (for which he will serve as chair for 2014-15), a member of the IEEE ICME steering committee, and an associate editor of the IEEE Transactions on Information Forensics and Security.

Semantic Knowledge in Mobile Robotics: Perception, Reasoning, Communication and Actions
Andrzej Pronobis (University of Washington, CSE) 11/22/2013

Abstract: As robotic technologies mature, we can imagine an increasing number of applications in which robots would be useful in human environments and in human-robot collaborative scenarios. In fact, many believe that it is in the area of service and domestic robotics that we will see the largest growth within the next few years. A fundamental capability for such systems is to understand the dynamic, complex, and unstructured human environments in which they are to operate. Such understanding is not only crucial for solving typical human tasks efficiently; more importantly, it can support communication and cooperation with untrained human users. In this talk, I will discuss how such understanding can be achieved by combining uncertain multi-modal robotic perception with a probabilistic relational representation of human semantic concepts transferred from Internet databases. I will present a semantic mapping algorithm that combines information such as object observations, the shape and appearance of rooms, and human input with conceptual common-sense knowledge, and show its ability to infer semantic room categories, predict the existence of objects, and reason about unexplored space. Furthermore, I will show that exploiting uncertain semantics can lead to more efficient strategies for solving real-world problems in large-scale, realistic environments previously unknown to the robot. Finally, I will highlight our current work on integrating semantic spatial understanding and reasoning with language- and gesture-based human-robot interaction.
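
As a toy illustration of one ingredient of such a system, fusing object observations with common-sense object/room co-occurrence knowledge, here is a naive-Bayes-style room-category update. It is a deliberately simplified stand-in for the probabilistic relational model described in the talk, and all probabilities are invented for the example.

```python
# Toy illustration: combine object observations with common-sense
# object/room co-occurrence knowledge to infer a room's category.
# A naive-Bayes stand-in for the richer relational model in the talk;
# every number here is made up for the example.
rooms = ["kitchen", "office", "bathroom"]
prior = {r: 1 / 3 for r in rooms}

# P(object observed | room category), e.g. mined from Internet databases.
p_obj_given_room = {
    "mug":      {"kitchen": 0.70, "office": 0.50, "bathroom": 0.05},
    "keyboard": {"kitchen": 0.05, "office": 0.80, "bathroom": 0.01},
}

def posterior(observed_objects):
    # Multiply the prior by each object's likelihood, then normalize.
    scores = dict(prior)
    for obj in observed_objects:
        for r in rooms:
            scores[r] *= p_obj_given_room[obj][r]
    z = sum(scores.values())
    return {r: s / z for r, s in scores.items()}

print(posterior(["mug", "keyboard"]))  # "office" becomes most probable
```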

Biography: Andrzej Pronobis is a research associate in the Robotics and State Estimation Lab at the University of Washington, working with Dieter Fox. His research is focused on the development of perception and spatial understanding mechanisms for mobile robots and their interplay with components responsible for interaction with the world and human users. Before joining UW, he was the Head of Research at OculusAI Technologies AB, a Swedish company developing mobile, cloud-based computer vision solutions. Between 2006 and 2012, he was involved in three large EU robotics research initiatives: CogX, CoSy, and DIRAC. He obtained his PhD in Computer Vision and Robotics from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2011 and his M.Sc. in Computer Science from the Silesian University of Technology, Gliwice, Poland, in 2005. He is the author of 40 publications in the areas of robotics, computer vision, and machine learning, and an organizer of several international events related to robotics and computer vision research, including workshops and competitions.

It's Time for Service Robots
Steve Cousins (Savioke, Inc. & Willow Garage, Inc.) 12/06/2013

Abstract: Willow Garage laid the foundation for a new industry in personal robotics by creating the Robot Operating System (ROS) and spinning off a number of companies to seed the new industry. This new industry has the potential to change the world the way the PC industry did 30 years ago. Very capable personal robots are becoming available at ever-decreasing price points, along with a steady stream of new low-cost sensors and actuators. What has changed in the 30 years since the IBM PC was introduced is our understanding of the value of open source software, the power of crowd-sourcing, and our ability to personalize to satisfy the "long tail" of demand. Like the PC, personal robots will probably make their debut in businesses and only later find their way into homes. The first ones will likely show up in the service industry.

Biography: Steve Cousins, CEO of Savioke, was formerly the President and CEO of Willow Garage. During his tenure, Willow Garage created the PR2 robot, the open-source TurtleBot, and the Robot Operating System (ROS), and spun off eight organizations: Suitable Technologies (maker of the Beam remote presence system); Industrial Perception, Inc.; Redwood Robotics; HiDOF (ROS and robotics consulting); Unbounded Robotics; the Open Source Robotics Foundation; the OpenCV Foundation; and the Open Perception Foundation. Before joining Willow Garage, Steve was a senior manager at IBM's Almaden Research Center and a member of the senior staff at Xerox PARC. Steve holds a Ph.D. from Stanford University, and BS and MS degrees in computer science from Washington University.