Organizers: Selest Nashef, Josh Smith, Byron Boots, Maya Cakmak, Dieter Fox, Sam Burden, Siddhartha S. Srinivasa
Abstract: Advances in machine learning have fueled progress towards deploying real-world robots, from assembly lines to self-driving cars. However, if robots are to truly work alongside humans in the wild, they need to solve fundamental challenges that go beyond collecting large-scale datasets. Robots must continually improve and learn online to adapt to individual human preferences. How do we design robots that both understand and learn from natural human interactions?
Biography: Sanjiban Choudhury is a Research Scientist at Aurora Innovation and soon-to-be Assistant Professor at Cornell University. His research goal is to enable robots to work seamlessly alongside human partners in the wild. To this end, his work focuses on imitation learning, decision making, and human-robot interaction. He obtained his Ph.D. in Robotics from Carnegie Mellon University and was a Postdoctoral Fellow at the University of Washington. His research received a best paper award at ICAPS 2019, was a best paper finalist at IJRR 2018 and AHS 2014, and won the 2018 Howard Hughes Award. He is a Siebel Scholar, class of 2013.
Abstract: Traditionally, inverse dynamics refers to the problem of reconstructing the forces in a dynamic system from its kinematic motion, and it has wide applications in robotics, biomechanics, and computer graphics. This talk considers a broader definition of inverse dynamics that infers various computational design parameters of rigid-body, deformable-body, and fluidic dynamic systems. To solve this challenging problem, we develop a series of computational tools that unleash the full power of analytical gradients from a physics simulator in many non-traditional ways. First, we demonstrate the use of gradients in exploring the shape and controller design space of rigid and soft robots. Next, we discuss transferring these computational designs to hardware and show the power of gradients in constructing digital twins of two such real-world robots: a rigid-body quadrotor and a deformable-body underwater robot. We end this talk by envisioning future opportunities for physics simulation gradients in computational fabrication, robotics, and machine learning.
Biography: Tao Du is a Postdoctoral Associate at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), working with Professor Wojciech Matusik and Professor Daniela Rus. His research aims to combine physics simulation, machine learning, and numerical optimization techniques to solve real-world inverse dynamics problems. His representative works include building differentiable simulation platforms for graphics and robotics research, developing computational design pipelines for real-world robots, and understanding the simulation-to-reality gap of dynamic systems. His work has been published in top-tier graphics, learning, and robotics journals and conferences and has been featured by major technical media outlets. Before continuing at MIT as a Postdoctoral Associate, Tao Du obtained his Ph.D. in Computer Science from MIT in 2021 and his Master's in Computer Science from Stanford in 2015.
Abstract: Robots in unstructured environments manipulate objects slowly and intermittently, relying on bursts of computation for planning. This is in stark contrast to humans, who routinely use fast dynamic motions to manipulate and move objects or vault power cords over chairs when vacuuming. Dynamic motions can speed task completion, manipulate objects out of reach, and increase reliability, but they require: (1) integrating grasp planning, motion planning, and time-parameterization, (2) lifting quasi-static assumptions, and (3) intermittent access to powerful computing. I will describe how integrating grasp analysis into motion planning can speed up motions, how integrating deep learning can speed up computation, and how integrating inertial and learned constraints can lift quasi-static assumptions to allow high-speed manipulation. I will also describe how cloud computing can provide on-demand access to immense computing resources to speed up motion planning, and a new cloud-robotics framework that makes this easy.
Biography: Jeffrey Ichnowski is a postdoctoral researcher in the RISE Lab and AUTOLAB at the University of California, Berkeley. He researches algorithms and systems for high-speed motion, task, and grasp planning for robots, using cloud-based high-performance computing, optimization, and deep learning. Jeff has a Ph.D. in computational robotics from the University of North Carolina at Chapel Hill. Before returning to academia, he founded startups and was an engineering director and the principal architect at SuccessFactors, one of the world’s largest cloud-based software-as-a-service companies.
Abstract: Most cameras today capture images without considering scene content. In contrast, animal eyes use fast mechanical movements to control how the scene is imaged in detail by the fovea, where visual acuity is highest. The prevalence and wide variety of active vision in biological imaging make clear that it is an effective visual design strategy. In this talk, I cover our recent work on creating both new camera designs and novel vision algorithms to enable adaptive and selective active vision and imaging inside cameras and sensors.
Biography: Sanjeev Koppal is an Associate Professor at the University of Florida’s Electrical and Computer Engineering Department. He also holds a UF Term Professor Award for 2021-24. Sanjeev is the Director of the FOCUS Lab at UF. Prior to joining UF, he was a researcher at the Texas Instruments Imaging R&D lab. Sanjeev obtained his Masters and Ph.D. degrees from the Robotics Institute at Carnegie Mellon University. After CMU, he was a postdoctoral research associate in the School of Engineering and Applied Sciences at Harvard University. He received his B.S. degree from the University of Southern California in 2003 as a Trustee Scholar. He is a co-author on best student paper awards for ECCV 2016 and NEMS 2018, and work from his FOCUS lab was a CVPR 2019 best-paper finalist. Sanjeev won an NSF CAREER award in 2020 and is an IEEE Senior Member. His interests span computer vision, computational photography and optics, novel cameras and sensors, 3D reconstruction, physics-based vision, and active illumination.
Abstract: Robots are pretty great -- they can make some hard tasks easy, some dangerous tasks safe, or some unthinkable tasks possible. And they're just plain fun to boot. But how many robots have you interacted with recently? And where do you think that puts you compared to the rest of the world's people? In contrast to computation, automating physical interactions continues to be limited in scope and breadth. I'd like to change that. In particular, I'd like to do so in a way that's accessible to everyone, everywhere. In our lab, we work to lower barriers to robotics design, creation, and operation through material and mechanism design, computational tools, and mathematical analysis. We hope that with our efforts, everyone will soon be able to enjoy the benefits of robotics to work, to learn, and to play.
Biography: Prof. Ankur Mehta is an assistant professor of Electrical and Computer Engineering at UCLA, and directs the Laboratory for Embedded Machines and Ubiquitous Robots (LEMUR). Pushing towards his visions of a future filled with robots, his research interests involve printable robotics, rapid design and fabrication, control systems, and multi-agent networks. He has received the NSF CAREER award and a Samueli fellowship, and has received best paper awards in the IEEE Robotics & Automation Magazine and the International Conference on Intelligent Robots and Systems (IROS).
Abstract: The pandemic exacerbated inequities faced by people with disabilities and healthcare workers — both are at high risk of adverse physical and mental health outcomes. Robots alone are not going to fix these major societal problems; however, our work explores how we can design technology to lessen the burden of systemic ableism and healthcare system stress. I will discuss several of our recent projects in acute care and community health contexts. In acute care, we are building hospital-based robots that support the clinical workforce with item delivery, telemedicine, and decision support. In community health, we are creating interactive and adaptive systems that aim to extend the reach of cognitive neurorehabilitative therapies, provide respite to overburdened caregivers, and explore how technology might serve as a means for mediating positive interactions during hardship. We focus on building robots that can adaptively team with and longitudinally learn from people, and personalize and tailor their behavior.
Biography: Dr. Laurel Riek is a professor in Computer Science and Engineering at the University of California, San Diego, with a joint appointment in the Department of Emergency Medicine, and is affiliated with the Contextual Robotics Institute and Design Lab. Dr. Riek directs the Healthcare Robotics Lab and leads research in human-robot teaming and health informatics, with a focus on autonomous robots that work proximately with people. Riek's current research interests include long-term learning, robot perception, and personalization, with applications in acute care, neurorehabilitation, and home health. Dr. Riek received a Ph.D. in Computer Science from the University of Cambridge and a B.S. in Logic and Computation from Carnegie Mellon. Riek served as a Senior Artificial Intelligence Engineer and Roboticist at The MITRE Corporation from 2000 to 2008, working on learning and vision systems for robots. Dr. Riek has received the NSF CAREER Award, AFOSR Young Investigator Award, Qualcomm Research Award, and several best paper awards. Dr. Riek is the HRI 2023 General Co-Chair, served as the Program Co-Chair for HRI 2020, and serves on the editorial boards of T-RO and THRI.
Abstract: The symposium will be an all-day affair featuring invited short talks, posters, and social events, aiming to make up for two-plus years of missed in-person networking as a result of the pandemic. Students, postdocs, industry researchers, and faculty are all encouraged to participate. The event is free.
Abstract: Enabling robots to perform multi-stage forceful manipulation tasks, such as twisting a nut on a bolt or pulling a nail with a hammer claw, requires reasoning over interlocking force and motion constraints across discrete and continuous choices. I categorize forceful manipulation as tasks where exerting substantial forces is necessary to complete the task. While all actions with contact involve forces, I focus on tasks where generating and transmitting forces is a limiting factor that must be reasoned over and planned for. I'll first formalize constraints for forceful manipulation tasks where the goal is to exert force, often through a tool, on an object or the environment. These constraints define a task and motion planning problem that we solve to search for both the sequence of discrete actions, or strategy, and the continuous parameters of those actions.
Biography: Rachel Holladay is an EECS PhD Student at MIT, where she is a member of the LIS (Learning and Intelligent Systems) Group and the MCube Lab (Manipulation and Mechanisms at MIT). She is interested in developing algorithms for dexterous and composable robotic manipulation and planning. In particular, her doctoral research focuses on enabling robots to complete multi-step manipulation tasks that require reasoning over both force and motion constraints. She received her Bachelor's degree in Computer Science and Robotics from Carnegie Mellon.