The Robotics Colloquium features talks by invited and local researchers on all aspects of robotics, including control, perception, machine learning, mechanical design, and interaction. The colloquium is held on Fridays from 1:30 to 2:30 pm (virtually over UW Zoom; UW NetID required). Special seminars outside this schedule are indicated below.
If you would like to give a talk in an upcoming Robotics Colloquium, please contact Karthik Desingh. If you would like to receive regular email announcements and reminders about robotics colloquium speakers, please sign up for the Robotics@UW mailing list.
Autumn 2020 Organizers: Karthik Desingh, Dieter Fox, Maya Cakmak, Siddhartha S. Srinivasa
Abstract: Although general-purpose robotic manipulators are becoming more capable at manipulating various objects, their ability to manipulate millimeter-scale objects is usually limited. On the other hand, ultrasonic levitation devices have been shown to levitate a large range of small objects, from polystyrene balls to living organisms. By controlling the acoustic force fields, ultrasonic levitation devices can compensate for robotic manipulator positioning uncertainty and control the grasping force exerted on the target object. The material-agnostic nature of acoustic levitation devices and their ability to dexterously manipulate millimeter-scale objects make them appealing as a grasping mode for general-purpose robots. In this work, we present an ultrasonic, contactless manipulation device that can be attached to or picked up by any general-purpose robotic arm, enabling millimeter-scale manipulation with little to no modification to the robot itself. This device is capable of performing the first phase-controlled picking action on acoustically reflective surfaces. With the manipulator placed around the target object, it can grasp objects smaller than the robot's positioning uncertainty, trap the object to resist air currents during robot movement, and dexterously hold a small, fragile object, like a flower bud. Because the ultrasound-based gripper is contactless, a camera positioned to look into the cylinder can inspect the object without occlusion, facilitating accurate visual feature extraction.
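The phase control at the heart of such devices can be illustrated with a toy phased-array model. This is a minimal sketch, not the speaker's device: the ring geometry, the 40 kHz drive, and the point-source field model are all illustrative assumptions. Each transducer's phase is chosen to cancel its travel delay so that the emitted waves arrive in phase at the desired trap point:

```python
import numpy as np

# Hypothetical 16-transducer ring, 3 cm radius, driven at 40 kHz.
c, f = 343.0, 40e3                       # speed of sound (m/s), drive frequency (Hz)
k = 2*np.pi*f/c                          # wavenumber
angles = np.linspace(0, 2*np.pi, 16, endpoint=False)
ring = np.stack([0.03*np.cos(angles), 0.03*np.sin(angles), np.zeros(16)], axis=1)
focus = np.array([0.0, 0.0, 0.02])       # desired trap point, 2 cm above the ring

dists = np.linalg.norm(ring - focus, axis=1)
phases = (-k*dists) % (2*np.pi)          # focusing phase law: cancel each travel delay

def pressure(pt):
    # Magnitude of the summed field, modeling each transducer as a point source.
    d = np.linalg.norm(ring - pt, axis=1)
    return np.abs(np.sum(np.exp(1j*(k*d + phases))/d))

# The field adds coherently at the focus and partially cancels elsewhere.
print(pressure(focus) > pressure(focus + np.array([0.01, 0.0, 0.0])))
```

Steering the trap then amounts to recomputing the phase law for a new focal point, which is how phase control can compensate for positioning error of the carrying arm.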
Biography: Jared is an ECE graduate student in Joshua Smith's Sensor Systems Lab, developing acoustic levitation devices that can control and manipulate objects for robotics, medical devices, and scientific tools. He received his B.S. in Electrical Engineering from the University of Washington in 2018. Jared is interested in acoustic levitation devices that use high-frequency sound to impart force on objects, allowing these devices to move objects and overcome the force of gravity. The benefits of this manipulation method include contactless grasping, compensation for mechanical manipulator error, grasping without object occlusion, and safe manipulation of fragile objects and living organisms. These attributes lend themselves to applications such as robotics, medical devices, scientific tools, and advanced manufacturing systems.
Abstract: Object shape provides a strong prior over dynamics and behavior, enabling simulation of object dynamics when coupled with physics simulators. However, accurately inferring object shape from video in realistic, cluttered scenes remains an open problem. Existing 3D reconstruction algorithms often produce shapes that, while visually similar, yield dramatically different dynamics. Further, there is no general way to make inferences about object shape based on observed dynamics. In this paper, we propose a method for making object shape differentiable with respect to dynamics. Our method allows for training reconstruction neural networks to optimize how closely the simulated dynamics of reconstructions match observed dynamics, making them produce reconstructions more useful for simulation and planning. In addition, our method allows for optimizing estimated object shape to match observed dynamics, allowing agents to take advantage of highly informative motion cues.
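The core idea, optimizing shape parameters so that simulated dynamics match observed dynamics, can be sketched with a toy differentiable simulator. Everything here is a hypothetical stand-in for the paper's pipeline: the "shape" is a single inertia ratio beta = I/(m r^2) of an object rolling down an incline, and the analytic derivative plays the role of differentiating through the simulator:

```python
import numpy as np

G, THETA = 9.81, np.pi/6   # gravity and incline angle (hypothetical setup)

def simulate(beta, t):
    # Rolling without slipping: acceleration a = g*sin(theta)/(1 + beta),
    # where beta = I/(m r^2) is a scalar "shape" descriptor.
    return 0.5 * G*np.sin(THETA)/(1.0 + beta) * t**2

def loss_and_grad(beta, t, x_obs):
    # Squared error between simulated and observed positions, with the
    # analytic derivative of the rollout w.r.t. the shape parameter.
    resid = simulate(beta, t) - x_obs
    dx_dbeta = -0.5 * G*np.sin(THETA) * t**2 / (1.0 + beta)**2
    return np.mean(resid**2), np.mean(2.0*resid*dx_dbeta)

t = np.linspace(0.0, 1.0, 50)
x_obs = simulate(0.4, t)       # "observed" rollout of an object with beta = 0.4
beta = 0.0                     # initial shape estimate
for _ in range(200):
    _, g = loss_and_grad(beta, t, x_obs)
    beta -= 0.5 * g            # gradient descent on the dynamics-matching loss
print(round(beta, 2))          # recovers beta ~ 0.4 from motion alone
```

The same loop, with a neural reconstruction network in place of the scalar and a differentiable physics engine in place of the closed-form rollout, is the shape-from-dynamics idea the abstract describes.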
Biography: William is a Ph.D. student in Computer Science at the University of Washington. He is advised by Pedro Domingos and Sidd Srinivasa and supported by an NDSEG Fellowship. His research focuses on developing human priors for reinforcement learning, with projects in object-oriented reinforcement learning.
Abstract: Faculty will introduce their lab members, give a quick overview of their research, and talk about working remotely.
Abstract: Assistive devices, such as orthoses, prostheses, and exoskeletons are commonly used to help individuals with motor impairments – such as children with cerebral palsy or stroke survivors – improve gait and walk more efficiently. However, predicting how specific individuals will adapt their gait pattern to novel device designs remains challenging. Time-intensive experimental device optimization is the most effective approach to tuning assistive devices for an individual. Modeling approaches ubiquitous in biomechanics often rely on assumptions about an individual’s physiology, which are often invalid for individuals with motor impairments, limiting the accuracy and utility of model predictions. Drawing inspiration from methods proposed to study dynamical systems and control bipedal robots expands our ability to quantitatively customize assistive devices in silico, without requiring prior knowledge of an individual’s physiology or motor control. In this talk, I will discuss the data-driven and data-plus-physics-driven approaches that our lab has used to model and predict gait with ankle exoskeletons. First, I will show how we used phase-varying models to predict responses to ankle exoskeleton torque without knowledge of an individual’s physiology. I will then discuss our use of template models of locomotion and sparse regression to identify statistically supported and interpretable reduced-order dynamics describing how an individual regulates locomotion. Data-driven modeling for gait analysis may generalize to other types of interventions, enabling individualized quantitative rehabilitation protocols to improve treatment efficacy and the individual’s mobility.
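The sparse-regression step the talk mentions can be illustrated with SINDy-style system identification. Below is a minimal sketch on a toy damped oscillator, not gait data: sequentially thresholded least squares selects, from a library of candidate terms, the few that explain the observed derivatives (the library, threshold, and dynamics are all illustrative choices):

```python
import numpy as np

# Toy "gait-like" oscillation: damped oscillator x'' = -x - 0.1 x',
# written as a first-order system z = [x, v] and integrated with Euler steps.
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
z = np.zeros((t.size, 2)); z[0] = [1.0, 0.0]
for i in range(t.size - 1):
    x, v = z[i]
    z[i+1] = [x + dt*v, v + dt*(-x - 0.1*v)]

dz = np.gradient(z, dt, axis=0)                  # numerical state derivatives
lib = np.column_stack([np.ones_like(t), z[:, 0], z[:, 1],
                       z[:, 0]**2, z[:, 0]*z[:, 1], z[:, 1]**2])  # candidates

# Sequentially thresholded least squares: fit, zero small terms, refit.
Xi = np.linalg.lstsq(lib, dz, rcond=None)[0]
for _ in range(10):
    Xi[np.abs(Xi) < 0.05] = 0.0
    for j in range(2):
        big = np.abs(Xi[:, j]) >= 0.05
        if big.any():
            Xi[big, j] = np.linalg.lstsq(lib[:, big], dz[:, j], rcond=None)[0]
print(np.round(Xi.T, 2))
```

The recovered coefficient matrix is sparse and interpretable: each nonzero entry names one term of the reduced-order dynamics (here, dx/dt = v and dv/dt = -x - 0.1v), which is the kind of statistically supported, interpretable model the abstract describes.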
Biography: Michael is a PhD candidate in the University of Washington Mechanical Engineering Department, working under Dr. Kat Steele. Michael’s research uses data-driven methods to inform ankle exoskeleton design by modeling and predicting subject-specific changes in gait in response to varying ankle exoskeleton mechanical properties. In addition to his primary dissertation research, Michael has mentored undergraduate students on exoskeleton-focused research projects ranging from muscle synergy analysis to predictive musculoskeletal modeling. Michael completed his MS in mechanical engineering at the University of Washington and his BS in mechanical engineering at North Carolina State University. He is a recipient of an NSF Graduate Research Fellowship and a Predoctoral Training Fellowship from the Institute for Translational Health Sciences.
Abstract: During robot teleoperation where continuous guidance of robot movement is needed, such as during robot-assisted surgery or robot navigation of complex terrain, manual control of such robots using joysticks, mice, and handles is the norm. Understanding if and when emerging technologies like muscle or brain interfaces provide advantages over conventional manual interfaces requires modeling the human as part of the human-robot system. My research demonstrates the potential of muscle (electromyography, EMG) interfaces as an alternative input technique for humans to continuously control robots. I demonstrate how human input can be quantified as a response to error (feedback control) and future position (feedforward control) and how this method can be used to quantify human performance during a one-dimensional trajectory-tracking task. The results suggest that people controlling mechanical systems and people with limited movement from stroke may benefit from using muscle interfaces as an alternative input method to manual interfaces, and highlight the need for exploration of novel input techniques outside of traditional manual interfaces.
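Decomposing human input into feedforward (response to the reference) and feedback (response to error) components can be sketched as a regression problem. The gains, the noise level, and the one-step error delay below are hypothetical, chosen only to illustrate the identification idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Reference trajectory for a 1-D tracking task (sum of two sinusoids).
r = np.sin(0.05*np.arange(n)) + 0.3*np.sin(0.13*np.arange(n))

# Simulated "operator": responds to the current reference (feedforward)
# and to the previously observed tracking error (feedback), plus noise.
F_true, B_true = 0.7, 0.4
y = np.zeros(n)
for k in range(1, n):
    e = r[k-1] - y[k-1]
    y[k] = F_true*r[k] + B_true*e + 0.01*rng.standard_normal()

# Regress the response on the reference and on the past error to
# recover the two components of the operator's input.
e_hist = r[:-1] - y[:-1]
A = np.column_stack([r[1:], e_hist])
F_hat, B_hat = np.linalg.lstsq(A, y[1:], rcond=None)[0]
print(round(F_hat, 2), round(B_hat, 2))
```

On real tracking data the same decomposition is typically done with frequency-domain transfer functions rather than scalar gains, but the principle is the one above: separate how much of the command anticipates the reference from how much corrects the observed error.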
Biography: Momona Yamagami is a PhD candidate in the ECE department working with Profs. Sam Burden and Kat Steele on modeling and enhancing human-robot interactions using novel input techniques. Her research focuses on how continuous interactions can be improved for people with and without limited movement.