Robotics Research Group
Various Presenters (Allen School)
Colloquium
Thursday, December 1, 2022, 3:30 pm
Abstract
Presenters:
Abhishek Gupta;
Taylor Kessler Faulkner;
Zoey Chen;
Adam Fishman;
Boling Yang;
Vinitha Ranganeni
Speaker: Abhishek Gupta
Title: How to Train Your Robot - An Overview of Research in the WEIRD Lab
The Washington Embodied Intelligence and Robotics Lab (WEIRD) aims to get robots to learn behaviors via reinforcement learning directly in real-world environments, enabling continuous improvement with experience. This work touches on reward specification, pretraining for robotics, and continual learning for sequential decision making; ongoing work addresses human-in-the-loop learning, policy generalization, and robustness to distribution shift in environments with and around people. I will be presenting some of the research that has been and is being done by members of the lab.
Speaker Bio: Abhishek Gupta is an assistant professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. He is interested in research directions that enable performing reinforcement learning directly in the real world: reward supervision in reinforcement learning, large-scale real-world data collection, learning from demonstrations, and multi-task reinforcement learning. He received his PhD at UC Berkeley, working with Pieter Abbeel and Sergey Levine on algorithms that leverage reinforcement learning to solve robotics problems. Subsequently, he spent a year as a postdoctoral fellow at MIT, working with Russ Tedrake and Pulkit Agrawal. He has also spent time at Google Brain. He is a recipient of the NDSEG and NSF graduate research fellowships, and several of his works have been presented as spotlight presentations at top-tier machine learning and robotics conferences. A more detailed description can be found at https://homes.cs.washington.edu/~abhgupta
====
Speaker: Taylor Kessler Faulkner
Title: The Personal Robotics Lab: Robots that perceive, adapt, and assist
The Personal Robotics Lab (PRL), led by Prof. Siddhartha Srinivasa, aims to get robots performing complex tasks and helping people in the wild. Our mission is to develop the fundamental building blocks of perception, manipulation, learning, and human-robot interaction to enable robots to perform complex physical manipulation tasks under clutter and uncertainty with and around people. I will be presenting some of the exciting research currently being done by the students and postdocs in our lab on autonomous vehicles, complex manipulation tasks, and assistive robotics.
Speaker Bio: Taylor Kessler Faulkner is a postdoctoral scholar and UW Data Science Postdoctoral Fellow in Siddhartha Srinivasa's Personal Robotics Lab at the University of Washington. She graduated from UT Austin in August 2022 with a PhD in Computer Science, where she worked with Prof. Andrea Thomaz in the Socially Intelligent Machines Lab. Taylor's research enables robots to learn from imperfect human teachers using interactive reinforcement learning. People may not fully understand how robots should complete a task, or they may not have long periods of time available to advise learning robots. Her goal is to create algorithms that allow robots to learn from these potentially inaccurate or inattentive teachers.
====
Speaker: Zoey Chen
Title: Learning Robust Real-World Dexterous Grasping Policies via Implicit Shape Augmentation
Dexterous robotic hands have the capability to interact with a wide variety of household objects to perform tasks like grasping. However, learning robust real-world grasping policies for arbitrary objects has proven challenging due to the difficulty of generating high-quality training data. In this work, we propose a learning system (ISAGrasp) that leverages a small number of human demonstrations to bootstrap the generation of a much larger dataset containing successful grasps on a variety of novel objects. Our key insight is to use a correspondence-aware implicit generative model to deform object meshes and demonstrated human grasps in order to generate a diverse dataset of novel objects and successful grasps for supervised learning, while maintaining semantic realism. We use this dataset to train a robust grasping policy in simulation that can be deployed in the real world. We demonstrate grasping performance with a four-fingered Allegro hand in both simulation and the real world, and show that this method can handle entirely new semantic classes, grasping unseen objects in the real world.
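To make the pipeline above concrete, here is a minimal Python sketch of an ISAGrasp-style augmentation loop. The deformation model, grasp transfer, and grasp-success check are placeholder stand-ins (names such as deform_mesh and grasp_succeeds are invented for illustration), not the actual ISAGrasp implementation; the point is the structure: deform a demonstrated object while preserving vertex correspondences, carry the demonstrated grasp along, and keep only variants whose grasps still succeed.

# Hypothetical sketch of an ISAGrasp-style augmentation loop; the deformation
# model and simulator below are trivial placeholders, not the paper's components.
import numpy as np

def deform_mesh(vertices: np.ndarray, latent: np.ndarray) -> np.ndarray:
    """Placeholder for the correspondence-aware implicit generative model:
    returns deformed vertices in the same order, so per-vertex
    correspondences to the original mesh are preserved."""
    # A smooth, low-magnitude perturbation keeps the object semantically similar.
    return vertices + 0.01 * np.tanh(vertices @ latent.reshape(3, 3))

def transfer_grasp(contact_idx: np.ndarray, deformed_vertices: np.ndarray) -> np.ndarray:
    """Re-anchor demonstrated grasp contact points on the deformed mesh
    via the preserved vertex correspondences."""
    return deformed_vertices[contact_idx]

def grasp_succeeds(contacts: np.ndarray) -> bool:
    """Placeholder for a simulated grasp rollout (e.g. with an Allegro hand)."""
    return np.linalg.norm(contacts.mean(axis=0)) < 0.5

def augment(demos, num_variants=100, rng=np.random.default_rng(0)):
    """Grow a small set of human demonstrations into a larger labeled dataset."""
    dataset = []
    for vertices, contact_idx in demos:
        for _ in range(num_variants):
            latent = rng.normal(size=9)            # sample a shape variation
            new_verts = deform_mesh(vertices, latent)
            new_contacts = transfer_grasp(contact_idx, new_verts)
            if grasp_succeeds(new_contacts):       # keep only successful grasps
                dataset.append((new_verts, new_contacts))
    return dataset  # used for supervised training of the grasping policy

demo_mesh = np.random.default_rng(1).normal(size=(200, 3))
demos = [(demo_mesh, np.arange(4))]                # one toy demo: mesh + 4 contact vertices
print(len(augment(demos)), "augmented (object, grasp) pairs")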
Speaker Bio: Zoey Chen is a Ph.D. student in the Paul G. Allen School of Computer Science and Engineering, working with Prof. Dieter Fox and Prof. Abhishek Gupta. Her research interests are imitation learning and robot manipulation. She is particularly interested in learning to generate diverse data for robots such that learning from minimal demonstrations is possible.
====
Speaker: Adam Fishman
Title: Creating Collision-Free Motion with Data
Collision-free motion generation in unknown environments is a core building block for robot manipulation. Whatever the end-task, a robot arm should move smoothly, following a short, natural-seeming path to the target. Calculating this path must be fast enough for real-time performance and reliable enough for safe operation. And, when the environment changes, the robot should quickly adapt to avoid any new obstacles that may have appeared. In this talk, I will discuss our lab’s research into leveraging machine learning to create a single end-to-end motion generation system. Our recent publication Motion Policy Networks (MπNets) presents a neural architecture that can generate collision-free, smooth motion from just a single depth camera observation. MπNets are trained on over 3 million motion planning problems in more than 500,000 simulated environments, yet they transfer well to hardware systems with non-stationary obstacles and noisy partial point clouds.
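As a rough illustration of the closed-loop behavior described above (and not the published MπNets code or API), the sketch below assumes a hypothetical learned policy that maps a partial point cloud plus the current joint configuration to a small, bounded joint-space step, executed in a receding-horizon loop so the robot can react when obstacles move.

# Rough sketch of a closed-loop motion-policy rollout: at each step, a learned
# policy maps a partial point cloud + current joint configuration (+ goal) to a
# small joint displacement. The policy here is a placeholder; all names are
# hypothetical and do not reflect the MpiNets implementation.
import numpy as np

NUM_JOINTS = 7  # e.g. a 7-DoF arm

def get_depth_point_cloud() -> np.ndarray:
    """Placeholder for a single depth-camera observation (N x 3 points)."""
    return np.random.default_rng(0).uniform(-1, 1, size=(1024, 3))

def policy(point_cloud: np.ndarray, q: np.ndarray, q_goal: np.ndarray) -> np.ndarray:
    """Placeholder for the learned network: returns a bounded joint-space step.
    A real model would encode the point cloud and be trained on a large corpus
    of expert motion-planning solutions; here the observation is unused."""
    step = 0.05 * (q_goal - q)                     # move toward the goal...
    return np.clip(step, -0.05, 0.05)              # ...with a bounded step size

def rollout(q_start, q_goal, max_steps=200, tol=1e-2):
    """Receding-horizon execution: re-observe and re-query the policy each step,
    so the behavior can adapt when obstacles move between observations."""
    q = np.array(q_start, dtype=float)
    for _ in range(max_steps):
        cloud = get_depth_point_cloud()            # fresh partial observation
        q = q + policy(cloud, q, q_goal)
        if np.linalg.norm(q - q_goal) < tol:
            break
    return q

print(rollout(np.zeros(NUM_JOINTS), np.ones(NUM_JOINTS) * 0.5))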
Speaker Bio: Adam Fishman is a fifth-year Ph.D. student in the Allen School, co-advised by Professors Dieter Fox and Byron Boots. Adam's research uses machine learning and data to address the combined challenge of perception, planning, and control for complex long-horizon robotic tasks. His goal is to develop algorithms to make designing robotic applications simpler, easier, and faster for robot engineers and operators.
====
Speaker: Boling Yang
Title: Robot Learning in Competitive Games
Competition is one of the most common forms of human interaction, but there has been only limited discussion of competitive interaction between a robot and other embodied agents, such as another robot or even a human. In this presentation, we will share our research on robot learning in competitive settings in the context of two applications: (1) Human-Robot Interaction: a competitive robot can serve a variety of positive roles, including motivating human users and inspiring their potential in scenarios such as sports and physical exercise; (2) Dexterous Manipulation: we will discuss our ongoing efforts on robot manipulation for densely packed containers via competitive training.
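For readers unfamiliar with competitive training, the following is a schematic Python sketch of two independent learners trained against each other on a toy zero-sum game. It is purely illustrative of learning through competition and is not the lab's actual training setup, environments, or algorithms.

# Schematic two-agent competitive training loop: independent REINFORCE-style
# learners in self-play on a zero-sum matrix game (matching pennies).
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[1, -1], [-1, 1]])    # agent 0 wins when the actions match

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

logits = [np.zeros(2), np.zeros(2)]      # one policy per agent
lr = 0.1
for step in range(5000):
    probs = [softmax(l) for l in logits]
    a0 = rng.choice(2, p=probs[0])
    a1 = rng.choice(2, p=probs[1])
    r0 = PAYOFF[a0, a1]                  # zero-sum: agent 1 receives -r0
    # Policy-gradient update for each agent against the other's current policy.
    for i, (a, r) in enumerate([(a0, r0), (a1, -r0)]):
        grad = -probs[i]                 # d/dlogits of log pi(a) under softmax
        grad[a] += 1.0
        logits[i] += lr * r * grad

# Both policies should hover near the 50/50 mixed equilibrium of the game.
print("agent 0 policy:", softmax(logits[0]))
print("agent 1 policy:", softmax(logits[1]))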
Speaker Bio: Boling Yang is a Ph.D. student in the Paul G. Allen School of Computer Science and Engineering, co-advised by Prof. Joshua Smith and Prof. Byron Boots. His current research focuses on learning strategic and agile robot behaviors via multi-agent reinforcement learning. Previously, Boling received his B.S. and M.S. degrees in Electrical Engineering from the University of Washington.
====
Speaker: Vinitha Ranganeni
Title: Accessible Teleoperation Interfaces for Assistive Robots
Four million people in the U.S. currently have an independent living disability. Deploying teleoperated assistive robots in the home can enable people with motor impairments to complete a range of activities of daily living (ADLs) more independently. Our goal is to create interfaces that simplify teleoperation. However, creating a single interface that accommodates all mobility impairments and preferences is difficult. In this work, we explore how customization can help people with mobility impairments adapt the interface to their needs and preferences. Additionally, we present a case study in which we deployed the robot in an end user's home.
Speaker Bio: Vinitha Ranganeni is a PhD student in the Paul G. Allen School of Computer Science and Engineering advised by Maya Cakmak. Vinitha's research interests lie in human-robot interaction and assistive robots. More specifically, she is interested in building robotic systems that are accessible to people with different abilities and can be deployed in real homes in the future.
====