PhD Proposal: From Demonstration to Dynamic Interaction: Enabling Long-Term Robotic Planning

Talk
Mara Levy
Time: 
08.21.2024 10:30 to 12:30
Location: 

IRB 4105

Research in robotic learning has exploded over the last decade. As machine learning techniques have improved, the possibility of making robots work in the real world has grown. Research in robotic learning is typically concentrated in two fields: reinforcement learning and imitation learning. Both of these approaches are plagued by issues ranging from data availability to accurate state representation. This thesis will focus on how we can push these methods toward working in the wild. To start, we will discuss how we can redefine state representation. While this work focuses on representing human state, similar techniques can be used for robot state. Additionally, this method could be used in the future for equating human state to robot state. Our results show a significant improvement over current methods when generalizing to unseen states and camera viewpoints. Next, we will focus on how to learn a generalized task from a single demonstration without requiring hand-crafted reward functions. Despite using 100x less data than other approaches, our method achieves the same final performance level. Finally, we will show how these methods can be deployed in a dynamic world despite being trained in a static environment. By implementing a simple planner on top of a pretrained policy, we show a significant improvement over the brute-force approach and come close to the oracle success rate.
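As an illustration of the last idea (a lightweight planner layered over a frozen, pretrained policy), here is a minimal sketch in Python. The class `Policy`, the function `plan_and_execute`, the toy point environment, and the nearest-subgoal heuristic are all hypothetical placeholders chosen for the example, not the actual method or environment described in the proposal.

```python
"""Minimal sketch: a greedy high-level planner on top of a frozen,
pretrained low-level policy. All names here are illustrative assumptions."""

import numpy as np


class Policy:
    """Stand-in for a pretrained goal-conditioned policy (frozen at deployment)."""

    def act(self, obs: np.ndarray, goal: np.ndarray) -> np.ndarray:
        # Placeholder controller: a real policy would be a learned network.
        return np.clip(goal - obs, -0.1, 0.1)


def plan_and_execute(obs, subgoals, policy, env_step, tol=0.05, max_steps=200):
    """Greedily pick the nearest unreached subgoal and let the pretrained
    policy drive toward it; replan whenever a subgoal is reached."""
    remaining = list(subgoals)
    for _ in range(max_steps):
        if not remaining:
            return obs, True  # all subgoals reached
        # Planning step: choose the closest remaining subgoal rather than
        # brute-forcing a fixed order.
        idx = min(range(len(remaining)),
                  key=lambda i: np.linalg.norm(remaining[i] - obs))
        goal = remaining[idx]
        obs = env_step(obs, policy.act(obs, goal))
        if np.linalg.norm(goal - obs) < tol:
            remaining.pop(idx)
    return obs, False


if __name__ == "__main__":
    # Toy 2-D point environment standing in for a real robot setup.
    step = lambda obs, action: obs + action
    start = np.zeros(2)
    subgoals = [np.array([1.0, 0.0]), np.array([0.5, 1.0])]
    final_obs, success = plan_and_execute(start, subgoals, Policy(), step)
    print("reached all subgoals:", success, "final state:", final_obs)
```

The sketch only shows the structure of the idea: the planner sequences subgoals while the pretrained policy handles low-level control, which is how a policy trained in a static setting could be redirected as the world changes.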