EECS Rising Stars 2023




Anqi Li

Exploiting Structure in Learning: A Path Toward Building Safe and Adaptive Robots



Research Abstract:

My research goal is to build robots that have safety and performance guarantees. The main perspective I take is to leverage various types of structure in learning, ranging from explicit domain knowledge, such as dynamics, success conditions, and failure modes, to hidden structure in data. These structures can provide valuable insight into solving the underlying problem, but they cannot be directly used by most existing learning algorithms. My research has explored several types of structure that can help robots learn better. First, I show that known problem structure, such as dynamics and task decompositions, can be encoded in policy classes for learning, giving robots the ability to admit formal safety guarantees, learn efficiently, and generalize well. Second, by reasoning about the structure of data in robotics problems, in terms of what information the data provides, we can discover new learning formulations and algorithms that have theoretical guarantees and work well empirically. Finally, I show that robotics data can have implicit structure inherited from its collection process, and that we can leverage this implicit structure to achieve properties that sound almost too good to be true. For example, we can learn good policies using offline reinforcement learning without a reward signal, and we can learn inherently safe policies for robots without seeing any unsafe data.

Bio:

Anqi Li is a Ph.D. student in the Paul G. Allen School of Computer Science and Engineering (CSE) at the University of Washington. She moved to the University of Washington from the Georgia Institute of Technology, where she was a Robotics Ph.D. student, and she received her Master's degree in Robotics from Carnegie Mellon University. Her research focuses on bringing formal performance guarantees and sample efficiency to robot learning. Her specific research topics include offline reinforcement learning (RL), safe RL, learning stable policies, learning from human demonstrations, and planning and control with guarantees. Her research has been supported by the NVIDIA Graduate Fellowship and the Siebel Scholarship. She was selected as an RSS Pioneer and was a recipient of the 2022 RA-L Best Paper Award.