EECS Rising Stars 2023

Anqi Mao

H-consistency bounds: Theoretical analysis and applications



Research Abstract:

The loss functions optimized by learning algorithms are often distinct from the original loss specified for a task, typically because optimizing the original loss directly is computationally intractable. What non-asymptotic guarantees can we rely on when minimizing a surrogate loss over a restricted hypothesis set, such as a family of linear models or neural networks? 1) We introduced and derived H-consistency bounds, novel non-asymptotic learning guarantees for binary classification that account for the hypothesis set H adopted and are more significant and informative than the Bayes-consistency guarantees in the existing literature. 2) We gave a series of new H-consistency bounds for multi-class surrogate losses, including max losses, sum losses, constrained losses, and comp-sum losses. 3) We provided general characterizations and extensions of H-consistency bounds. 4) We also developed hypothesis set-dependent consistency theory in other settings, including adversarial robustness, ranking, structured prediction, and learning with abstention and deferral.
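To make the central notion concrete, an H-consistency bound typically takes the following form (a minimal sketch in the spirit of this line of work rather than the exact statement of any one result; here ℓ is a surrogate loss, ℓ_{0-1} the target zero-one loss, and Γ a non-decreasing function, with the minimizability-gap terms that appear in the general statements omitted for brevity): for all h in H,

\[
\mathcal{E}_{\ell_{0\text{-}1}}(h) - \inf_{h' \in H} \mathcal{E}_{\ell_{0\text{-}1}}(h')
\;\le\;
\Gamma\!\Big( \mathcal{E}_{\ell}(h) - \inf_{h' \in H} \mathcal{E}_{\ell}(h') \Big),
\]

where \mathcal{E}_{\ell}(h) denotes the expected ℓ-loss of h. Because both excess errors are measured relative to the best-in-class error over H rather than the Bayes error, such a bound remains informative for restricted hypothesis sets like linear models or neural networks.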

Bio:

Anqi Mao is a Ph.D. candidate in Mathematics at the Courant Institute of Mathematical Sciences, New York University, advised by Prof. Mehryar Mohri. Her main research interests lie in fundamental learning theory and algorithms. She has worked on several areas, including consistency theory, adversarial robustness, differential privacy, learning with abstention and deferral, ranking, and structured prediction, with the aim of using theoretical insights to develop principled algorithms for real-world problems. She has also worked at Google Research as a research intern (Summer 2021) and as a student researcher (Fall 2021 - Spring 2024). She is honored to have received the EECS Rising Stars recognition, the Sandra Bleistein Prize, and the MacCracken GSAS Doctoral Fellowship.