EECS Rising Stars 2023




Ira Globus-Harris

Reframing Fairness: A paradigm for fair, accurate, and dynamically updatable model development



Research Abstract:

Within the responsible AI community, tools for mitigating model underperformance on disadvantaged groups have largely framed fairness as a constraint-based optimization problem: for example, requiring that a model have the same false negative rate across different subgroups of individuals. While this framework has led to a plethora of techniques, it also pits fairness against accuracy, and it does not fully address how unfairness might best be mitigated when the groups in question are not easily identifiable from the data, have inherently different error rates given the available features, intersect with one another, or when fairness concerns are only identified after a model's deployment. My work reframes the goals of fairness to account for these limitations. It proposes that fairness and accuracy need not be in tension with one another, and that in many cases a fair model ought to perform as well as possible on identifiable subgroups. It provides methods to dynamically improve models when user communities identify fairness concerns, and it demonstrates the efficacy of this framework in a medium-scale deployment. Because models are often deployed downstream in varying use cases, I develop tools for training flexible models that can later be deployed in a variety of contexts with different fairness objectives. Finally, I propose algorithmic techniques for identifying underperforming subgroups and improving performance on them, and in doing so demonstrate technical connections between multicalibration and boosting.
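To make the "improve on identifiable subgroups" idea concrete, the sketch below shows one way such an update rule could work: if a proposed group and candidate model are shown to beat the current model on that group (on held-out data), the current model is patched to defer to the candidate on that group, so subgroup improvements never come at the cost of overall accuracy on the evaluation data. This is a minimal illustration under assumed conventions, not the published implementation; names such as propose_update and group_fn are hypothetical.

```python
import numpy as np

def propose_update(current_model, candidate_model, group_fn, X, y):
    """If candidate_model beats current_model on the group selected by
    group_fn (evaluated on held-out data X, y), return a patched model
    that defers to the candidate on that group; otherwise return
    current_model unchanged. Illustrative sketch only."""
    in_group = np.array([group_fn(x) for x in X], dtype=bool)
    if not in_group.any():
        return current_model  # group is empty on this data; nothing to check

    # Compare error rates on the identified subgroup only.
    current_err = np.mean(current_model(X[in_group]) != y[in_group])
    candidate_err = np.mean(candidate_model(X[in_group]) != y[in_group])
    if candidate_err >= current_err:
        return current_model  # no demonstrated improvement; reject the update

    def patched(X_new):
        # Use the current model everywhere except on the identified group,
        # where the candidate has demonstrated better accuracy.
        preds = np.asarray(current_model(X_new)).copy()
        mask = np.array([group_fn(x) for x in X_new], dtype=bool)
        if mask.any():
            preds[mask] = candidate_model(X_new[mask])
        return preds

    return patched

# Toy usage (models as plain functions over feature arrays):
# f = lambda X: np.zeros(len(X), dtype=int)        # current model
# h = lambda X: (X[:, 0] > 0.5).astype(int)        # proposed candidate
# g = lambda x: x[1] == 1                          # proposed group indicator
# f = propose_update(f, h, g, X_holdout, y_holdout)
```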

Bio:

Ira Globus-Harris (they/them) is a fourth-year doctoral student in computer and information science at the University of Pennsylvania, advised by Michael Kearns and Aaron Roth. Their work focuses on algorithmic techniques for mitigating the potential negative societal impacts of machine learning, particularly with respect to fairness. Using tools from learning theory and statistics, they reframe algorithmic fairness in order to circumvent the standard fairness-accuracy tradeoffs in the literature. This work, which provides flexible techniques for incorporating fairness concerns after a model's deployment, has included multiple collaborations with Amazon Web Services. Prior to graduate school, they worked at Boston University as a software engineer for the Harvard Privacy Tools Project and on the deployment of secure multiparty computation with the Boston Women's Workforce Council to calculate gender and racial wage gaps. They received their bachelor of arts in mathematics and computer science from Reed College.