EECS Rising Stars 2023

Angelina Wang

Sociotechnically Grounded Responsible Machine Learning

Research Abstract:

With the widespread proliferation of machine learning come both opportunities for societal benefit and risks of harm. Confronting this problem thoughtfully requires sociotechnically grounded approaches. This is difficult: technical approaches tend to overfit to mathematical definitions of fairness that may correspond poorly to the real-world construct of fairness, while normative approaches may be too abstract to translate well into practice. In my research, I approach complex ML fairness problems in sociotechnically grounded ways: I re-orient technical approaches in machine learning fairness to align more closely with normative goals, collaborate across disciplines to draw on the long history of studying these problems outside of computer science, and engage with practitioners and real-world deployments of AI.

Bio:

Angelina Wang is a PhD student at Princeton University advised by Olga Russakovsky. Her research is in the area of machine learning fairness and algorithmic bias. Her work has been published in venues including ICML and AAAI (machine learning), ICCV and IJCV (computer vision), FAccT and JRC (responsible computing), and Big Data & Society (interdisciplinary), and she has received the National Science Foundation Graduate Research Fellowship. Previously, she interned with Microsoft Research’s FATE (Fairness, Accountability, Transparency & Ethics in AI) team and received a B.S. in Electrical Engineering and Computer Science from UC Berkeley.