EECS Rising Stars 2023




Berivan Isik

Private and Communication-Efficient Federated Learning



Research Abstract:

In recent years, we have witnessed the success of artificial intelligence and machine learning across a variety of applications, such as face recognition, language processing, and autonomous driving. Two major factors behind this success are the increasingly complex, over-parameterized nature of deep neural network models and the availability of tremendous amounts of user data. This raises growing concerns about (1) the training, inference, and storage of over-parameterized models on edge devices with limited compute, memory, and bandwidth, and (2) the privacy of the users who contribute their personal data to these models. I will discuss some of my results on making decentralized learning private, communication-efficient, and compute-efficient.

Bio:

Berivan Isik is a PhD student at Stanford University, co-advised by Tsachy Weissman and Sanmi Koyejo. Her research focuses on scalable and trustworthy machine learning, federated learning, model compression, differential privacy, and information theory. She has been a research intern at Google three times, an applied scientist intern at Amazon, and a visiting researcher in Nicolas Papernot’s lab at the Vector Institute for Artificial Intelligence. She has organized three ICML workshops: the ICML-21 Workshop on Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning; the ICML-21 Women in Machine Learning Workshop; and the ICML-23 Workshop on Neural Compression: From Information Theory to Applications. She is the recipient of a three-year Stanford Graduate Fellowship and a three-year Google PhD Fellowship, and she was selected as an EECS Rising Star in 2023.