EECS Rising Stars 2023




Zhijing Jin

Causal Inference for Robust, Reliable, and Responsible NLP



Research Abstract:

Despite the remarkable progress in large language models (LLMs), it is well known that natural language processing (NLP) models tend to fit spurious correlations, which can lead to unstable behavior under domain shifts or adversarial attacks. In my research, I develop a causal framework for robust and fair NLP, which investigates how well the decision-making mechanisms of models align with the causal structure of human decision-making. I develop a suite of stress tests for NLP models across various tasks, such as text classification, natural language inference, and math reasoning. To achieve more robust models, I propose techniques that align the learning direction with the underlying data-generating direction, and that ensure faithfulness to the correct causal model by enforcing constraints such as conditional independence. Furthermore, beyond technical causal NLP research, I extend the impact of NLP by applying it to analyze the causality behind phenomena important for our society, such as the causal analysis of policies and the measurement of gender bias. Together, this work weaves a roadmap towards socially responsible NLP by ensuring the reliability of models and broadening their impact across social applications.

Bio:

Zhijing Jin (she/her) is a Ph.D. student at the Max Planck Institute & ETH. Her research focuses on socially responsible NLP through causal inference. Specifically, she works on expanding the impact of NLP by promoting NLP for social good, and on developing CausalNLP to improve the robustness, fairness, and interpretability of NLP models, as well as to analyze the causes of social problems. She has published at many NLP and AI venues (e.g., ACL, EMNLP, NAACL, NeurIPS, AAAI, AISTATS). Her work has been featured in MIT News, ACM TechNews, and Synced. She is actively involved in AI for social good, as an organizer of the NLP for Positive Impact Workshops at ACL 2021, EMNLP 2022, and EMNLP 2024, the Moral AI Workshop at NeurIPS 2023, and the RobustML Workshop at ICLR 2021. To support the NLP research community, she organizes the ACL Year-Round Mentorship Program. To foster the causality research community, she organized the Tutorial on CausalNLP at EMNLP 2022 and served as the Publications Chair for the 1st Conference on Causal Learning and Reasoning (CLeaR). More information can be found on her personal website: zhijing-jin.com