EECS Rising Stars 2023




Zhe Zeng

Probabilistic Reasoning and Learning with Symbolic Knowledge



Research Abstract:

To deploy AI systems effectively and reliably in real-world scenarios, it is crucial for models to seamlessly integrate domain-specific knowledge, a capability that remains elusive for most contemporary machine learning methods. My research goal is to power next-generation machine learning models by integrating symbolic knowledge, work that lies at the intersection of AI and formal methods. My methodology is to represent knowledge in the form of constraints and to systematically incorporate diverse forms of constraints into probabilistic reasoning and learning, addressing two fundamental challenges in current research: 1) the non-differentiability of constraints, due to their combinatorial nature, prevents their integration into gradient-based training; 2) the intractability of computing constraint probabilities poses computational challenges for inference. I have developed general-purpose algorithms for end-to-end training under constraints that leverage constraint probabilities, and I have deployed them in scientific applications. Further, my research has studied the theoretical foundations of tractable reasoning and pioneered a framework, called weighted model integration, that supports advanced reasoning under expressive constraints. My future research aims to enable both the injection of knowledge into machine learning models and, conversely, the extraction of knowledge from data by models, through rigorous mathematical study and fundamental improvements to inference and learning, and to broaden their impact on scientific discovery.
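The two challenges above can be made concrete with a minimal toy sketch (illustrative only, not the author's actual algorithms; all function names here are hypothetical). The first two functions treat the probability that a symbolic constraint holds, computed from a model's predicted marginals, as a differentiable training signal; the third shows the weighted-model-integration idea in its simplest form, computing a constraint probability over a continuous variable by integrating a density over the constraint's satisfying region.

```python
import math

def constraint_probability(p_a, p_b):
    # P("exactly one of A, B is true") under independent
    # marginals p_a = P(A), p_b = P(B): a closed-form,
    # differentiable function of the model's outputs.
    return p_a * (1.0 - p_b) + (1.0 - p_a) * p_b

def semantic_loss(p_a, p_b):
    # Negative log-probability of the constraint; because it is
    # differentiable in (p_a, p_b), it can be added to any
    # gradient-based training objective to penalize predictions
    # that are unlikely to satisfy the constraint.
    return -math.log(constraint_probability(p_a, p_b))

def wmi_uniform(a, b):
    # Weighted model integration in miniature: P(a < x < b) for x
    # with a uniform weight (density) on [0, 1], obtained by
    # integrating the weight over the constraint's satisfying set.
    lo, hi = max(a, 0.0), min(b, 1.0)
    return max(hi - lo, 0.0)
```

Predictions that tend to satisfy the constraint incur a lower loss: `semantic_loss(0.9, 0.1)` is smaller than `semantic_loss(0.5, 0.5)`. In the literature, such constraint probabilities are computed for far richer settings, with general logical formulas and general densities handled by dedicated compilation and integration techniques rather than the closed forms used here.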

Bio:

Zhe Zeng is a final-year Ph.D. student in Computer Science at the University of California, Los Angeles, advised by Professor Guy Van den Broeck. Her research interests lie in artificial intelligence and machine learning broadly, with a particular focus on probabilistic machine learning and neuro-symbolic AI. Her work investigates how to enable machine learning models to incorporate symbolic knowledge into probabilistic reasoning and learning in a principled way. She received an Amazon Doctoral Student Fellowship in 2022 and an NEC Doctoral Student Fellowship in 2021. Previously, she interned at IBM Research Center and Yahoo! Research.