EECS Rising Stars 2023




Ran Liu

Generalizable and Interpretable Representation Learning for Neuroscience



Research Abstract:

Machine learning (ML) methods have significantly advanced the field of neuroscience. However, I believe that conventional ML methods, which are tailored to specific populations and tasks, are no longer adequate for comprehending large-scale, multimodal, and multitask neural data. My research focuses on building next-generation ML tools for neuroscience that are generalizable and interpretable. Robust representation learning strategies for neuroscience should possess two attributes: generalizability, enabling applications across diverse subjects and tasks; and interpretability, facilitating seamless integration with domain knowledge. With these aspects in focus, my previous research has spanned from investigating functional communication to exploring the structural composition of the brain through neural activity and neuroimaging analysis. I initially delved into developing methods that can effectively adapt to diverse subjects and tasks. To achieve this, I built generative models that directly model population- and subject-level variation within neural data, enabling transfer across subjects. I further explored label-free learning approaches to build self-supervised frameworks that excel across a diverse range of downstream tasks. In collaboration with neuroscientists, I investigated how to effectively incorporate domain knowledge into ML architectures. With this objective, I developed frameworks that interpret regions of interest across multiple scales, and designed architectures that directly decode neuron subgroups to learn from a compositional standpoint. Looking ahead, the scalability of large-scale models opens up unprecedented possibilities for novel neural applications such as brain-computer interfaces. To achieve this, I will investigate generalizable systems that bridge brain activity with various modalities, and explore better test-time alignment techniques to interpret large-scale models for neural tasks. My research endeavors will foster transformative outcomes for both the neuroscience and ML communities, driving the progress of ML in scientific pursuits.

Bio:

Ran Liu is a fifth-year PhD candidate in the Machine Learning Program at Georgia Tech. She conducts her research in the Neural Data Science Lab under the guidance of Prof. Eva L. Dyer. Ran's research interests lie at the intersection of machine (deep) learning and computational neuroscience. She is driven by a curiosity to develop generalizable and interpretable deep learning tools that accelerate scientific discovery, as well as to understand brain organization and function in order to create more flexible AI systems. Her work has appeared at several top machine learning conferences, including oral presentations at NeurIPS. She is also a recipient of the Cox Fellowship and the NSF CloudBank Fund. During her PhD, she has held internships at Apple AIML Research, Cajal Neuroscience, and Meta.