EECS Rising Stars 2023

Isabel Papadimitriou

Understanding large language models



Research Abstract:

Large language models exhibit remarkable language learning capabilities, and as language models gain more social power it’s crucial for us to understand how these capabilities emerge. I develop diverse new methods to uncover how language models can learn complex, abstract structures from self-supervised training, and empirically demonstrate what this means for the linguistic abilities and limitations of language models. My methodologies are based on two hypotheses: (1) abstract structure transcends modalities (such as language, music, and code), and (2) abstract structure is shared between different natural languages. My experiments test cross-modal and cross-linguistic transfer in language models, and reveal how models learn complex abstractions from self-supervised training on their data. My approach is interdisciplinary, leveraging mathematical, computational, and cognitive theories of structure and learning. I developed the methodology of cross-modal structural transfer, showing that language models can learn structure that transcends a single modality. With controlled experiments that disentangle the surface forms of language, music, code, and formal languages, I show that language models can utilize structure from one modality in order to model another (EMNLP 2020, EMNLP 2023). I further developed cross-lingual analysis methodologies to examine the representation of syntactic structure in language models, showing how models represent system-wide structural differences between languages (EACL 2021). I demonstrated how models’ use of these structural systems allows them to produce coherent novel outputs (ACL 2022), as well as how models are biased towards representing the syntax of English, which is dominant in their training data (EACL 2023). My research aims to utilize cognitive and language-scientific analyses to understand language models, and also to utilize the experimental breadth afforded by language models to understand the structure and representation of human language.
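
To make the cross-modal structural-transfer setup concrete, below is a minimal, hypothetical sketch of what such an experiment could look like: a small causal language model is pretrained on a synthetic structured corpus (nested brackets standing in for a non-linguistic modality such as music or code), its transformer body is then frozen, and only the embeddings and output head are re-trained on a second corpus standing in for natural language. The model sizes, data choices, and frozen-transfer protocol here are illustrative assumptions, not the published experimental code from the papers cited above.

# Hedged sketch of a cross-modal structural-transfer experiment.
# All corpus choices, model sizes, and the frozen-transfer protocol
# are illustrative assumptions, not the author's actual setup.

import math
import random
import torch
import torch.nn as nn

VOCAB_SIZE = 128   # assumed shared vocabulary size across "modalities"
D_MODEL = 64
SEQ_LEN = 32
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"


def make_bracket_batch(batch_size):
    """Synthetic 'formal language' data: random nested bracket sequences.
    Stands in for a structured non-linguistic modality (music, code, ...)."""
    seqs = []
    for _ in range(batch_size):
        seq, depth = [], 0
        while len(seq) < SEQ_LEN:
            if depth == 0 or (random.random() < 0.5 and depth < 8):
                seq.append(1)      # token id 1 = "("
                depth += 1
            else:
                seq.append(2)      # token id 2 = ")"
                depth -= 1
        seqs.append(seq)
    return torch.tensor(seqs, device=DEVICE)


def make_language_batch(batch_size):
    """Placeholder 'natural language' data: random tokens from a larger range.
    In a real experiment this would be tokenized text."""
    return torch.randint(3, VOCAB_SIZE, (batch_size, SEQ_LEN), device=DEVICE)


class TinyLM(nn.Module):
    """A small causal transformer language model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, x):
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.encoder(self.embed(x), mask=mask)
        return self.head(h)


def lm_loss(model, batch):
    # Next-token prediction loss: predict token t+1 from tokens up to t.
    logits = model(batch[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), batch[:, 1:].reshape(-1))


def train(model, batch_fn, params, steps=200):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        loss = lm_loss(model, batch_fn(32))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()


model = TinyLM().to(DEVICE)

# Phase 1: pretrain on the structured non-linguistic modality.
train(model, make_bracket_batch, model.parameters())

# Phase 2: freeze the transformer body; re-learn only embeddings and the
# output head on the "language" data, so any gain must come from structure
# transferred during pretraining rather than from new transformer weights.
for p in model.encoder.parameters():
    p.requires_grad = False
trainable = list(model.embed.parameters()) + list(model.head.parameters())
final_loss = train(model, make_language_batch, trainable)
print(f"frozen-transfer language loss: {final_loss:.3f} "
      f"(perplexity ~ {math.exp(final_loss):.1f})")

In a real version of this controlled comparison, the natural baseline would be the same frozen-transfer protocol run on a randomly initialized (or differently pretrained) encoder, so that any perplexity difference can be attributed to structure carried over from the pretraining modality rather than to the embedding re-training itself.
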

Bio:

Isabel Papadimitriou is a final-year PhD student in the Stanford Natural Language Processing Group, where she is advised by Dan Jurafsky. She works on understanding language learning and representation in humans and machines. Her research focuses on understanding how large language models learn language structure from self-supervised training, and on utilizing large language models to empirically understand human language learning and representation. She has received the NSF Graduate Research Fellowship and the Stanford Graduate Fellowship.