Samuel Lippl
Affiliation
Center for Theoretical Neuroscience, Columbia University
Talk Title
When does compositional structure yield compositional generalization? A kernel theory
Abstract
Compositional generalization (the ability to respond correctly to novel combinations of familiar components) is thought to be a cornerstone of intelligent behavior. Compositionally structured (e.g. disentangled) representations support this ability; however, the conditions under which they are sufficient for the emergence of compositional generalization remain unclear. To address this gap, we present a theory of compositional generalization in kernel models with fixed, compositionally structured representations. This provides a tractable framework for characterizing the impact of training data statistics on generalization. We find that these models are limited to functions that assign values to each combination of components seen during training, and then sum up these values ("conjunction-wise additivity"). This imposes fundamental restrictions on the set of tasks compositionally structured kernel models can learn, in particular preventing them from transitively generalizing equivalence relations. Even for compositional tasks that they can learn in principle, we identify novel failure modes in compositional generalization (memorization leak and shortcut bias) that arise from biases in the training data. Finally, we empirically validate our theory, showing that it captures the behavior of deep neural networks (convolutional networks, residual networks, and Vision Transformers) trained on a set of compositional tasks with similarly structured data. Ultimately, this work examines how statistical structure in the training data can affect compositional generalization, with implications for how to identify and remedy failure modes in deep learning models.
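The "conjunction-wise additivity" restriction described above can be illustrated with a toy example (this is an illustrative sketch, not the paper's code; the encoding, task, and variable names are my own assumptions). A linear model on a disentangled representation (concatenated one-hot codes, one per component) can only express functions of the form f(a, b) = v1(a) + v2(b), and for an additive target it generalizes correctly to a held-out combination of familiar components:

```python
import numpy as np

# Disentangled representation: one one-hot code per component, concatenated.
def encode(a, b, n=3):
    e = np.zeros(2 * n)
    e[a] = 1.0       # code for the first component
    e[n + b] = 1.0   # code for the second component
    return e

# Train on all combinations except the held-out pair (2, 2).
train = [(a, b) for a in range(3) for b in range(3) if (a, b) != (2, 2)]

# An additive target: y = a + 10*b (expressible as v1(a) + v2(b)).
X = np.array([encode(a, b) for a, b in train])
y = np.array([a + 10 * b for a, b in train])

# Least-squares fit, i.e. linear regression on the fixed representation
# (equivalently, kernel regression with a linear kernel).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The model generalizes compositionally to the unseen combination (2, 2):
pred = encode(2, 2) @ w
print(round(float(pred), 3))  # -> 22.0, i.e. 2 + 10*2

# A non-additive target (e.g. an XOR- or equivalence-style task) cannot be
# written as v1(a) + v2(b), so no weight vector fits it exactly -- the
# restriction the theory formalizes.
```

Note that although the one-hot design matrix is rank-deficient, every least-squares solution makes the same prediction at (2, 2), so the compositional generalization here does not depend on the particular solution chosen.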
Bio
I am a PhD student at the Center for Theoretical Neuroscience at Columbia University, advised by Kimberly Stachenfeld and Larry Abbott. My research lies at the intersection of theoretical neuroscience and machine learning. On the neuroscience side, I study how humans and animals use compositional and structural rules to generalize their behavior to situations they have never experienced before. On the machine learning side, I investigate how artificial learning systems perform when facing such novel situations, where they fall short, and how we can improve their performance. I pursue these questions using a mixture of theoretical and empirical approaches, in particular drawing on methods from deep learning theory.