What Makes Models Compositional? A Theoretical View

Authors: Parikshit Ram, Tim Klinger, Alexander G. Gray

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we seek to theoretically understand the role the compositional structure of the models plays in these failures and how this structure relates to their expressivity and sample complexity. We propose a general neuro-symbolic definition of compositional functions and their compositional complexity. We then show how various existing general and special purpose sequence processing models (such as recurrent, convolutional and attention-based ones) fit this definition and use it to analyze their compositional complexity. Finally, we provide theoretical guarantees for the expressivity and systematic generalization of compositional models that explicitly depend on our proposed definition, highlighting factors which drive poor empirical performance.
Researcher Affiliation | Collaboration | Parikshit Ram (IBM Research), Tim Klinger (IBM Research), and Alexander G. Gray (Centaur AI Institute; Purdue University); parikshit.ram@ibm.com, tklinger@us.ibm.com, skirmilitor@gmail.com
Pseudocode | No | The paper uses figures to illustrate cDAGs and defines functions mathematically, but it does not provide structured pseudocode or algorithm blocks (an illustrative sketch of evaluating such a DAG of component functions follows this table).
Open Source Code | No | The paper states: 'while the supplementary material for this submission [Ram et al., 2024] can be found at https://www.arxiv.org/abs/2405.02350.' This links to a preprint, not a code repository, and there is no explicit statement about releasing source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not conduct empirical experiments that involve training on a dataset. It mentions benchmarks (e.g., SCAN, CFQ, COGS) in the introduction for context, but does not use them in its own analysis or provide dataset access information.
Dataset Splits | No | The paper is theoretical and conducts no empirical experiments, so no training/validation/test splits are reported.
Hardware Specification | No | The paper is theoretical and reports no experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and reports no implementations or experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters or training configurations.
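For readers unfamiliar with the paper's formalism, the following is a minimal sketch, not taken from the paper, of the general idea behind a compositional function: a DAG whose nodes are component functions, evaluated in topological order. The function name evaluate_cdag, the node names, the toy components, and the use of Python's graphlib are illustrative assumptions, not the authors' formal definition or code.

# Minimal sketch (assumed, not from the paper): a compositional function as a
# DAG of component functions, evaluated in topological order.
from graphlib import TopologicalSorter
from typing import Any, Callable, Dict, List

def evaluate_cdag(
    components: Dict[str, Callable[..., Any]],  # node -> component function
    parents: Dict[str, List[str]],              # node -> ordered input nodes
    inputs: Dict[str, Any],                     # values for source nodes
) -> Dict[str, Any]:
    """Compose the component functions along the DAG; return every node's value."""
    values = dict(inputs)
    # graphlib expects a mapping from node to its set of predecessors.
    order = TopologicalSorter({node: set(preds) for node, preds in parents.items()})
    for node in order.static_order():
        if node in values:  # source node: value supplied directly
            continue
        args = [values[p] for p in parents[node]]
        values[node] = components[node](*args)
    return values

# Toy usage: f(x, y) = g(h(x), y) with h(x) = x + 1 and g(a, b) = a * b.
components = {"h": lambda x: x + 1, "g": lambda a, b: a * b}
parents = {"x": [], "y": [], "h": ["x"], "g": ["h", "y"]}
print(evaluate_cdag(components, parents, {"x": 2, "y": 5}))
# -> {'x': 2, 'y': 5, 'h': 3, 'g': 15}

The sketch only conveys the shared intuition that a compositional model wires together simpler component functions along a DAG; the paper's actual definition, its notion of compositional complexity, and its expressivity and generalization guarantees are stated mathematically and are not reproduced here.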