COAT: Measuring Object Compositionality in Emergent Representations

Authors: Sirui Xie, Ari S. Morcos, Song-Chun Zhu, Ramakrishna Vedantam

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on the popular CLEVR (Johnson et al., 2018) domain reveal that existing disentanglement-based generative models are not as compositional as one might expect, suggesting room for further modeling improvements."
Researcher Affiliation | Collaboration | (1) Department of Computer Science, UCLA; (2) Fundamental AI Research (FAIR, Meta Inc.); (3) Department of Statistics, UCLA
Pseudocode | No | The paper describes methods and algorithms in narrative text, such as the 'Greedy Matching Algorithm,' but does not present any content explicitly labeled as 'Pseudocode' or 'Algorithm,' nor does it include structured code blocks (a hypothetical sketch of such a greedy matching step appears after this table).
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "Our experiments on the popular CLEVR (Johnson et al., 2018) domain"
Dataset Splits | No | The paper mentions an 'IID dataset' and a 'highly correlated training dataset' and refers to a 'Train Set' in Table 2, but does not give specific split details (e.g., percentages or sample counts) for training, validation, or testing.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud computing specifications used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x), which would be necessary to replicate the environment.
Experiment Setup | Yes | "All models are trained with the default architectures and hyperparameters except that in β-TCVAE we use latent dimension 256, and use the same encoder for MoCo." (A hedged configuration sketch appears after the table.)