Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Properties from mechanisms: an equivariance perspective on identifiable representation learning

Authors: Kartik Ahuja, Jason Hartford, Yoshua Bengio

ICLR 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present our initial experiments on the 3DIdent dataset from [Zimmermann et al., 2021], using the contrastive loss described above. With contrastive pairs generated by a (fixed) random orthogonal matrix U applied to the latents, we obtain the following values for the linear disentanglement score (R2 of the predictions of the true representation using a linear model). We report median scores over 10 seeds."
Researcher Affiliation | Academia | Kartik Ahuja, Jason Hartford & Yoshua Bengio, Mila - Quebec AI Institute, Université de Montréal, Quebec, Canada
Pseudocode | No | The paper includes mathematical formulations and proofs but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions experiments conducted on the '3DIdent dataset from [Zimmermann et al., 2021]' but does not state that the authors' own code is open source or provide a link to it.
Open Datasets | Yes | "We present our initial experiments on the 3DIdent dataset from [Zimmermann et al., 2021]"
Dataset Splits | No | The paper reports 'median scores over 10 seeds' but does not specify training, validation, or test splits (e.g., percentages or sample counts).
Hardware Specification | No | The paper does not specify hardware details (e.g., GPU models, CPU types, or memory) used to run the experiments.
Software Dependencies | No | The paper does not list any software dependencies with version numbers.
Experiment Setup | No | While the paper discusses loss functions (Section A.12), it does not provide experiment setup details such as hyperparameter values (learning rate, batch size, number of epochs) or model initialization for the preliminary experiments.
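For concreteness, the "linear disentanglement score" quoted in the Research Type row (R2 of a linear model predicting the true latents from the learned representation) can be sketched as below. This is an illustrative reconstruction using scikit-learn, not the authors' implementation; the toy data, dimensions, and function name are assumptions.

```python
# Hedged sketch of a linear disentanglement score: fit a linear model from the
# learned representation to the true latents and report the R2 of its predictions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score


def linear_disentanglement_score(z_true, z_learned):
    """R2 of linearly regressing the true latents on the learned representation."""
    reg = LinearRegression().fit(z_learned, z_true)
    return r2_score(z_true, reg.predict(z_learned))


# Toy check: a representation that is a fixed orthogonal mixing of the latents
# (as with the random orthogonal matrix U in the quoted setup) is perfectly
# linearly decodable, so the score should be close to 1.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 10))                  # true latents
U, _ = np.linalg.qr(rng.normal(size=(10, 10)))   # random orthogonal matrix
score = linear_disentanglement_score(z, z @ U)
print(score)
```

In practice the score would be computed per training run and, as the quoted text states, aggregated as the median over 10 random seeds.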