Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Disentangled Representation Learning in Non-Markovian Causal Systems

Authors: Adam Li, Yushu Pan, Elias Bareinboim

NeurIPS 2024 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The theory is corroborated by experiments. |
| Researcher Affiliation | Academia | Adam Li, Yushu Pan, and Elias Bareinboim; Causal Artificial Intelligence Lab, Columbia University; EMAIL |
| Pseudocode | Yes | Algorithm 1 — CRID: Algorithm for determining causal representation identifiability |
| Open Source Code | Yes | The code we used to run experiments is here: https://github.com/tree1111/CDRL. |
| Open Datasets | No | The paper uses synthetic data generated according to latent causal diagrams and mentions "Color MNIST with bar data generation" but does not provide concrete access information (link, DOI, formal citation) for a publicly available dataset. |
| Dataset Splits | No | The paper mentions generating 200,000 data points but does not specify explicit training, validation, or test split percentages or counts for reproducibility. |
| Hardware Specification | Yes | We use NVIDIA H100 GPUs to train the neural network models. |
| Software Dependencies | No | The paper mentions using "ADAM optimizer [84]" and "Neural Spline Flows [83]" but does not provide specific version numbers for these or other software libraries. |
| Experiment Setup | Yes | We use the ADAM optimizer [84]. We start with a learning rate of 1e-4. We train the model for 200 epochs with a batch size of 4096. |
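The reported experiment setup (Adam optimizer, learning rate 1e-4, 200 epochs, batch size 4096, 200,000 generated data points) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code (which lives at https://github.com/tree1111/CDRL); the `adam_step` function and the toy objective are hypothetical, written only to show the reported hyperparameters in context.

```python
import numpy as np

# Hyperparameters as reported in the paper.
LEARNING_RATE = 1e-4
EPOCHS = 200
BATCH_SIZE = 4096
N_SAMPLES = 200_000  # data points the paper reports generating

def adam_step(theta, grad, m, v, t, lr=LEARNING_RATE,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (Kingma & Ba) on a parameter vector theta."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 for a few Adam steps.
theta = np.array([1.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 101):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)

# With these counts, each epoch covers 48 full batches.
steps_per_epoch = N_SAMPLES // BATCH_SIZE
```

Note that the paper does not report how the 200,000 samples map onto batches; the `steps_per_epoch` line simply shows the arithmetic implied by the stated batch size.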