Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Likelihood-Free Overcomplete ICA and Applications in Causal Discovery
Authors: Chenwei Ding, Mingming Gong, Kun Zhang, Dacheng Tao
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct empirical studies on both synthetic and real data to show the effectiveness of our LFOICA algorithm and its extensions to solve causal discovery problems. |
| Researcher Affiliation | Collaboration | Chenwei Ding (UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, University of Sydney); Mingming Gong (School of Mathematics and Statistics, University of Melbourne); Kun Zhang (Department of Philosophy, Carnegie Mellon University); Dacheng Tao (UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, University of Sydney) |
| Pseudocode | Yes | Algorithm 1 Likelihood-Free Overcomplete ICA (LFOICA) Algorithm (an illustrative sketch of the training loop follows this table) |
| Open Source Code | Yes | Code for LFOICA can be found here |
| Open Datasets | Yes | We apply LFOICA to Sachs's data [41] with 11 variables. Here we use Temperature Ozone Data [43], which corresponds to the 49th, 50th, and 51st causal-effect pairs in the database. |
| Dataset Splits | No | The paper mentions 'cross-validation' for choosing the optimal subsampling factor but does not specify explicit train/validation/test splits with percentages or sample counts for the main experiments. |
| Hardware Specification | No | The paper does not report specific hardware details such as GPU/CPU models or memory capacity used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | While the paper describes the optimization objective, loss function, and general training approach (SGD, minibatches), it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, regularization weights) or other concrete system-level training settings in the main text. |
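
Since the main text reportedly omits concrete hyperparameters, the following is a minimal sketch of a likelihood-free overcomplete ICA training loop in the spirit of Algorithm 1: each independent component is generated by a small per-source MLP applied to Gaussian noise, mixtures are formed by a learnable overcomplete mixing matrix, and the MMD between generated and observed mixtures is minimized by stochastic gradient descent. All names here (`LFOICA`, `gaussian_mmd`), the network sizes, kernel bandwidth, and optimizer settings are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of an LFOICA-style training loop (not the authors' code).
import torch
import torch.nn as nn

def gaussian_mmd(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate between sample sets x and y with an RBF kernel.
    The bandwidth is an illustrative guess; in practice it would be tuned."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()

class LFOICA(nn.Module):
    def __init__(self, n_sources, n_observed, hidden=32):
        super().__init__()
        # One small MLP per source: transforming Gaussian noise independently
        # per component keeps the generated sources mutually independent.
        self.generators = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_sources)
        )
        # Overcomplete mixing matrix: n_observed < n_sources.
        self.A = nn.Parameter(torch.randn(n_observed, n_sources) * 0.1)

    def forward(self, batch_size):
        eps = torch.randn(batch_size, len(self.generators), 1)
        # Generate each non-Gaussian source, then mix linearly.
        s = torch.cat([g(eps[:, i]) for i, g in enumerate(self.generators)], dim=1)
        return s @ self.A.T  # generated mixtures, shape (batch_size, n_observed)

# Usage: fit to observed mixtures X of shape (n_samples, n_observed).
X = torch.randn(1000, 2)  # placeholder data; replace with real observations
model = LFOICA(n_sources=3, n_observed=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed settings
for step in range(2000):
    idx = torch.randint(0, X.shape[0], (128,))
    loss = gaussian_mmd(model(128), X[idx])  # match generated to observed data
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper, the estimated mixing matrix is then used downstream for causal discovery (e.g., on the Sachs data); that extension is not sketched here.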