Likelihood-Free Overcomplete ICA and Applications in Causal Discovery
Authors: Chenwei Ding, Mingming Gong, Kun Zhang, Dacheng Tao
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct empirical studies on both synthetic and real data to show the effectiveness of our LFOICA algorithm and its extensions to solve causal discovery problems. |
| Researcher Affiliation | Collaboration | Chenwei Ding, UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, University of Sydney (cdin2224@uni.sydney.edu.au); Mingming Gong, School of Mathematics and Statistics, University of Melbourne (mingming.gong@unimelb.edu.au); Kun Zhang, Department of Philosophy, Carnegie Mellon University (kunz1@cmu.edu); Dacheng Tao, UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, University of Sydney (dacheng.tao@uni.sydney.edu.au) |
| Pseudocode | Yes | Algorithm 1 Likelihood-Free Overcomplete ICA (LFOICA) Algorithm |
| Open Source Code | Yes | Code for LFOICA can be found here |
| Open Datasets | Yes | We apply LFOICA to Sachs's data [41] with 11 variables. Here we use Temperature Ozone Data [43], which corresponds to the 49th, 50th, and 51st causal-effect pairs in the database. |
| Dataset Splits | No | The paper mentions 'cross-validation' for selecting the optimal subsampling factor but does not specify explicit train/validation/test splits with percentages or sample counts for the main experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or detailed computer specifications used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | While the paper describes the optimization objective, loss function, and general training approach (SGD, minibatches), it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, regularization weights) or other concrete system-level training settings in the main text. |
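The last row notes that the paper describes its optimization objective and minibatch SGD training but omits concrete hyperparameters. For orientation only, here is a minimal NumPy sketch of the kind of likelihood-free objective such a method can use: an MMD discrepancy between observed mixtures and mixtures generated from a candidate mixing matrix. This is not the authors' implementation; the RBF kernel, the Laplace source model, and all variable names are assumptions made for illustration.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y (RBF kernel)."""
    def k(A, B):
        # Pairwise squared distances via broadcasting, then Gaussian kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
n, n_src, n_obs = 500, 3, 2                 # overcomplete: more sources than mixtures
A_true = rng.normal(size=(n_obs, n_src))    # hypothetical ground-truth mixing matrix
s = rng.laplace(size=(n, n_src))            # non-Gaussian independent sources (assumed)
x_obs = s @ A_true.T                        # "observed" mixtures

A_hat = rng.normal(size=(n_obs, n_src))     # candidate mixing matrix to be optimized
x_gen = rng.laplace(size=(n, n_src)) @ A_hat.T  # model-generated mixtures
loss = rbf_mmd2(x_obs, x_gen)               # discrepancy a likelihood-free method minimizes
```

In an actual training loop this loss would be minimized over `A_hat` (and any source-generator parameters) with minibatch SGD, which is the setup the table says the paper describes only at this level of generality.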