Natural Counterfactuals With Necessary Backtracking
Authors: Guang-Yuan Hao, Jiji Zhang, Biwei Huang, Hao Wang, Kun Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the effectiveness of our method through empirical experiments on four synthetic datasets and two real-world datasets. |
| Researcher Affiliation | Academia | 1Chinese University of Hong Kong, 2University of California San Diego, 3Rutgers University, 4Carnegie Mellon University, 5Mohamed bin Zayed University of Artificial Intelligence |
| Pseudocode | No | The paper describes the proposed method but does not include a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The code is available at https://github.com/GuangyuanHao/natural_counterfactuals. |
| Open Datasets | Yes | We design four simulation datasets, Toy 1-4, and use the designed SCMs to generate 10,000 data points as a training dataset and another 10,000 data points as a test set for each dataset. (This generation protocol is sketched in code below the table.) |
| Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention or detail a validation set or its split for the experiments. |
| Hardware Specification | Yes | All the experiments above were run on NVIDIA RTX 4090 GPUs. |
| Software Dependencies | No | The paper mentions software components such as 'normalizing flows', the 'AdamW optimizer', 'V-SCM', and 'H-SCM', but does not provide specific version numbers for these or other key software dependencies. |
| Experiment Setup | Yes | Our training regimen for the flow-based model spanned 2000 epochs, utilizing a batch size of 100 in conjunction with the AdamW optimizer. We initialized the learning rate to 10⁻³ and set β1 to 0.9 and β2 to 0.9. (See the configuration sketch below the table.) |
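For the Open Datasets row, the sketch below illustrates the generation protocol the paper describes: sampling 10,000 training points and 10,000 test points by ancestral sampling from a designed SCM. The two-variable SCM and the `sample_scm` helper here are hypothetical stand-ins; the actual structural equations for Toy 1-4 are defined in the paper, not reproduced here.

```python
# Hypothetical SCM illustrating the train/test generation protocol only;
# the real Toy 1-4 structural equations come from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n: int) -> np.ndarray:
    """Ancestral sampling: draw exogenous noise, then apply structural equations."""
    u1 = rng.normal(size=n)          # exogenous noise for n1
    u2 = rng.normal(size=n)          # exogenous noise for n2
    n1 = u1                          # structural equation for n1 (illustrative)
    n2 = np.sin(n1) + 0.5 * u2       # structural equation for n2 (illustrative)
    return np.stack([n1, n2], axis=1)

train_set = sample_scm(10_000)       # 10,000 training points, as reported
test_set = sample_scm(10_000)        # 10,000 test points, as reported
```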
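For the Experiment Setup row, here is a minimal PyTorch sketch of the reported training configuration: 2000 epochs, batch size 100, and AdamW with learning rate 10⁻³, β1 = 0.9, β2 = 0.9. The `ToyFlow` model and its loss are illustrative placeholders, not the authors' normalizing-flow architecture (V-SCM/H-SCM).

```python
# Sketch of the reported training configuration; only the optimizer
# hyperparameters, epoch count, and batch size come from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ToyFlow(nn.Module):
    """Placeholder density model; the paper's flow architecture is not shown here."""
    def __init__(self, dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        # Stand-in objective; a real flow would return a negative log-likelihood.
        return ((self.net(x) - x) ** 2).mean()

# Random stand-in for the 10,000-point synthetic training set.
train_x = torch.randn(10_000, 2)
loader = DataLoader(TensorDataset(train_x), batch_size=100, shuffle=True)

model = ToyFlow()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.9))

for epoch in range(2000):
    for (batch,) in loader:
        optimizer.zero_grad()
        loss = model.loss(batch)
        loss.backward()
        optimizer.step()
```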