Consistency of Neural Causal Partial Identification

Authors: Jiyuan Tan, Jose Blanchet, Vasilis Syrgkanis

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we examine the performance of our algorithm in two settings. We compare our algorithm with the Autobounds algorithm [14] in a binary IV example from [14] and in a continuous IV model. The experiments are repeated 10 times for binary IV and 50 times for continuous IV. Table 1: Experiment results of the two IV settings.
Researcher Affiliation | Academia | Jiyuan Tan, Management Science and Engineering, Stanford University, Stanford, CA 94305, jiyuantan@stanford.edu; Jose Blanchet, Management Science and Engineering, Stanford University, Stanford, CA 94305, jose.blanchet@stanford.edu; Vasilis Syrgkanis, Management Science and Engineering, Stanford University, Stanford, CA 94305, vsyrgk@stanford.edu
Pseudocode | No | The paper describes methods and architectures but does not contain any structured pseudocode or algorithm blocks (e.g., labeled 'Algorithm' or 'Pseudocode').
Open Source Code | Yes | The code can be found at https://github.com/Jiyuan-Tan/Neural_Partial_ID
Open Datasets | No | The paper describes generating data from structural equations for its experiments (e.g., 'We consider the noncompliance binary IV example in [14, Section D.1].' and defines the 'structure equations of M_λ' for the continuous IV setting), but it does not provide a direct link, DOI, or repository name for a publicly available or open dataset.
Dataset Splits | No | The paper reports overall sample sizes (e.g., 'The sample size is taken to be 5000 in each experiment') and the number of repetitions, but it does not provide training, validation, or test dataset splits (e.g., percentages, absolute counts, or a citation to predefined splits).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper mentions the 'geomloss' package ('The "geomloss" package [16] is used to calculate the Sinkhorn distance') but does not pin its version or list any other software dependencies with version numbers. (A usage sketch of geomloss follows the table.)
Experiment Setup | Yes | We use three-layer feed-forward neural networks with width 128 for each f̂_i and six-layer neural networks with width 128 for each ĝ_j. We use the Augmented Lagrangian Multiplier (ALM) method to solve the optimization problems as in [3]. We run 600 epochs and use a batch size of 2048 in each epoch. m_n is set to m_n = n. The 'geomloss' package [16] is used to calculate the Sinkhorn distance. To impose Lipschitz regularization, we use the technique from [23] to apply layer-wise normalization to the weight matrices with respect to the infinity norm. The upper bound of the Lipschitz constant in each layer is set to 8. The τ in the Gumbel-softmax layer is set to 0. (Illustrative sketches of this setup follow the table.)
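The Software Dependencies row names geomloss without a version. As a point of reference, here is a minimal sketch of computing a Sinkhorn distance with geomloss's SamplesLoss; the sample dimensions and the blur parameter are illustrative assumptions, not values reported in the paper.

```python
# Minimal Sinkhorn-distance sketch with geomloss (version unpinned, as in
# the paper). Shapes and `blur` are illustrative assumptions.
import torch
from geomloss import SamplesLoss

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)

x = torch.randn(5000, 2)  # observed samples (n = 5000 matches the reported sample size)
y = torch.randn(5000, 2)  # samples drawn from the fitted neural SCM
dist = sinkhorn(x, y)     # differentiable, so it can serve as a training loss
```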
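The Experiment Setup row fixes the architectures and a per-layer Lipschitz bound of 8 under layer-wise infinity-norm normalization. Below is a minimal PyTorch sketch of that configuration, assuming the technique in [23] amounts to rescaling any weight matrix whose infinity norm (maximum absolute row sum) exceeds the bound, and reading "three-layer" and "six-layer" as hidden-layer counts; the input/output dimensions are placeholders.

```python
# PyTorch sketch of the reported setup: width-128 MLPs (3 hidden layers for
# each f̂_i, 6 for each ĝ_j) with layer-wise infinity-norm normalization and
# per-layer Lipschitz bound 8. The rescaling rule is an assumed reading of
# the technique in [23]; input/output dimensions are placeholders.
import torch
import torch.nn as nn

WIDTH = 128
LIP_BOUND = 8.0  # per-layer Lipschitz upper bound reported in the paper

def make_mlp(in_dim: int, out_dim: int, n_hidden: int) -> nn.Sequential:
    dims = [in_dim] + [WIDTH] * n_hidden + [out_dim]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

@torch.no_grad()
def normalize_inf_norm(model: nn.Module, bound: float = LIP_BOUND) -> None:
    # ||W||_inf is the maximum absolute row sum; rescale any layer above the bound.
    for m in model.modules():
        if isinstance(m, nn.Linear):
            inf_norm = m.weight.abs().sum(dim=1).max()
            if inf_norm > bound:
                m.weight.mul_(bound / inf_norm)

f_hat = make_mlp(in_dim=2, out_dim=1, n_hidden=3)  # placeholder dimensions
g_hat = make_mlp(in_dim=2, out_dim=1, n_hidden=6)
for net in (f_hat, g_hat):
    normalize_inf_norm(net)  # e.g., reapplied after each optimizer step
```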
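The setup also invokes the Augmented Lagrangian Multiplier (ALM) method from [3] without further detail. For orientation, a textbook ALM loop on a toy equality-constrained problem might look as follows; the objective, constraint, penalty ρ, and loop lengths are assumptions for illustration, not the authors' implementation.

```python
# Textbook ALM sketch: minimize f(θ) subject to h(θ) = 0.
# Toy problem: f(θ) = θ², h(θ) = θ − 1 (solution θ = 1). The problem, the
# penalty ρ, and the loop lengths are illustrative, not from the paper.
import torch

f = lambda t: (t ** 2).sum()
h = lambda t: (t - 1.0).sum()

theta = torch.zeros(1, requires_grad=True)
lam, rho = 0.0, 10.0
opt = torch.optim.Adam([theta], lr=0.05)

for outer in range(20):           # dual (multiplier) updates
    for inner in range(100):      # primal minimization of the augmented Lagrangian
        opt.zero_grad()
        c = h(theta)
        loss = f(theta) + lam * c + 0.5 * rho * c ** 2
        loss.backward()
        opt.step()
    lam += rho * float(h(theta))  # multiplier update: λ ← λ + ρ·h(θ)
```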