IDYNO: Learning Nonparametric DAGs from Interventional Dynamic Data

Authors: Tian Gao, Debarun Bhattacharjya, Elliot Nelson, Miao Liu, Yue Yu

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the promising performance of our method on synthetic benchmark datasets against state-of-the-art baselines. In addition, we show that the proposed method can more accurately learn the underlying structure of a sequential decision model, such as a Markov decision process, with a fixed policy in typical continuous control tasks." (Section 4, Empirical Evaluation)
Researcher Affiliation | Collaboration | IBM Research, Yorktown Heights, NY, USA; Department of Mathematics, Lehigh University, Bethlehem, PA, USA.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither links to a source code repository nor states unambiguously that the code for the method is being released.
Open Datasets | Yes | "We use the continuous version of the Lunar Lander environment in OpenAI Gym (Brockman et al., 2016), as DYNOTEARS handles only continuous variables." (A data-collection sketch for this environment follows the table.)
Dataset Splits | Yes | "For the hyperparameter values in DYNOTEARS and our proposed method, we use a separate validation dataset to choose the best performing hyperparameters for each method per SHD." (SHD is the structural Hamming distance; a sketch of the metric follows the table.)
Hardware Specification | Yes | "All experiments are done in Python on a machine with 3.7GHz CPU and 16GB memory."
Software Dependencies | No | The paper mentions using Python for the experiments and refers to Stable Baselines3 and Tetrad for implementations, but it does not give version numbers for any of these dependencies.
Experiment Setup | Yes | "For the hyperparameter values in DYNOTEARS and our proposed method, we use a separate validation dataset to choose the best performing hyperparameters for each method per SHD. We search for the best value of each of 4 parameters sequentially, including λ_a, λ_w, the threshold to obtain final W(1) and A(1), and the hidden neuron size. For λ_a and λ_w, we search over a value range of {10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2}. Graph threshold search range is set to be {0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.3}, and neuron size range is searched over {5, 10, 30, 50}." (A sketch of this sequential search follows the table.)
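
For context on the "Open Datasets" row, here is a minimal sketch of collecting trajectory data from the continuous Lunar Lander environment under a fixed policy. The environment id is standard in Gym; the rollout loop, the random policy, and the array layout are illustrative assumptions, not the authors' code, and the snippet assumes the classic Gym API (reset returning an observation, step returning four values).

```python
import gym
import numpy as np

# Continuous-action variant of Lunar Lander, as used in the paper
# (DYNOTEARS handles only continuous variables).
env = gym.make("LunarLanderContinuous-v2")

def collect_trajectory(env, policy, horizon=200):
    """Roll out a fixed policy and stack (state, action) per time step."""
    obs = env.reset()
    rows = []
    for _ in range(horizon):
        action = policy(obs)
        rows.append(np.concatenate([obs, action]))
        obs, _, done, _ = env.step(action)
        if done:
            break
    return np.stack(rows)  # shape: (T, n_state + n_action)

# Hypothetical fixed policy: uniform random actions in the action space.
random_policy = lambda obs: env.action_space.sample()
data = collect_trajectory(env, random_policy)
```

The "Dataset Splits" and "Experiment Setup" rows select hyperparameters per SHD, the structural Hamming distance between the estimated and true graphs. Below is a minimal sketch of this standard metric, assuming binary adjacency matrices of DAGs (so neither graph contains a 2-cycle); the function name and signature are illustrative.

```python
import numpy as np

def shd(est: np.ndarray, true: np.ndarray) -> int:
    """SHD: edge additions + deletions + reversals needed to turn
    the estimated adjacency matrix into the true one."""
    est = (est != 0).astype(int)
    true = (true != 0).astype(int)
    diff = (est != true).astype(int)
    # For DAGs, positions where both (i,j) and (j,i) mismatch mark a
    # single reversed edge; count it once instead of twice.
    reversals = int((diff * diff.T).sum()) // 2
    return int(diff.sum()) - reversals
```

Finally, a sketch of the sequential four-parameter search quoted in the "Experiment Setup" row. Here `fit_model` and `validation_shd` are hypothetical stand-ins for training the structure learner and scoring its estimated graph on the validation set; the grids match the quoted ranges, while the starting defaults and the tuning order are assumptions the paper does not specify.

```python
lambda_grid = [10 ** k for k in range(-5, 3)]  # {10^-5, ..., 10^2}
threshold_grid = [0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.2, 0.3]
hidden_grid = [5, 10, 30, 50]

def sequential_search(fit_model, validation_shd):
    # Arbitrary starting defaults (an assumption); tune one parameter
    # at a time, freezing each best value before moving to the next.
    params = {"lambda_a": 0.1, "lambda_w": 0.1,
              "threshold": 0.1, "hidden_size": 10}
    grids = {"lambda_a": lambda_grid, "lambda_w": lambda_grid,
             "threshold": threshold_grid, "hidden_size": hidden_grid}
    for name, grid in grids.items():
        best_value, best_score = params[name], float("inf")
        for value in grid:
            candidate = {**params, name: value}
            model = fit_model(**candidate)
            score = validation_shd(model)  # lower SHD is better
            if score < best_score:
                best_value, best_score = value, score
        params[name] = best_value
    return params
```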
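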
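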