Truncated Matrix Power Iteration for Differentiable DAG Learning

Authors: Zhen Zhang, Ignavier Ng, Dong Gong, Yuhang Liu, Ehsan Abbasnejad, Mingming Gong, Kun Zhang, Javen Qinfeng Shi

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, our DAG learning method outperforms the previous state-of-the-arts in various settings, often by a factor of 3 or more in terms of structural Hamming distance. We compare the performance of different DAG learning algorithms and DAG constraints to demonstrate the effectiveness of the proposed TMPI based DAG constraint. We provide the main experimental results in this section, and further empirical studies can be found in Appendix C.
Researcher Affiliation | Academia | 1 The University of Adelaide, 2 Carnegie Mellon University, 3 The University of New South Wales, 4 The University of Melbourne, 5 Mohamed bin Zayed University of Artificial Intelligence
Pseudocode | Yes | Algorithm 1: Truncated Matrix Power Iteration (TMPI); Algorithm 2: Fast TMPI (a hedged sketch of both appears after the table)
Open Source Code | Yes | The source code is available here.
Open Datasets | Yes | We first conduct experiments on synthetic datasets using similar settings as previous works [23, 46, 50]. In summary, random Erdös-Rényi graphs [14] are generated with d nodes and kd expected edges (denoted by ERk). We also run an experiment on the Protein Signaling Networks dataset [31] consisting of n = 7466 samples with d = 11 nodes. (A toy generator for the ERk setting is sketched after the table.)
Dataset Splits | No | The paper describes generating 'n i.i.d. samples' and running '100 random simulations' but does not specify explicit train/validation/test dataset splits with percentages or sample counts for the generated data.
Hardware Specification | No | The paper does not specify any particular hardware details such as GPU models, CPU types, or memory used for running the experiments. It refers to the appendices for compute details, which are not provided in the given text.
Software Dependencies | No | The paper mentions software like the 'Causal Discovery Toolbox package [16]' and 'L-BFGS [7]', but it does not provide specific version numbers for these or other software dependencies such as programming languages or libraries.
Experiment Setup | Yes | For MSE loss based DAG learning methods (9), we set λ = 0.1 and use the same strategy as Zheng et al. [50] to update the parameters α and ρ. For likelihood loss based DAG learning methods (10), we set λ1 = 0.02 and λ2 = 5.0 as Ng et al. [23]. For the Binomial DAG constraints (4), the parameter is set to 1/d as Yu et al. [45]. For our TMPI DAG constraint, we set the parameter to 10^-6. (These settings are collected into an illustrative configuration sketch after the table.)
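
The Pseudocode row above cites Algorithm 1 (TMPI) and Algorithm 2 (Fast TMPI). Below is a minimal NumPy sketch of the idea those algorithms implement: the DAG penalty is a truncated sum of traces of matrix powers of a nonnegative surrogate adjacency matrix, and the fast variant grows the truncation order geometrically via sum_{i=1}^{2k} A^i = S_k + A^k S_k with S_k = sum_{i=1}^{k} A^i. The function names, the eps-based stopping rule, and the use of the elementwise square W*W as the surrogate are illustrative assumptions, not the authors' exact algorithms.

```python
import numpy as np

def tmpi_constraint(W, eps=1e-6):
    """Hedged sketch of a truncated matrix-power DAG penalty.

    h(W) ~= sum_{i=1..k} tr(A^i) with A = W*W (elementwise), truncating
    once the trace increment drops below `eps` or after d terms.
    The penalty is zero exactly when the weighted graph has no cycles.
    """
    d = W.shape[0]
    A = W * W            # nonnegative surrogate adjacency matrix
    P = A.copy()         # P holds A^i
    h = np.trace(P)
    for _ in range(d - 1):
        P = P @ A
        inc = np.trace(P)
        h += inc
        if inc < eps:    # higher-order terms are negligible: truncate
            break
    return h

def fast_tmpi_constraint(W, eps=1e-6):
    """Doubling variant: extends the geometric series from order k to 2k
    with one extra matrix product, so only O(log d) matmuls are needed."""
    d = W.shape[0]
    A = W * W
    S = A.copy()         # S = sum_{i=1..k} A^i, starting at k = 1
    P = A.copy()         # P = A^k
    k = 1
    while k < d and np.abs(P).max() >= eps:
        S = S + P @ S    # sum_{i=1..2k} A^i = S_k + A^k S_k
        P = P @ P        # A^(2k)
        k *= 2
    return np.trace(S)
```

In a NOTEARS-style pipeline, the scalar returned by either function would take the place of the matrix-exponential penalty tr(exp(W*W)) - d inside the augmented Lagrangian.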
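
The Open Datasets row describes ERk graphs with d nodes and about k*d expected edges, followed by a linear SEM. The toy generator below follows that recipe under stated assumptions: the edge-weight range, the standard-Gaussian noise, and the function name are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def simulate_erk_linear_sem(d=20, k=2, n=1000, seed=0):
    """Hypothetical ERk simulator: an Erdos-Renyi DAG with ~k*d edges,
    then n i.i.d. samples from a linear SEM with Gaussian noise."""
    rng = np.random.default_rng(seed)
    # Sample edges above the diagonal so the graph is acyclic, then
    # permute the node labels so the topological order is not trivial.
    p = min(1.0, 2.0 * k / (d - 1))            # gives ~k*d expected edges
    upper = np.triu(rng.random((d, d)) < p, 1)
    perm = rng.permutation(d)
    B = upper[np.ix_(perm, perm)]              # binary adjacency, i -> j
    # Edge weights bounded away from zero (the range is an assumption).
    W = B * rng.uniform(0.5, 2.0, (d, d)) * rng.choice([-1.0, 1.0], (d, d))
    # Linear SEM with rows as samples: X = X W + E  =>  X = E (I - W)^-1.
    E = rng.standard_normal((n, d))
    X = E @ np.linalg.inv(np.eye(d) - W)
    return X, W
```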
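
For completeness, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration. The key and parameter names below are illustrative assumptions and are not taken from the authors' code.

```python
def experiment_config(d):
    """Illustrative summary of the quoted hyperparameters for a graph with d nodes."""
    return {
        "mse_loss": {"lambda": 0.1},                  # MSE objective, Eq. (9)
        "likelihood_loss": {"lambda1": 0.02,          # likelihood objective, Eq. (10)
                            "lambda2": 5.0},
        "binomial_constraint": {"coef": 1.0 / d},     # parameter set to 1/d, Eq. (4)
        "tmpi_constraint": {"eps": 1e-6},             # truncation tolerance
    }
```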