Structural Inference with Dynamics Encoding and Partial Correlation Coefficients

Authors: Aoran Wang, Jun Pang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our method through extensive experimentation on twenty datasets from a benchmark dataset and biological networks. Our results showcase the superior scalability, accuracy, and versatility of our proposed approach compared to existing methods. Moreover, experiments conducted on noisy data affirm the robustness of our method.
Researcher Affiliation | Academia | Aoran Wang¹ & Jun Pang¹,²; ¹ Faculty of Science, Technology and Medicine, University of Luxembourg; ² Institute for Advanced Studies, University of Luxembourg
Pseudocode | Yes | Appendix A.1 presents the pseudocode outlining the PCOR method used in this work, while Appendix A.6 offers the pseudocode for the complete SIDEC pipeline. (A hedged sketch of partial correlation computation follows the table.)
Open Source Code | Yes | Additionally, the code can be found at https://github.com/wang422003/SIDEC_torch.
Open Datasets | Yes | The benchmark we employed for evaluation, StructInfer (Anonymous, 2023), provided a comprehensive set of datasets specifically tailored for structural inference tasks. ... In addition to the StructInfer datasets, similar to previous works (Wang & Pang, 2022; Wang et al., 2023a), we conducted evaluations on six directed synthetic biological networks (Pratapa et al., 2020) ... We have conducted additional experiments using three widely recognized public traffic network datasets: PEMS03, PEMS04, and PEMS07 (Song et al., 2020).
Dataset Splits | Yes | Our data split followed the predefined training, validation, and test sets within the StructInfer benchmark. ... Trajectories were randomly divided into training, validation, and test sets in a ratio of 8:2:2. (An illustrative split sketch follows the table.)
Hardware Specification | Yes | All experiments were conducted on a single NVIDIA Tesla V100 SXM2 32G graphics card, paired with two Xeon Gold 6132 @ 2.6GHz CPUs.
Software Dependencies | No | The paper mentions using "PyTorch-VAE (Subramanian, 2020)", "SciPy (Virtanen et al., 2020)", and "sklearn (Pedregosa et al., 2011)" by citing their respective publications, but it does not specify explicit version numbers for these libraries or for other key dependencies such as Python or CUDA. (A version-logging snippet follows the table.)
Experiment Setup | Yes | The deep learning methods were trained for a maximum of 600 epochs, with batch sizes, learning rates, and hyperparameters configured according to their respective original implementations. ... The batch size is set as 256... The learning rate is set as 0.0005. The maximum epochs for training is 500. (A training-loop sketch with these values follows the table.)
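
The Pseudocode row above points to Appendix A.1 for the paper's PCOR procedure, which is not reproduced here. As a point of reference only, below is a minimal sketch of partial correlation coefficients computed via the standard precision-matrix identity; the paper's exact PCOR steps may differ.

```python
import numpy as np

def partial_correlation(X: np.ndarray) -> np.ndarray:
    """Partial correlation matrix of the columns of X (shape: samples x features).

    Standard identity: pcor[i, j] = -P[i, j] / sqrt(P[i, i] * P[j, j]),
    where P is the precision (inverse covariance) matrix.
    """
    cov = np.cov(X, rowvar=False)
    prec = np.linalg.pinv(cov)          # pseudo-inverse for numerical stability
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor
```

For structural inference, thresholding the absolute partial correlations would yield a candidate adjacency matrix; whether SIDEC's PCOR component adds further steps is something to verify against Appendix A.1.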
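
For the Dataset Splits row, the quoted 8:2:2 ratio corresponds to fractions of 8/12, 2/12, and 2/12 of the trajectories. The sketch below is illustrative, not the authors' code; where the StructInfer benchmark provides predefined splits, those take precedence.

```python
import numpy as np

def split_trajectories(trajectories, seed: int = 0):
    """Randomly split trajectories in an 8:2:2 ratio
    (i.e., 8/12 train, 2/12 validation, 2/12 test)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(trajectories))
    n_train = int(len(trajectories) * 8 / 12)
    n_val = int(len(trajectories) * 2 / 12)
    train = [trajectories[i] for i in idx[:n_train]]
    val = [trajectories[i] for i in idx[n_train:n_train + n_val]]
    test = [trajectories[i] for i in idx[n_train + n_val:]]
    return train, val, test
```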
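
Since the Software Dependencies row flags missing version numbers, anyone reproducing the work may want to record the versions actually installed in their environment. A small snippet for doing so; the library names come from the row above, and nothing beyond their standard `__version__` attributes is assumed.

```python
import importlib

# Libraries cited in the paper; versions were not pinned there.
for name in ("torch", "scipy", "sklearn", "numpy"):
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(name, "not installed")
```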
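
The Experiment Setup row pins batch size 256, learning rate 0.0005, and a maximum of 500 epochs for the proposed method. A minimal PyTorch training-loop skeleton wiring in those quoted values; the model, dataset, and loss here are placeholders, and the actual SIDEC pipeline lives in the linked repository.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters quoted in the paper's experiment setup.
BATCH_SIZE = 256
LEARNING_RATE = 5e-4
MAX_EPOCHS = 500

# Placeholder model and synthetic data; the real architecture is SIDEC's.
model = torch.nn.Linear(10, 10)
data = TensorDataset(torch.randn(1024, 10), torch.randn(1024, 10))
loader = DataLoader(data, batch_size=BATCH_SIZE, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = torch.nn.MSELoss()

for epoch in range(MAX_EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```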