Constraint-Free Structure Learning with Smooth Acyclic Orientations

Authors: Riccardo Massidda, Francesco Landolfi, Martina Cinquini, Davide Bacciu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In addition to being asymptotically faster, our empirical analysis highlights how COSMO performance on graph reconstruction compares favorably with competing structure learning methods."
Researcher Affiliation | Academia | "Riccardo Massidda, Francesco Landolfi, Martina Cinquini, Davide Bacciu; Department of Computer Science, Università di Pisa, Italy; {riccardo.massidda,francesco.landolfi,martina.cinquini}@phd.unipi.it; davide.bacciu@unipi.it"
Pseudocode | No | The paper describes methods textually and with equations, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "For reproducibility purposes, we release the necessary code to replicate our experiments. [...]" https://github.com/rmassidda/cosmo
Open Datasets | Yes | "We base our empirical analysis on the testbed originally introduced by Zheng et al. (2018) and then adopted as a benchmark by all follow-up methods. [...] We include in our code the exact data generation process from the original implementation of NOTEARS." (See the data-generation sketch after this table.)
Dataset Splits | No | "Then, we test each configuration on five randomly sampled DAGs. We select the best hyperparameters according to the average AUC value. Finally, we perform a validation step by running the best configuration on five new random graphs."
Hardware Specification | Yes | "We run all the experiments on our internal cluster of Intel(R) Xeon(R) Gold 5120 processors, totaling 56 CPUs per machine."
Software Dependencies | No | The paper mentions using PyTorch for automatic differentiation but does not provide a specific version number. No other software components or libraries are listed with version details.
Experiment Setup | Yes | "For COSMO, we interrupt the optimization after 2000 epochs. For the non-linear version of DAGMA, we increased the maximum epochs to 7000. [...] In particular, we sampled the learning rate from the range (1e-4, 1e-2) and the regularization coefficients from the interval (1e-4, 1e-1). [...] For COSMO, we sample hyperparameters from the ranges in Table 4."
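
The Open Datasets row refers to the synthetic testbed of Zheng et al. (2018), whose exact generation script the authors say is bundled with the released code. The sketch below is only a rough illustration of how that kind of testbed is typically built (a random DAG with signed weights and a linear SEM with Gaussian noise); every function name and default value here is an assumption for illustration, not code taken from the paper or the COSMO/NOTEARS repositories.

```python
# Minimal sketch of a NOTEARS-style synthetic testbed: sample a random DAG,
# assign signed edge weights, and draw data from a linear-Gaussian SEM.
# All names and defaults are illustrative assumptions.
import numpy as np

def simulate_dag(d, num_edges, rng):
    """Sample a random DAG over d nodes as a weighted adjacency matrix W,
    where W[i, j] != 0 denotes an edge i -> j."""
    # Restrict the support to the strict upper triangle so that the node
    # order 0..d-1 is already a valid topological order.
    scores = np.triu(rng.random((d, d)), k=1)
    threshold = np.sort(scores[scores > 0])[-num_edges]
    support = (scores >= threshold).astype(float)
    # Edge weights drawn uniformly from +/-[0.5, 2.0] (an assumed convention).
    weights = rng.uniform(0.5, 2.0, size=(d, d)) * rng.choice([-1.0, 1.0], size=(d, d))
    return support * weights

def simulate_linear_sem(W, n, rng, noise_scale=1.0):
    """Draw n samples from the linear SEM X_j = sum_i W[i, j] * X_i + eps_j."""
    d = W.shape[0]
    X = np.zeros((n, d))
    for j in range(d):  # columns are filled in topological order
        X[:, j] = X @ W[:, j] + rng.normal(scale=noise_scale, size=n)
    return X

rng = np.random.default_rng(0)
W_true = simulate_dag(d=20, num_edges=40, rng=rng)
X = simulate_linear_sem(W_true, n=1000, rng=rng)
```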
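The Dataset Splits and Experiment Setup rows together describe the hyperparameter search: configurations are sampled from the quoted ranges, each configuration is scored by its average AUC over five randomly sampled DAGs, and the best one is re-validated on five new graphs. The sketch below assumes log-uniform sampling (the quoted text does not state the sampling distribution), assumes a trial budget of 20, and uses a hypothetical `evaluate_auc` stand-in instead of actually training COSMO.

```python
# Illustrative sketch of the hyperparameter search described above.
# Sampling distribution, trial count, and evaluate_auc are assumptions.
import numpy as np

def log_uniform(low, high, rng):
    """Sample log-uniformly from [low, high] (assumed distribution)."""
    return float(np.exp(rng.uniform(np.log(low), np.log(high))))

def evaluate_auc(config, seed):
    """Hypothetical placeholder for training COSMO with `config` on a freshly
    sampled DAG (seeded by `seed`) and computing the reconstruction AUC.
    Returns a random stand-in value so the sketch runs end to end."""
    return float(np.random.default_rng(seed).random())

rng = np.random.default_rng(0)
n_trials, n_graphs = 20, 5  # five random DAGs per configuration, per the quote

best_config, best_auc = None, -np.inf
for _ in range(n_trials):
    config = {
        "learning_rate": log_uniform(1e-4, 1e-2, rng),   # range quoted in the paper
        "regularization": log_uniform(1e-4, 1e-1, rng),  # range quoted in the paper
    }
    mean_auc = float(np.mean([evaluate_auc(config, s) for s in range(n_graphs)]))
    if mean_auc > best_auc:
        best_config, best_auc = config, mean_auc

# Validation step: rerun the selected configuration on five new random graphs.
validation_aucs = [evaluate_auc(best_config, s) for s in range(n_graphs, 2 * n_graphs)]
```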