DAG Learning on the Permutahedron

Authors: Valentina Zantedeschi, Luca Franceschi, Jean Kaddour, Matt Kusner, Vlad Niculae

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate, on real-world data problems in protein-signaling and transcriptional network discovery, that our approach lies on the Pareto frontier of two key metrics, the SID (structural intervention distance) and SHD (structural Hamming distance). A toy SHD computation is sketched after the table.
Researcher Affiliation | Collaboration | Valentina Zantedeschi (ServiceNow Research, vzantedeschi@gmail.com); Luca Franceschi (Amazon Web Services, franuluc@amazon.de); Jean Kaddour (Centre for AI, University College London, jean.kaddour.20@ucl.ac.uk); Matt J. Kusner (Centre for AI, University College London, m.kusner@ucl.ac.uk); Vlad Niculae (Informatics Institute, University of Amsterdam, v.niculae@uva.nl)
Pseudocode | Yes | Algorithm 2: Top-k permutations. A brute-force illustration of top-k permutation selection appears after the table.
Open Source Code | Yes | The code for running the experiments is available at https://github.com/vzantedeschi/DAGuerreotype.
Open Datasets | Yes | Sachs (Sachs et al., 2005), a dataset of cytometric measurements of phosphorylated protein and phospholipid components in human immune system cells.
Dataset Splits | No | The paper discusses training and evaluation on datasets (Sachs, SynTReN) and synthetic data, but does not provide explicit train/validation/test splits (e.g., percentages, sample counts, or a citation to predefined splits).
Hardware Specification | Yes | All experiments were run on a machine with 16 cores, 32 GB of RAM, and an NVIDIA A100-SXM4-80GB GPU.
Software Dependencies | No | The paper states that the method is implemented in PyTorch (Paszke et al., 2019) and that tuning was carried out by Bayesian optimization using Optuna (Akiba et al., 2019). While the software is named and cited, specific version numbers for these dependencies are not provided.
Experiment Setup | Yes | We set the sparse operator's temperature τ = 1 and K = 100, the strength of the ℓ2 regularization to 0.0005, and tuned the learning rates for the outer and inner optimization in [10^-4, 10^-1] and the pruning strength λ in [10^-6, 10^-1]. An Optuna outline of this search appears after the table.
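
On the SHD metric mentioned in the Research Type row: SHD counts the edge additions, deletions, and reversals needed to turn the estimated graph into the true one. Below is a minimal sketch, assuming binary adjacency matrices and NumPy; the paper's exact evaluation code may differ.

```python
import numpy as np

def shd(B_true: np.ndarray, B_est: np.ndarray) -> int:
    """Structural Hamming Distance between two binary adjacency matrices.

    Counts the edge additions, deletions, and reversals needed to turn
    B_est into B_true; a reversed edge counts as a single change.
    """
    diff = np.abs(B_true - B_est)  # entry-wise mismatches
    # A reversal shows up as two mismatches, at (i, j) and (j, i);
    # detect each such pair and count it only once.
    reversed_edges = (B_true == 1) & (B_est == 0) & (B_true.T == 0) & (B_est.T == 1)
    return int(diff.sum() - reversed_edges.sum())

# Example: true graph 0 -> 1 -> 2; the estimate reverses 0 -> 1 and adds 0 -> 2.
B_true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
B_est  = np.array([[0, 0, 1], [1, 0, 1], [0, 0, 0]])
print(shd(B_true, B_est))  # 2: one reversal plus one spurious edge
```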
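On the Pseudocode row: the paper's Algorithm 2 retrieves the k highest-scoring permutations under a linear score over the permutahedron. The sketch below illustrates the same selection by brute force for small d only; the scoring convention (theta against permuted ranks) is an assumption for illustration, and the paper's algorithm computes the top-k far more efficiently.

```python
import itertools
import numpy as np

def topk_permutations_bruteforce(theta: np.ndarray, k: int):
    """Return the k permutations maximizing the linear score <theta, ranks[sigma]>.

    Enumerates all d! permutations, so only viable for small d; intended
    purely to illustrate what a top-k permutation routine returns.
    """
    d = len(theta)
    ranks = np.arange(d, dtype=float)  # vertex coordinates of the permutahedron
    scored = [
        (float(theta @ ranks[list(sigma)]), sigma)
        for sigma in itertools.permutations(range(d))
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]

theta = np.array([0.1, 2.0, -0.5])
for score, sigma in topk_permutations_bruteforce(theta, k=3):
    print(sigma, score)  # best permutation assigns the highest rank to theta's max
```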
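On the last two rows: the tuning protocol can be reproduced in outline with Optuna's standard API. In the sketch below, the search ranges come from the Experiment Setup row, the log scale is an assumption, and `train_and_evaluate` is a hypothetical stand-in for the paper's training loop.

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    # Search ranges taken from the Experiment Setup row; log scale is an assumption.
    outer_lr = trial.suggest_float("outer_lr", 1e-4, 1e-1, log=True)
    inner_lr = trial.suggest_float("inner_lr", 1e-4, 1e-1, log=True)
    prune_lam = trial.suggest_float("pruning_strength", 1e-6, 1e-1, log=True)
    # `train_and_evaluate` is hypothetical: it would run the method with
    # tau = 1, K = 100, and l2 strength 0.0005 held fixed, returning a loss.
    return train_and_evaluate(outer_lr=outer_lr, inner_lr=inner_lr, pruning=prune_lam)

# Optuna's default TPE sampler performs the Bayesian-style search the paper cites.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```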