D-VAE: A Variational Autoencoder for Directed Acyclic Graphs

Authors: Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, Yixin Chen

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of our proposed D-VAE through two tasks: neural architecture search and Bayesian network structure learning. Experiments show that our model not only generates novel and valid DAGs, but also produces a smooth latent space that facilitates searching for DAGs with better performance through Bayesian optimization."
Researcher Affiliation | Academia | "Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, Yixin Chen. Department of Computer Science and Engineering, Washington University in St. Louis. {muhan, jiang.s, z.cui, garnett}@wustl.edu, chen@cse.wustl.edu"
Pseudocode | No | The paper describes the encoding and decoding procedures using text and illustrative figures, but does not include formal pseudocode blocks or algorithms labeled as such.
Open Source Code | Yes | "All the code and data are available at https://github.com/muhanzhang/D-VAE."
Open Datasets | Yes | "Our neural network dataset contains 19,020 neural architectures from the ENAS software [33]. ... on CIFAR-10 [60]. ... Our Bayesian network dataset contains 200,000 random 8-node Bayesian networks from the bnlearn package [61] in R."
Dataset Splits | Yes | "We split the dataset into 90% training and 10% held-out test sets. ... We use the training set for VAE training, and use the test set only for evaluation."
Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models, memory, or cloud computing instances used for running experiments.
Software Dependencies | No | The paper mentions using the "bnlearn package [61] in R" and "ENAS software [33]", but does not provide specific version numbers for these software components or other libraries/dependencies.
Experiment Setup | No | The paper refers to "Training details are in Appendix K" for more information, indicating that specific experimental setup details such as hyperparameters are not provided in the main text.
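The Dataset Splits row quotes a 90% training / 10% held-out split of the 19,020-architecture dataset. A minimal sketch of such a split is below; `split_dataset` is a hypothetical helper, not the paper's actual code (the authors' implementation is in the linked GitHub repository), and the fixed seed is an assumption for reproducibility.

```python
import random

def split_dataset(items, train_frac=0.9, seed=0):
    """Shuffle items and split into train/test sets.

    Hypothetical helper illustrating a 90/10 held-out split;
    not taken from the D-VAE codebase.
    """
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 19,020 architectures -> 17,118 train / 1,902 test
train, test = split_dataset(range(19020))
print(len(train), len(test))
```

Any concrete split additionally depends on details the paper defers (shuffling order, random seed), which is why version and setup information matters for exact reproduction.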