PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs

Authors: Zehao Dong, Muhan Zhang, Fuhai Li, Yixin Chen

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the superiority of PACE through encoder-dependent optimization subroutines that search the optimal DAG structure based on the learned DAG embeddings. Experiments show that PACE not only improves the effectiveness over previous sequential DAG encoders with a significantly boosted training and inference speed, but also generates smooth latent (DAG encoding) spaces that are beneficial to downstream optimization subroutines."
Researcher Affiliation | Academia | "1 Department of Computer Science & Engineering, Washington University in St. Louis, St. Louis, USA; 2 Institute for Artificial Intelligence, Peking University, Beijing, China; 3 Beijing Institute for General Artificial Intelligence, Beijing, China; 4 Institute for Informatics and Department of Pediatrics, Washington University in St. Louis, St. Louis, USA. Correspondence to: Yixin Chen <chen@cse.wustl.edu>."
Pseudocode | Yes | "Algorithm 1 DFS Algorithm... Algorithm 2 Floyd Algorithm" (illustrative sketch after the table)
Open Source Code | Yes | "Our source code is available at https://github.com/zehao-dong/PACE."
Open Datasets | Yes | "The dataset NA consists of approximately 19K neural architectures generated by the software ENAS (Pham et al., 2018)... The dataset BN consists of 200K Bayesian networks randomly generated by the bnlearn package (Scutari, 2010)... NAS101 (Ying et al., 2019)... NAS301 (Siems et al., 2020)... OGBG-CODE2 (Hu et al., 2020)."
Dataset Splits | Yes | "Following the experimental settings used in (Zhang et al., 2019), PACE is evaluated under a VAE architecture, and we take 90% NA/BN data as the training set and hold out the rest for testing." (split sketch after the table)
Hardware Specification | Yes | "All the experiments are done on NVIDIA Tesla P100 12GB GPUs."
Software Dependencies | No | The paper mentions software such as ENAS, the bnlearn package, and Nauty, but does not give version numbers for any key software components or libraries (e.g., Python, PyTorch).
Experiment Setup | Yes | "In the experiments, PACE uses 3 Transformer encoder blocks to boost the training and inference speed. The dimension of the embedding layer that maps node types to embeddings is 64. The output dimension of the 1-layer GNN in dag2seq is also 64. On NA and BN, we concatenate the positional encodings and node type embeddings as the node features fed into the first Transformer encoder block. On NAS101 and NAS301, we use the summation of positional encodings and node type embeddings, instead." (configuration sketch after the table)
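
The Pseudocode row points to Algorithm 1 (a DFS routine) and Algorithm 2 (a Floyd routine) in the paper. Below is only an illustrative sketch of what such routines typically compute, not the authors' pseudocode: a depth-first traversal that assigns each DAG node a visiting index, and a Floyd-Warshall pass that returns all-pairs shortest-path distances. The names dfs_order and floyd_warshall, the adjacency-list input format, and the unit edge weights are our assumptions.

    # Illustrative sketch only; the paper's Algorithm 1 (DFS) and Algorithm 2 (Floyd)
    # may differ in details. `adj` maps each DAG node to its list of successors.
    import math

    def dfs_order(adj, roots):
        """Assign every reachable node a depth-first visiting index."""
        order, seen = {}, set()

        def visit(u):
            if u in seen:
                return
            seen.add(u)
            order[u] = len(order)          # index given at first visit
            for v in adj.get(u, []):
                visit(v)

        for r in roots:
            visit(r)
        return order

    def floyd_warshall(n, edges):
        """All-pairs shortest-path distances on n nodes with unit edge weights."""
        dist = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
        for u, v in edges:
            dist[u][v] = 1
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist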
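
The 90%/10% split quoted in the Dataset Splits row can be reproduced along the lines below. This is our minimal sketch, not the authors' script: the random seed, the shuffling policy, and the function name split_90_10 are assumptions.

    import random

    def split_90_10(dags, seed=0):
        """Hold out 10% of the DAGs for testing (illustrative sketch only)."""
        idx = list(range(len(dags)))
        random.Random(seed).shuffle(idx)
        cut = int(0.9 * len(idx))
        train = [dags[i] for i in idx[:cut]]
        test = [dags[i] for i in idx[cut:]]
        return train, test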
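
The Experiment Setup row translates into a small configuration sketch along the following lines (our reading, not the released PACE code): a 64-dimensional node-type embedding, 3 Transformer encoder blocks, and either concatenation (NA/BN) or summation (NAS101/NAS301) of positional encodings with node-type embeddings. The 1-layer GNN in dag2seq that produces the 64-dimensional positional encodings and any DAG-aware attention masking are omitted here, so pos_enc is taken as given; the linear projection after concatenation, the number of attention heads, and the class name PACEEncoderSketch are assumptions.

    import torch
    import torch.nn as nn

    class PACEEncoderSketch(nn.Module):
        """Illustrative sketch of the reported setup, not the released PACE code."""

        def __init__(self, num_node_types, d_model=64, num_blocks=3,
                     combine="concat", nhead=8):
            super().__init__()
            self.type_emb = nn.Embedding(num_node_types, d_model)  # node-type embedding, dim 64
            self.combine = combine
            # Input width doubles when positional encodings are concatenated (NA/BN).
            in_dim = 2 * d_model if combine == "concat" else d_model
            self.input_proj = nn.Linear(in_dim, d_model)            # assumption, not from the paper
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                               batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=num_blocks)  # 3 encoder blocks

        def forward(self, node_types, pos_enc):
            # node_types: (batch, num_nodes) long tensor; pos_enc: (batch, num_nodes, 64)
            h = self.type_emb(node_types)
            if self.combine == "concat":    # NA / BN setting
                h = torch.cat([h, pos_enc], dim=-1)
            else:                           # NAS101 / NAS301 setting
                h = h + pos_enc
            return self.blocks(self.input_proj(h))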