Neural Topological Ordering for Computation Graphs

Authors: Mukul Gagrani, Corrado Rainone, Yang Yang, Harris Teague, Wonseok Jeon, Roberto Bondesan, Herke van Hoof, Christopher Lott, Weiliang Zeng, Piero Zappi

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We train our model on a dataset of synthetically generated graphs called layered graphs. We show that our model outperforms, or is on par with, several topological ordering baselines while being significantly faster on synthetic graphs with up to 2k nodes. We also train and test our model on a set of real-world computation graphs, showing performance improvements. (An illustrative baseline-ordering sketch appears below the table.)
Researcher Affiliation | Collaboration | Mukul Gagrani (Qualcomm AI Research, mgagrani@qti.qualcomm.com); Corrado Rainone (Qualcomm AI Research, crainone@qti.qualcomm.com); Yang Yang (Google LLC); Harris Teague (Qualcomm AI Research); Wonseok Jeon (Qualcomm AI Research); Herke van Hoof (University of Amsterdam, Netherlands); Weiliang Will Zeng (Qualcomm AI Research); Piero Zappi (Qualcomm AI Research); Christopher Lott (Qualcomm AI Research); Roberto Bondesan (Qualcomm AI Research)
Pseudocode | No | The paper describes algorithms (e.g., for layered graph generation and decoding methods) but does not present them in a structured pseudocode block or a clearly labeled 'Algorithm' section.
Open Source Code | No | The code and the data are proprietary.
Open Datasets | Yes | To test our method, we introduce an algorithm for the generation of synthetic, layered, Neural Net-like computation graphs, allowing any researcher to generate a dataset of as many graphs as desired, of any desired size. We refer the reader to the appendix for more details on layered graphs, including their generation algorithm and some visual examples. (An illustrative generator sketch appears below the table.)
Dataset Splits | No | The paper states 'We train our model on 500-node layered graphs' and 'We test the performance of our model on a set of 300 unseen graphs'. For real-world graphs, it mentions 'We split this dataset into a training set and test set via a random 80/20 split'. However, it does not explicitly specify a validation dataset split (e.g., percentages or counts for a validation set) within the provided text.
Hardware Specification | No | The paper states 'Please see the appendix for details on compute resources', but the appendix content is not included in the provided text, so specific hardware details are not available.
Software Dependencies | No | The paper mentions software components such as 'Graph Neural Network', 'Topoformer', 'pygraphviz', 'MLP', and 'Transformer', but it does not specify any version numbers for these software dependencies (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | We train our model on 500-node layered graphs for 325 epochs, where in each epoch we generate a training set of 1000 new graphs. We use a sample size and beam size of 16 sequences, of which the best one is subsequently picked, for all our experiments. We train our model for 500 epochs and report the performance on the unseen test set at the end of training in Table 2. We run inference on our test set of 300 graphs 10 times for each model to make our run-time measurements more precise. (An illustrative best-of-16 selection sketch appears below the table.)
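
The layered-graph generation algorithm referenced in the Pseudocode and Open Datasets rows is only described in the paper's appendix, which is not reproduced here. The sketch below is a minimal, hypothetical generator under the assumption that nodes are grouped into layers and edges run only from earlier layers to later ones, which makes the graph acyclic by construction; the function name generate_layered_graph and its parameters num_layers, width, and edge_prob are illustrative, not the authors' interface.

    import random

    def generate_layered_graph(num_layers=6, width=8, edge_prob=0.3, seed=0):
        """Illustrative layered-DAG generator (assumed structure, not the paper's algorithm).

        Nodes are grouped into layers; every edge goes from a node in one layer
        to a node in a strictly later layer, so the graph is acyclic by construction.
        """
        rng = random.Random(seed)
        layers = []   # layers[i] holds the node ids in layer i
        edges = []    # (src, dst) pairs
        next_id = 0
        for _ in range(num_layers):
            layer = [next_id + k for k in range(rng.randint(1, width))]
            next_id += len(layer)
            layers.append(layer)
        for i, layer in enumerate(layers[:-1]):
            for u in layer:
                # Add forward edges at random; guarantee at least one successor
                # so no node in an intermediate layer is left disconnected.
                targets = [v for later in layers[i + 1:] for v in later
                           if rng.random() < edge_prob]
                if not targets:
                    targets = [rng.choice(layers[i + 1])]
                edges.extend((u, v) for v in targets)
        return next_id, edges

    # Example: a small graph (the paper trains on 500-node layered graphs).
    num_nodes, edges = generate_layered_graph()
    print(num_nodes, len(edges))

Because every edge points forward across layers, many orderings are valid; the question the paper studies is which of those topological orders performs best under the downstream scheduling objective.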
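The "topological ordering baselines" mentioned in the Research Type row are not enumerated in the extract above. For orientation only, here is one classic way to produce a valid topological order of a DAG (Kahn's algorithm); it is not claimed to be one of the paper's baselines.

    from collections import deque

    def kahn_topological_order(num_nodes, edges):
        """Kahn's algorithm: repeatedly emit a node with no remaining predecessors."""
        indegree = [0] * num_nodes
        succ = [[] for _ in range(num_nodes)]
        for u, v in edges:
            succ[u].append(v)
            indegree[v] += 1
        ready = deque(v for v in range(num_nodes) if indegree[v] == 0)
        order = []
        while ready:
            u = ready.popleft()
            order.append(u)
            for v in succ[u]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    ready.append(v)
        assert len(order) == num_nodes, "input graph must be acyclic"
        return order

    # Usage with the generator sketched above:
    # num_nodes, edges = generate_layered_graph()
    # order = kahn_topological_order(num_nodes, edges)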
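The Experiment Setup row notes a sample/beam size of 16 sequences "of which the best one is subsequently picked". A minimal sketch of that selection step follows; sample_ordering (one stochastic draw of a topological order from a trained model) and cost (the objective used to score an order) are placeholders, since the paper's actual decoding interface is not given here.

    def best_of_k(sample_ordering, cost, k=16):
        """Draw k candidate orderings and keep the lowest-cost one.

        Placeholder sketch of best-of-k selection; not the paper's implementation.
        """
        candidates = [sample_ordering() for _ in range(k)]
        return min(candidates, key=cost)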