Learning to Configure Computer Networks with Neural Algorithmic Reasoning

Authors: Luca Beurer-Kellner, Martin Vechev, Laurent Vanbever, Petar Veličković

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct an extensive evaluation of our learning-based synthesizer with respect to both precision and scalability. We demonstrate that our learned synthesizer is up to 490x faster than a state-of-the-art SMT-based tool while producing high utility configurations which on average satisfy > 93% of provided constraints (Section 5).
Researcher Affiliation | Collaboration | Luca Beurer-Kellner1, Martin Vechev1, Laurent Vanbever1, Petar Veličković2; 1ETH Zurich, Switzerland; 2DeepMind
Pseudocode | No | The paper describes the model architecture and process in detail but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/eth-sri/learning-to-configure-networks
Open Datasets | Yes | Each dataset comprises 8 real-world topologies taken from the Topology Zoo [29], where the number of nodes lies between 0-18, 18-39, and 39-153, respectively. (A bucketing sketch based on these ranges is given after the table.)
Dataset Splits | No | No explicit training/validation/test dataset splits (e.g., percentages or absolute counts) are provided in the main text of the paper. It mentions training on "10,240 samples" and evaluating on Small (S), Medium (M), and Large (L) datasets, but the split methodology for these datasets is not specified beyond their generation process.
Hardware Specification | Yes | We run all experiments on an Intel(R) i9-9900X@3.5GHz machine with 64GB of system memory and an NVIDIA RTX 3080 GPU with 10GB of video memory.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as PyTorch, TensorFlow, or any GNN libraries. It mentions using GNNs and GAT layers, but without version numbers for these components, which limits reproducibility.
Experiment Setup | Yes | The processor PROCGAT is modelled as an iterative process. It consists of a 6-layer graph attention module which we apply for a total of 4 iterations. (An illustrative sketch of such a processor is given directly after the table.)
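
The "Experiment Setup" row quotes the paper's description of the PROCGAT processor: a 6-layer graph attention module applied for 4 iterations. The following is a minimal, hypothetical sketch of such an iterative GAT processor using PyTorch Geometric; the hidden width, head count, activation, and residual update between iterations are assumptions for illustration and are not taken from the paper or its released code.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv


class IterativeGATProcessor(nn.Module):
    """Hypothetical iterative processor: a 6-layer GAT stack applied 4 times."""

    def __init__(self, hidden_dim=128, num_layers=6, num_iterations=4, heads=4):
        super().__init__()
        self.num_iterations = num_iterations
        # concat=False keeps the node embedding width constant across layers.
        self.layers = nn.ModuleList(
            [GATConv(hidden_dim, hidden_dim, heads=heads, concat=False)
             for _ in range(num_layers)]
        )

    def forward(self, x, edge_index):
        # Re-apply the same 6-layer attention module for a fixed number of
        # iterations; the residual update between iterations is an assumption.
        for _ in range(self.num_iterations):
            h = x
            for layer in self.layers:
                h = torch.relu(layer(h, edge_index))
            x = x + h
        return x


# Toy usage: 5 nodes with 128-dimensional features and 4 directed edges.
x = torch.randn(5, 128)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
out = IterativeGATProcessor()(x, edge_index)
print(out.shape)  # torch.Size([5, 128])
```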
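
The "Open Datasets" row refers to Topology Zoo topologies grouped by node count (0-18, 18-39, 39-153). Below is a minimal sketch, assuming a local directory of Topology Zoo GraphML files, of how topologies could be bucketed into those ranges; the directory path and the handling of boundary values are assumptions and do not reflect the authors' preprocessing.

```python
import glob
import networkx as nx

# Node-count ranges quoted in the "Open Datasets" row (boundary handling assumed).
BUCKETS = {"S": (0, 18), "M": (18, 39), "L": (39, 153)}

buckets = {name: [] for name in BUCKETS}
for path in glob.glob("topology-zoo/*.graphml"):  # assumed local path to GraphML files
    n = nx.read_graphml(path).number_of_nodes()
    for name, (low, high) in BUCKETS.items():
        if low < n <= high:
            buckets[name].append(path)
            break

for name, paths in buckets.items():
    print(name, len(paths), "topologies")
```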