Robust Angular Synchronization via Directed Graph Neural Networks

Authors: Yixuan He, Gesine Reinert, David Wipf, Mihai Cucuringu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on extensive data sets demonstrate that GNNSync attains competitive, and often superior, performance against a comprehensive set of baselines for the angular synchronization problem and its extension, validating the robustness of GNNSync even at high noise levels.
Researcher Affiliation | Collaboration | Yixuan He & Gesine Reinert, Department of Statistics, University of Oxford, Oxford, United Kingdom ({yixuan.he, reinert}@stats.ox.ac.uk); David Wipf, Amazon, Shanghai, China (daviwipf@amazon.com); Mihai Cucuringu, Department of Statistics, University of Oxford, Oxford, United Kingdom (mihai.cucuringu@stats.ox.ac.uk)
Pseudocode | Yes | Algorithm 1 (Projected Gradient Steps); an illustrative sketch of such projected gradient steps is given after the table.
Open Source Code | Yes | To fully reproduce our results, anonymized code is available at https://github.com/SherylHYX/GNN_Sync.
Open Datasets | Yes | For real-world data, we conduct sensor network localization on the U.S. map and the PACM point cloud data set (Cucuringu et al., 2012a).
Dataset Splits | No | The paper describes parameters for synthetic data generation (e.g., edge density parameter p ∈ {0.05, 0.1, 0.15}, noise level η ∈ {0, 0.1, ..., 0.9}) that define the experimental conditions, but it does not specify traditional train/validation/test splits of a fixed dataset. (An illustrative synthetic-instance generator appears after the table.)
Hardware Specification | Yes | Experiments were conducted on two compute nodes, each with 8 Nvidia Tesla T4 GPUs, 96 Intel Xeon Platinum 8259CL CPUs @ 2.50GHz, and 378GB RAM.
Software Dependencies | No | The paper mentions software such as PyTorch autograd and NetworkX but does not specify version numbers, which are necessary for full reproducibility.
Experiment Setup | Yes | We use the whole graph for training for at most 1000 epochs, and stop early if the loss value does not decrease for 200 epochs. We use Stochastic Gradient Descent (SGD) as the optimizer and ℓ2 regularization with weight decay 5 × 10⁻⁴ to avoid overfitting. We use a learning rate of 0.005 throughout. (An illustrative training loop matching this setup appears after the table.)
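
The pseudocode entry refers to projected gradient steps on the circle. The following is a minimal NumPy sketch of that idea only: a gradient move on unit-modulus variables followed by an entrywise projection back onto the unit circle. The function name, step size, and iteration count are illustrative assumptions; this is not the paper's Algorithm 1 verbatim.

```python
import numpy as np

def projected_gradient_sync(A, mask, n_iter=100, step=0.1, seed=0):
    """Illustrative projected gradient steps for angular synchronization.

    A[i, j] is a noisy measurement of theta_i - theta_j on observed edges
    (mask[i, j] = True), so H = exp(1j * A) restricted to observed edges is
    Hermitian. Each iteration takes a gradient move followed by a projection
    of every entry back onto the unit circle. Step size and iteration count
    are assumptions, not values from the paper.
    """
    n = A.shape[0]
    H = np.exp(1j * A) * mask                        # Hermitian offset matrix
    rng = np.random.default_rng(seed)
    z = np.exp(1j * rng.uniform(0.0, 2 * np.pi, n))  # random unit-modulus start
    for _ in range(n_iter):
        z = z + step * (H @ z)                       # gradient step
        z = z / np.abs(z)                            # project entries onto |z_i| = 1
    return np.angle(z) % (2 * np.pi)                 # angle estimates, up to a global shift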
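
The dataset-splits entry quotes an edge density p and noise level η for synthetic data. As a rough illustration of how such an instance could be generated, the sketch below assumes an Erdős–Rényi measurement graph with edge density p and an outlier-style noise model in which an observed offset is replaced by a uniformly random angle with probability η; the paper's exact noise models and graph sizes may differ.

```python
import numpy as np

def make_synthetic_instance(n=360, p=0.1, eta=0.3, seed=0):
    """Illustrative generator for a noisy angular-synchronization instance.

    Assumes an Erdos-Renyi measurement graph (edge density p) and an
    outlier-style noise model (corruption probability eta). Parameter names
    mirror the grid quoted in the table, but this is an assumption-laden
    sketch, not the paper's data-generation code.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2 * np.pi, n)            # ground-truth angles
    A = np.zeros((n, n))
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                      # keep the edge with probability p
                if rng.random() < eta:                # corrupt it with probability eta
                    offset = rng.uniform(0.0, 2 * np.pi)
                else:
                    offset = (theta[i] - theta[j]) % (2 * np.pi)
                A[i, j], A[j, i] = offset, -offset
                mask[i, j] = mask[j, i] = True
    return theta, A, mask
```

Together with the previous sketch, `theta, A, mask = make_synthetic_instance()` followed by `projected_gradient_sync(A, mask)` yields angle estimates that can be compared to `theta` up to a global rotation.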
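
Finally, the experiment-setup entry fully specifies the optimizer and stopping rule. The sketch below instantiates that quoted protocol in PyTorch; `model`, `data`, and `loss_fn` are hypothetical placeholders, not the paper's GNNSync components.

```python
import torch

def train_full_graph(model, data, loss_fn, max_epochs=1000, patience=200):
    """Illustrative training loop matching the quoted setup: full-graph
    training, SGD with learning rate 0.005 and weight decay 5e-4, at most
    1000 epochs, and early stopping once the loss has not decreased for 200
    epochs. Model, data object, and loss function are placeholders."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, weight_decay=5e-4)
    best_loss, stale_epochs = float("inf"), 0
    for _ in range(max_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(data), data)   # unsupervised loss on the whole graph
        loss.backward()
        optimizer.step()
        if loss.item() < best_loss:
            best_loss, stale_epochs = loss.item(), 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:    # early stopping
                break
    return model
```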