Convexified Graph Neural Networks for Distributed Control in Robotic Swarms

Authors: Saar Cohen, Noa Agmon

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, our methods are evaluated through the task in which robots aim to align their velocities and regulate their spacing. We compare our models to architectures found in the literature, where the latter are solely trained offline. 8.1 Experimental Setup: Evaluations [Cohen and Agmon, 2021] were conducted using a 12GB NVIDIA Tesla K80 GPU, implemented in PyTorch v1.7.0, accelerated with CUDA v10.1, and situated in the Gym Flock [Tolstaya et al., 2020] flocking environment. ... 8.2 Results: We first compare both A-GNNs and TA-GNNs to their Half Convex counterparts and a CTA-GNN (Fig. 1a).
Researcher Affiliation | Academia | Saar Cohen, Noa Agmon, Department of Computer Science, Bar-Ilan University, Israel; saar30@gmail.com, agmon@cs.biu.ac.il
Pseudocode | Yes | Algorithm 1: Learning Multi-Layer Cx-GNNs
Open Source Code | Yes | [Cohen and Agmon, 2021] Saar Cohen and Noa Agmon. Code implementation. https://github.com/saarcohen30/convexified-gnn, 2021.
Open Datasets | Yes | Evaluations [Cohen and Agmon, 2021] were conducted using a 12GB NVIDIA Tesla K80 GPU, implemented in PyTorch v1.7.0, accelerated with CUDA v10.1, and situated in the Gym Flock [Tolstaya et al., 2020] flocking environment. For training each type of GNN, the Dataset Aggregation (DAgger) algorithm is used, following the learner's policy instead of the expert's with probability 1 − β when collecting training trajectories [Ross et al., 2011], where β is decayed by a factor of 0.993 to a minimum of 0.5. The ADAM optimizer is used with learning rate 5 × 10^-4, decaying factors 0.9 and 0.999, and an MSE cost function. The dataset contains 400 trajectories for training, 40 for validation and 40 for testing, each of length 200.
Dataset Splits | Yes | The dataset contains 400 trajectories for training, 40 for validation and 40 for testing, each of length 200.
Hardware Specification | Yes | Evaluations [Cohen and Agmon, 2021] were conducted using a 12GB NVIDIA Tesla K80 GPU, implemented in PyTorch v1.7.0, accelerated with CUDA v10.1, and situated in the Gym Flock [Tolstaya et al., 2020] flocking environment.
Software Dependencies | Yes | Evaluations [Cohen and Agmon, 2021] were conducted using a 12GB NVIDIA Tesla K80 GPU, implemented in PyTorch v1.7.0, accelerated with CUDA v10.1, and situated in the Gym Flock [Tolstaya et al., 2020] flocking environment.
Experiment Setup | Yes | For training each type of GNN, the Dataset Aggregation (DAgger) algorithm is used, following the learner's policy instead of the expert's with probability 1 − β when collecting training trajectories [Ross et al., 2011], where β is decayed by a factor of 0.993 to a minimum of 0.5. The ADAM optimizer is used with learning rate 5 × 10^-4, decaying factors 0.9 and 0.999, and an MSE cost function. The dataset contains 400 trajectories for training, 40 for validation and 40 for testing, each of length 200. All GNNs consist of two hidden layers with 32 neurons each, where K = 3, σ = tanh for nonconvex GNNs and σ = sin for convex GNNs. For our baseline scenario, we consider R = 1 m, v = 3 m/s, N = 100. The robots' locations were initialized uniformly on a disc of radius n to normalize the density of agents for changing flock sizes. α_gnn = 1 and α_cx-gnn = 0.001 are chosen for all HC-GNNs. (Hedged code sketches of this training setup follow the table.)
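As a concrete reading of the data-collection procedure quoted in the Open Datasets and Experiment Setup rows, the following is a minimal sketch of DAgger-style trajectory collection with a decaying expert-mixing probability β. The names `env`, `expert_policy`, and `learner_policy`, the Gym-style step interface, and the default horizon of 200 steps are illustrative assumptions, not code from the paper or its repository.

```python
import random

def collect_trajectory(env, expert_policy, learner_policy, beta, horizon=200):
    """Collect one trajectory, mixing expert and learner actions (DAgger-style).

    Assumptions: `expert_policy` and `learner_policy` map a state to an action,
    and `env` follows the classic Gym API (reset/step returning 4 values).
    The expert's action is always recorded as the imitation-learning label.
    """
    states, expert_actions = [], []
    state = env.reset()
    for _ in range(horizon):
        expert_action = expert_policy(state)
        states.append(state)
        expert_actions.append(expert_action)
        # Follow the expert with probability beta, the learner otherwise.
        action = expert_action if random.random() < beta else learner_policy(state)
        state, _, done, _ = env.step(action)
        if done:
            break
    return states, expert_actions

def decay_beta(beta, factor=0.993, floor=0.5):
    """Decay beta by the quoted factor of 0.993, never dropping below 0.5."""
    return max(floor, beta * factor)
```

Per the quoted setup, β would be multiplied by 0.993 after each collected trajectory until it reaches the 0.5 floor, so later trajectories are increasingly gathered under the learner's own state distribution.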
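The optimizer and loss settings quoted above can likewise be expressed as a short PyTorch sketch. The model below is a plain two-hidden-layer MLP stand-in (32 neurons per layer) used only to keep the snippet self-contained; the paper's actual models are graph neural networks with K = 3 filter taps, whose construction is not reproduced here, and the input/output sizes are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative feature/action dimensions, not taken from the paper.
in_dim, out_dim = 6, 2

# Stand-in for a GNN controller: two hidden layers of 32 neurons with tanh,
# matching the quoted layer widths and the nonlinearity used for nonconvex GNNs.
model = nn.Sequential(
    nn.Linear(in_dim, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, out_dim),
)

# ADAM with learning rate 5 x 10^-4 and decaying factors (0.9, 0.999), as quoted.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999))

# MSE cost between the learner's actions and the recorded expert actions.
criterion = nn.MSELoss()
```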