GOT: An Optimal Transport framework for Graph comparison
Authors: Hermina Petric Maretic, Mireille El Gheche, Giovanni Chierchia, Pascal Frossard
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We finally demonstrate the performance of our novel framework on different tasks like graph alignment, graph classification and graph signal prediction, and we show that our method leads to significant improvement with respect to the state-of-the-art algorithms. |
| Researcher Affiliation | Academia | Hermina Petric Maretic, École Polytechnique Fédérale de Lausanne, Signal Processing Laboratory (LTS4), Lausanne, Switzerland (hermina.petricmaretic@epfl.ch); Mireille El Gheche, École Polytechnique Fédérale de Lausanne, Signal Processing Laboratory (LTS4), Lausanne, Switzerland (mireille.elgheche@epfl.ch); Giovanni Chierchia, Université Paris-Est, LIGM (UMR 8049), CNRS, ENPC, ESIEE Paris, UPEM, F-93162 Noisy-le-Grand, France (giovanni.chierchia@esiee.fr); Pascal Frossard, École Polytechnique Fédérale de Lausanne, Signal Processing Laboratory (LTS4), Lausanne, Switzerland (pascal.frossard@epfl.ch) |
| Pseudocode | Yes | Algorithm 1 Approximate solution to the graph alignment problem defined in (8). (A hedged code sketch of this algorithm follows the table.) |
| Open Source Code | Yes | The code is available at https://github.com/Hermina/GOT. |
| Open Datasets | Yes | We use the MNIST dataset, which contains around 60000 images of size 28×28 displaying handwritten digits from 0 to 9, with 6000 per class. ... We repeated the same experiment on Fashion MNIST |
| Dataset Splits | No | The paper uses datasets like MNIST and Fashion MNIST but does not explicitly provide specific training/validation/test dataset splits (e.g., percentages or sample counts) needed to reproduce the experiments involving their proposed method or its applications like graph classification. |
| Hardware Specification | No | The paper mentions the software used for implementation (e.g., "implemented using automatic differentiation (in PyTorch with AMSGrad)"), but it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper states that the algorithm was "implemented using automatic differentiation (in PyTorch with AMSGrad)". However, it does not provide specific version numbers for PyTorch or any other ancillary software dependencies. |
| Experiment Setup | Yes | Prior to running experiments, we chose the parameters τ (Sinkhorn) and γ (learning rate) with grid search, while S (sampling size) was fixed empirically. In all experiments, we set τ = 5, γ = 0.2, and S = 30. We set the maximal number of Sinkhorn iterations to 10, and we run stochastic gradient descent for 3000 iterations (even though the algorithm converges long before, after around 1000 iterations, typically). As our algorithm seems robust to different initialisation, we used random initialisation in all our experiments. (A toy configuration using these settings follows the table.) |
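
For context on the Pseudocode row, here is a minimal sketch of the overall structure of Algorithm 1. It is an illustrative reconstruction, not the authors' released code (https://github.com/Hermina/GOT): each graph Laplacian L is treated as the Gaussian N(0, L†), the permutation is relaxed to a doubly-stochastic matrix via Sinkhorn normalisation, and the squared 2-Wasserstein distance W2² = tr(Σ1) + tr(Σ2) − 2 tr((Σ1^{1/2} Σ2 Σ1^{1/2})^{1/2}) is minimised with AMSGrad. Note that the paper estimates this loss stochastically from S = 30 samples; the closed-form Bures expression below is a simplification, and the alignment convention P Σ2 Pᵀ is an assumption.

```python
# A minimal sketch of the GOT alignment procedure (Algorithm 1), for
# illustration only; the authors' code is at https://github.com/Hermina/GOT.
import torch


def sinkhorn(log_alpha, tau=5.0, n_iter=10):
    """Relax a permutation to a doubly-stochastic matrix via Sinkhorn
    normalisation (tau = 5 and 10 iterations follow the paper's settings)."""
    P = torch.exp(log_alpha / tau)
    for _ in range(n_iter):
        P = P / P.sum(dim=1, keepdim=True)   # normalise rows
        P = P / P.sum(dim=0, keepdim=True)   # normalise columns
    return P


def sqrtm_psd(A):
    """Differentiable matrix square root of a symmetric PSD matrix."""
    w, V = torch.linalg.eigh(A)
    return V @ torch.diag(torch.clamp(w, min=0.0).sqrt()) @ V.T


def w2_gaussian(S1, S2):
    """Closed-form squared 2-Wasserstein distance between N(0, S1) and N(0, S2).
    The paper approximates this stochastically with S = 30 samples; the
    closed-form expression here is a simplification."""
    R = sqrtm_psd(S1)
    return torch.trace(S1) + torch.trace(S2) - 2.0 * torch.trace(sqrtm_psd(R @ S2 @ R))


def align(L1, L2, n_iter=3000, lr=0.2):
    """Find a soft assignment between the nodes of two graph Laplacians."""
    n = L1.shape[0]
    S1 = torch.linalg.pinv(L1)   # graph 1 as the Gaussian N(0, L1^+)
    S2 = torch.linalg.pinv(L2)   # graph 2 as the Gaussian N(0, L2^+)
    log_alpha = torch.randn(n, n, requires_grad=True)  # random init, as in the paper
    opt = torch.optim.Adam([log_alpha], lr=lr, amsgrad=True)  # "PyTorch with AMSGrad"
    for _ in range(n_iter):
        P = sinkhorn(log_alpha)
        loss = w2_gaussian(S1, P @ S2 @ P.T)  # alignment convention is an assumption
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sinkhorn(log_alpha).detach()
```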
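
And a toy usage of the sketch above, wiring in the reported settings from the Experiment Setup row (τ = 5, learning rate γ = 0.2, 10 Sinkhorn iterations, 3000 optimisation steps, random initialisation). The random test graph, the permuted copy, and the argmax rounding are illustrative assumptions; the released code may round the soft assignment differently (e.g., with the Hungarian algorithm).

```python
# Hypothetical smoke test: recover a known node permutation with the
# reported hyperparameters (defaults of sinkhorn() and align() above).
torch.manual_seed(0)
n = 10
A = (torch.rand(n, n) < 0.4).float()
A = torch.triu(A, diagonal=1)
A = A + A.T                          # random undirected adjacency matrix
L1 = torch.diag(A.sum(dim=1)) - A    # combinatorial Laplacian
perm = torch.randperm(n)
L2 = L1[perm][:, perm]               # the same graph with shuffled nodes
P = align(L1, L2)                    # soft doubly-stochastic assignment
mapping = P.argmax(dim=1)            # greedy rounding to a hard matching (assumption)
```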