Graph Convolutional Gaussian Processes

Authors: Ian Walker, Ben Glocker

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present applications of graph convolutional Gaussian processes to images and triangular meshes, demonstrating their versatility and effectiveness, comparing favorably to existing methods, despite being relatively simple models. We present applications of GCGPs to graphs which are Euclidean sampling grids (images) and further demonstrate the GCGP's performance when learning on non-Euclidean domains for classification tasks. We apply our method to triangular meshes and to an MNIST superpixel dataset, where each image is represented as a distinct graph. While graph convolutional GPs are shallow, though wide, the results are promising for such relatively terse models, and indicate that GCGPs can provide a simple and effective foundation for more complex models in the future. In this section, we present applications of GCGPs to both regular domains (images) and non-regular domains (general graphs, meshes).
Researcher Affiliation | Academia | Department of Computing, Imperial College London, United Kingdom. Correspondence to: Ian Walker <ian.walker14@imperial.ac.uk>.
Pseudocode | No | The paper describes procedures and algorithms, particularly for angular distance computation, but it does not provide any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures.
Open Source Code | No | The paper mentions using 'the GPFlow package' (Matthews et al., 2017), which is a third-party library, but it does not provide an explicit statement or link for the authors' own implementation of the described methodology.
Open Datasets | Yes | We first consider classification of the standard MNIST dataset. To demonstrate the performance of GCGPs on general graphs, we apply the model to the 75-vertex MNIST superpixel dataset, following the methodology of Monti et al. (2017). The data is a collection of 100 meshes from the MPI Faust dataset (Bogo et al., 2014).
Dataset Splits | No | The paper specifies training and test set splits, but does not explicitly mention a separate validation set with specific percentages or counts for reproduction. For instance: 'The resulting dataset follows the same training and test set split as the standard MNIST dataset with 60,000 and 10,000 observations in each respectively.' No explicit validation split is mentioned.
Hardware Specification | No | The paper states that experiments were implemented using the GPFlow package but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run these experiments.
Software Dependencies | No | All experiments were implemented using the GPFlow package (Matthews et al., 2017). The paper names the software package used, but does not specify a version number for it.
Experiment Setup | Yes | During training, mini-batches of size 200 were used along with 750 inducing points and a learning rate of 0.001. The ρk's were initialized to {0, 1, 2}, such that the radial bins would be centered on a given pixel, plus the rings one and two pixels away. The σρ was initialized to 1, so 68% of the weight is within one pixel distance from the central vertex. For these experiments we use a batch size of 30 along with 750 inducing points and a learning rate of 0.001.
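The radial-bin initialization quoted in the setup row can be checked numerically. The sketch below is an assumption-laden illustration, not the paper's implementation: it assumes Gaussian radial weights of the form exp(-(r - ρk)² / (2σρ²)), with the bin centers ρk = {0, 1, 2} and σρ = 1 taken from the quoted values, and verifies the quoted 68%-within-one-pixel figure.

```python
import math

# Assumed Gaussian radial-bin parameterization (illustrative only).
RHO_KS = [0.0, 1.0, 2.0]   # bin centers: the pixel itself, rings 1 and 2 pixels away
SIGMA_RHO = 1.0            # initial bin width

def radial_weights(r):
    """Unnormalized Gaussian weight of a point at distance r for each radial bin."""
    return [math.exp(-(r - rho) ** 2 / (2 * SIGMA_RHO ** 2)) for rho in RHO_KS]

def mass_within(r, sigma=SIGMA_RHO):
    """Fraction of a 1-D Gaussian's mass within distance r of its mean."""
    return math.erf(r / (sigma * math.sqrt(2)))

# With sigma_rho = 1, roughly 68% of a bin's weight lies within one pixel
# of its center, matching the 68% figure quoted in the setup description.
print(round(mass_within(1.0), 4))  # → 0.6827

# A pixel exactly at a bin center receives that bin's maximal weight of 1.
print(radial_weights(1.0)[1])  # → 1.0
```

This is only a sanity check on the stated initialization; the actual patch operator in the paper also involves angular bins and learned parameters.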