Interferometric Graph Transform: a Deep Unsupervised Graph Representation

Authors: Edouard Oyallon

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental
"We test our algorithm on various and challenging tasks such as image classification (MNIST, CIFAR-10), community detection (Authorship, Facebook graph) and action recognition from 3D skeleton videos (SBU, NTU), exhibiting a new state of the art in spectral graph unsupervised settings." (Section 4, Numerical experiments)
Researcher Affiliation | Academia
"Edouard Oyallon, CNRS, LIP6. Correspondence to: Edouard Oyallon <edouard.oyallon@lip6.fr>."
Pseudocode | No
The paper describes algorithmic steps, such as in Section 3.3.2, "A projected gradient method", but does not include any clearly labeled "Pseudocode" or "Algorithm" blocks or figures.
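Since the paper describes its projected gradient method in prose only, the following is a minimal sketch of what a generic projected-gradient iteration looks like in PyTorch. It is not the paper's objective or constraint set: `loss_fn`, `project`, and `unit_norm_rows` are hypothetical placeholders chosen only to illustrate the step-then-project structure.

```python
import torch

def projected_gradient_step(W, loss_fn, project, lr=1.0):
    # Generic projected-gradient iteration: take a gradient step on the
    # objective, then project back onto the constraint set.
    W = W.detach().requires_grad_(True)
    loss_fn(W).backward()
    with torch.no_grad():
        W = W - lr * W.grad   # unconstrained descent step
    return project(W)         # map back onto the feasible set

# Hypothetical projection, for illustration only: rescale each row of W
# to unit L2 norm.
def unit_norm_rows(W):
    return W / W.norm(dim=1, keepdim=True).clamp_min(1e-12)
```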
Open Source Code | Yes
"The corresponding code can be found here: https://github.com/edouardoyallon/interferometric-graph-transform."
Open Datasets | Yes
"We test our algorithm on various and challenging tasks such as image classification (MNIST, CIFAR-10), community detection (Authorship, Facebook graph) and action recognition from 3D skeleton videos (SBU, NTU)..."
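The two image datasets are available through standard loaders. As one possible route (the paper does not specify its loading code, and the root path and transform below are placeholders), torchvision retrieves both:

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist_train = datasets.MNIST("data/", train=True, download=True, transform=to_tensor)
mnist_test  = datasets.MNIST("data/", train=False, download=True, transform=to_tensor)
cifar_train = datasets.CIFAR10("data/", train=True, download=True, transform=to_tensor)
cifar_test  = datasets.CIFAR10("data/", train=False, download=True, transform=to_tensor)
```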
Dataset Splits | Yes
"CIFAR-10 is a challenging dataset of small 32×32 colored images, which consists of 5×10^4 images for training and 10^4 for testing. [...] MNIST is a simple dataset of small 28×28 images, which consists of 6×10^4 images for training and 10^4 for testing. [...] Each SBU sample describes a two-person interaction, and SBU contains 230 sequences and 8 classes (6,614 frames). The accuracy is reported as the mean of the accuracies of a 5-fold procedure. [...] In cross-subject evaluation, 40 subjects are split into training and testing groups of 20 subjects each, such that the training and testing sets have respectively 40,320 and 16,560 samples. For cross-view evaluation, the samples are split according to different camera views, such that the training and testing sets have respectively 37,920 and 18,960 samples."
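The quoted SBU protocol (mean accuracy over 5 folds) maps onto a standard K-fold loop. The sketch below uses a generic scikit-learn KFold; whether the paper uses the dataset's predefined folds or a random split is not stated in the quote, and the feature matrix, labels, and classifier are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.random.randn(230, 128)          # placeholder features: 230 SBU sequences
y = np.random.randint(0, 8, size=230)  # placeholder labels: 8 interaction classes

fold_acc = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # fit a classifier on (X[train_idx], y[train_idx]) and
    # evaluate on (X[test_idx], y[test_idx]) here
    fold_acc.append(0.0)  # replace with the measured fold accuracy
print("mean 5-fold accuracy:", np.mean(fold_acc))
```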
Hardware Specification | No
"The author was supported by a GPU donation from NVIDIA."
Software Dependencies | No
The paper mentions software such as the "linear SVM, as implemented by Fan et al. (2008)" and uses SGD, but it does not specify any programming languages or specific library versions required for reproducibility.
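Fan et al. (2008) is the LIBLINEAR paper, and scikit-learn's LinearSVC is backed by that library, so the supervised evaluation step could plausibly be set up as below. The feature arrays are placeholders standing in for the unsupervised IGT representations, and the regularization constant C is an assumption (it is not given in the quote):

```python
import numpy as np
from sklearn.svm import LinearSVC  # backed by LIBLINEAR (Fan et al., 2008)

Z_train = np.random.randn(1000, 256)      # placeholder IGT features
y_train = np.random.randint(0, 10, 1000)  # placeholder labels
Z_test  = np.random.randn(200, 256)
y_test  = np.random.randint(0, 10, 200)

clf = LinearSVC(C=1.0).fit(Z_train, y_train)  # C=1.0 is an assumed default
print("test accuracy:", clf.score(Z_test, y_test))
```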
Experiment Setup | Yes
"The operator W1 is learned via SGD with batch size 64, for 5 epochs. We reduced an initial learning rate of 1.0 by 10 at iterations 500, 1000 and 1500. [...] We used an order 2 IGT, with K1 = K2 = 30 filters for each operator. We train our operators for 5 epochs, with a batch size of 64 and an initial learning rate of 1.0, dropped by 10 at iterations 10, 20 and 30. [...] We use K1 = 10 and K2 = 5 filters respectively for our two learned operators. We trained our representation via SGD, with a batch size of 64 and an initial learning rate of 1.0, dropped by 10 at iterations 100, 200 and 300."
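Reading "reduced by 10" as division by 10, the first quoted schedule maps directly onto SGD with a per-iteration MultiStepLR in PyTorch. This is a sketch under that assumption: the operator shape, the placeholder batch, and the loss are stand-ins, not the paper's IGT objective.

```python
import torch

W1 = torch.nn.Linear(64, 30, bias=False)   # stand-in for the learned operator
opt = torch.optim.SGD(W1.parameters(), lr=1.0)
sched = torch.optim.lr_scheduler.MultiStepLR(
    opt, milestones=[500, 1000, 1500], gamma=0.1)  # lr: 1.0 -> 0.1 -> 0.01 -> 0.001

for it in range(2000):              # stands in for 5 epochs of batches of size 64
    x = torch.randn(64, 64)         # placeholder batch
    loss = W1(x).pow(2).mean()      # placeholder loss, not the IGT objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()                    # stepped per iteration, matching the quoted milestones
```

The other two quoted configurations reuse the same pattern with different milestones ([10, 20, 30] and [100, 200, 300]) and filter counts.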