Deep Learning with Topological Signatures

Authors: Christoph Hofer, Roland Kwitt, Marc Niethammer, Andreas Uhl

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Classification experiments on 2D object shapes and social network graphs demonstrate the versatility of the approach and, in case of the latter, we even outperform the state-of-the-art by a large margin." (Abstract) ... "To demonstrate the versatility of the proposed approach, we present experiments with two totally different types of data: (1) 2D shapes of objects, represented as binary images and (2) social network graphs, given by their adjacency matrix. In both cases, the learning task is classification." (Section 5)
Researcher Affiliation | Academia | Christoph Hofer (Department of Computer Science, University of Salzburg, Austria; chofer@cosy.sbg.ac.at), Roland Kwitt (Department of Computer Science, University of Salzburg, Austria; Roland.Kwitt@sbg.ac.at), Marc Niethammer (UNC Chapel Hill, NC, USA; mn@cs.unc.edu), Andreas Uhl (Department of Computer Science, University of Salzburg, Austria; uhl@cosy.sbg.ac.at)
Pseudocode | No | No pseudocode or algorithm blocks are explicitly labeled or presented in the paper.
Open Source Code | Yes | "Source code is publicly-available at https://github.com/c-hofer/nips2017."
Open Datasets | Yes | "We apply persistent homology combined with our proposed input layer to two different datasets of binary 2D object shapes: (1) the Animal dataset, introduced in [3] which consists of 20 different animal classes, 100 samples each; (2) the MPEG-7 dataset which consists of 70 classes of different object/animal contours, 20 samples each (see [21] for more details)." ... "We evaluate our approach on the challenging problem of social network classification, using the two largest benchmark datasets from [31], i.e., reddit-5k (5 classes, 5k graphs) and reddit-12k (11 classes, 12k graphs)."
Dataset Splits | No | "In each experiment we ensured a balanced group size (per label) and used a 90/10 random training/test split; all reported results are averaged over five runs with fixed ν = 0.1." (Only the split protocol is described; the concrete train/test partitions are not released. A sketch of the described protocol appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | "All experiments were implemented in PyTorch, using DIPHA and Perseus [23]." (Dependencies are named, but no version numbers are given.)
Experiment Setup | Yes | "We use cross-entropy loss to train the network for 400 epochs, using stochastic gradient descent (SGD) with mini-batches of size 128 and an initial learning rate of 0.1 (halved every 25-th epoch)." ... "We train the network for 500 epochs using SGD and cross-entropy loss with an initial learning rate of 0.1 (reddit_5k), or 0.4 (reddit_12k)." (A PyTorch sketch of the first configuration follows the table.)
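
The Dataset Splits row above quotes only a protocol (balanced per-label groups, a 90/10 random split, five-run averages), not concrete partitions. A minimal sketch of that protocol, using hypothetical placeholder data shaped like MPEG-7 (the paper's actual data handling lives in the linked repository), could look like this:

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Hypothetical placeholder data: 70 classes x 20 samples mirrors the
# MPEG-7 class structure, but the feature vectors here are random.
rng = np.random.default_rng(0)
X = rng.standard_normal((70 * 20, 64))
y = np.repeat(np.arange(70), 20)

# 90/10 random split, stratified so every class keeps a balanced share,
# repeated five times because the reported results are five-run averages.
splitter = StratifiedShuffleSplit(n_splits=5, test_size=0.1, random_state=0)
for run, (train_idx, test_idx) in enumerate(splitter.split(X, y)):
    # ... train on X[train_idx], evaluate on X[test_idx], record accuracy ...
    print(f"run {run}: {len(train_idx)} train / {len(test_idx)} test")
```

Whether the authors stratified the random split or only balanced group sizes beforehand is not stated, so `StratifiedShuffleSplit` is an assumption.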
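
The Experiment Setup row maps directly onto standard PyTorch components. The sketch below reproduces the 2D-shape configuration (cross-entropy loss, SGD, 400 epochs, batch size 128, initial learning rate 0.1 halved every 25th epoch); the model and data are hypothetical stand-ins, not the paper's network:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: the paper's network feeds persistence diagrams
# through its proposed input layer; a linear classifier on random features
# keeps this sketch self-contained and runnable.
X = torch.randn(2000, 64)
y = torch.randint(0, 20, (2000,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=128, shuffle=True)

model = nn.Linear(64, 20)
criterion = nn.CrossEntropyLoss()                        # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # initial LR 0.1
scheduler = StepLR(optimizer, step_size=25, gamma=0.5)   # halve every 25th epoch

for epoch in range(400):                                 # 400 epochs
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The reddit experiments differ only in epoch count (500) and initial learning rate (0.1 for reddit_5k, 0.4 for reddit_12k).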