Learning Hyperbolic Representations of Topological Features

Authors: Panagiotis Kyriakis, Iordanis Fostiropoulos, Paul Bogdan

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present experimental results on graph and image classification tasks and show that the performance of our method is on par with or exceeds the performance of other state of the art methods.
Researcher Affiliation | Academia | Panagiotis Kyriakis, University of Southern California, Los Angeles, USA (pkyriaki@usc.edu); Iordanis Fostiropoulos, University of Southern California, Los Angeles, USA (fostirop@usc.edu); Paul Bogdan, University of Southern California, Los Angeles, USA (pbogdan@usc.edu)
Pseudocode | No | The paper describes the method using mathematical equations and figures, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | Yes | The code to reproduce our experiments is publicly available at https://github.com/pkyriakis/permanifold/.
Open Datasets | Yes | We present experiments on diverse datasets focusing on persistence diagrams extracted from graphs and grey-scale images. The REDDIT-BINARY dataset contains 1000 samples... The IMDB-BINARY contains 1000 ego-networks... Finally, the IMDB-MULTI contains 1500 ego-networks... We utilize two standardized datasets: the MNIST, which contains images of handwritten digits, and the Fashion-MNIST, which contains shape images of different types of garment...
Dataset Splits | Yes | We train our model using 10-fold cross-validation with an 80/20 split. Each dataset contains a total of 70K (60K train, 10K validation, 10 folds) grey-scale images of size 28 × 28. (A sketch of such a split appears below the table.)
Hardware Specification | No | The paper states 'run all experiments on the Google Cloud AI Platform', but does not provide specific hardware details such as GPU/CPU models or memory.
Software Dependencies | Yes | We implemented all algorithms in TensorFlow 2.2 using the TDA-Toolkit and the Scikit-TDA for extracting persistence diagrams and run all experiments on the Google Cloud AI Platform. (A persistence-diagram extraction sketch appears below the table.)
Experiment Setup | Yes | To train the neural network, we use the Adam optimizer (β1 = 0.9, β2 = 0.999) with an initial learning rate of 0.001 and batch size equal to 64. We use a random uniform initializer in the interval [-0.05, 0.05] for all learnable variables. For all graph datasets we trained the network for 100 epochs and halved the learning rate every 25 epochs. For the MNIST and Fashion-MNIST datasets we used 10 and 20 epochs, respectively, and no learning rate scheduler. We tune the dropout rate manually by starting from really low values and monitoring the validation set accuracy. In general, we noticed that the network did not tend to overfit; therefore, we kept the rate at low values (0-0.2) for all experiments. (A training-setup sketch appears below the table.)
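
The Dataset Splits row above quotes 10-fold cross-validation with an 80/20 split. Below is a minimal sketch of such a splitting loop using scikit-learn's ShuffleSplit; the choice of splitting utility, the array shapes, and the random seed are assumptions made for illustration, not details taken from the paper.

    import numpy as np
    from sklearn.model_selection import ShuffleSplit

    # Hypothetical stand-in features/labels for one dataset.
    X = np.random.rand(1000, 32)
    y = np.random.randint(0, 2, size=1000)

    # Ten repetitions, each holding out 20% of the samples, matching the
    # quoted "10-fold cross-validation with an 80/20 split".
    splitter = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)

    for fold, (train_idx, val_idx) in enumerate(splitter.split(X)):
        X_train, y_train = X[train_idx], y[train_idx]
        X_val, y_val = X[val_idx], y[val_idx]
        # ... train on (X_train, y_train) and validate on (X_val, y_val) ...
        print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")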
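
The Software Dependencies row mentions Scikit-TDA for extracting persistence diagrams. The following is a generic sketch using Ripser, which ships with the Scikit-TDA ecosystem, on a random point cloud; the paper builds its diagrams from graphs and grey-scale images, so the filtration shown here is only an assumption used to illustrate the toolkit.

    import numpy as np
    from ripser import ripser  # Ripser is distributed as part of Scikit-TDA

    # Hypothetical point cloud; the paper instead derives diagrams from
    # graphs and grey-scale images, which use different filtrations.
    points = np.random.rand(100, 2)

    # Compute persistence diagrams up to homology dimension 1 (H0 and H1).
    result = ripser(points, maxdim=1)
    diagrams = result["dgms"]  # one (birth, death) array per homology dimension

    for dim, dgm in enumerate(diagrams):
        print(f"H{dim}: {len(dgm)} persistence pairs")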
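
The Experiment Setup row fixes the optimizer, initializer, and learning-rate schedule. A minimal TensorFlow 2 sketch of that configuration follows; the two-layer model and the random training data are hypothetical stand-ins, not the paper's permanifold architecture.

    import tensorflow as tf

    # Hyperparameters quoted in the Experiment Setup row.
    LEARNING_RATE = 1e-3
    BATCH_SIZE = 64
    EPOCHS = 100  # graph datasets; MNIST and Fashion-MNIST use 10 and 20 epochs

    # Random uniform initializer in [-0.05, 0.05] for all learnable variables.
    init = tf.keras.initializers.RandomUniform(minval=-0.05, maxval=0.05)

    # Hypothetical stand-in model; the paper's network instead operates on
    # persistence diagrams embedded in a hyperbolic manifold.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", kernel_initializer=init),
        tf.keras.layers.Dropout(0.1),  # low dropout rate (0-0.2), as reported
        tf.keras.layers.Dense(2, kernel_initializer=init),
    ])

    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=LEARNING_RATE, beta_1=0.9, beta_2=0.999),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

    # Halve the learning rate every 25 epochs (graph datasets only).
    def halve_every_25(epoch, lr):
        return lr * 0.5 if epoch > 0 and epoch % 25 == 0 else lr

    # Hypothetical training data standing in for the vectorized diagrams.
    x_train = tf.random.normal((256, 32))
    y_train = tf.random.uniform((256,), maxval=2, dtype=tf.int32)

    model.fit(
        x_train, y_train,
        batch_size=BATCH_SIZE,
        epochs=EPOCHS,
        callbacks=[tf.keras.callbacks.LearningRateScheduler(halve_every_25)],
    )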