Connectivity-Optimized Representation Learning via Persistent Homology

Authors: Christoph Hofer, Roland Kwitt, Marc Niethammer, Mandar Dixit

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | 4. One-class learning experiments (§5) on large-scale vision data, showing that kernel-density based one-class models can be built on top of representations learned by a single autoencoder. These representations are transferable across datasets and, in a low sample size regime, our one-class models outperform recent state-of-the-art methods by a large margin. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Salzburg, Austria; (2) Microsoft; (3) UNC Chapel Hill. |
| Pseudocode | No | The paper includes mathematical definitions and theorems but no structured pseudocode or algorithm blocks with the typical formatting or explicit labels such as 'Algorithm' or 'Pseudocode'. |
| Open Source Code | Yes | https://github.com/c-hofer/COREL_icml2019 |
| Open Datasets | Yes | CIFAR-10 (Krizhevsky & Hinton, 2009)... CIFAR-100... Tiny-ImageNet... ImageNet. For large-scale testing, we use the ILSVRC 2012 dataset (Deng et al., 2009). |
| Dataset Splits | Yes | CIFAR-10 (Krizhevsky & Hinton, 2009) contains 60,000 natural images of size 32×32 in 10 classes. 5,000 images/class are available for training, 1,000/class for validation. |
| Hardware Specification | Yes | On one GPU (Nvidia GTX 1080 Ti) this requires 75 hrs. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2017)' but does not provide a specific version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | The MLP is trained for 60 epochs with batch size 50 and η = 2. For optimization, we use Adam (Kingma & Ba, 2014) with a fixed learning rate of 0.001, (β1, β2) = (0.9, 0.999) and a batch size of 100. The model is trained for 50 epochs. We fix η = 2 throughout our experiments. |
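The experiment-setup row quotes the optimizer configuration used for training (Adam, learning rate 0.001, (β1, β2) = (0.9, 0.999), batch size 100, 50 epochs). Below is a minimal PyTorch sketch of a training loop with exactly those settings; the autoencoder architecture, the random stand-in data, and the plain reconstruction loss are placeholders and not taken from the paper, whose connectivity (persistent-homology) term with η = 2 is only indicated by a comment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Minimal stand-in autoencoder; the paper's actual architecture and its
# connectivity (persistent-homology) loss term live in the linked repository.
class TinyAutoencoder(nn.Module):
    def __init__(self, dim_in=3 * 32 * 32, dim_z=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_z))
        self.dec = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Random tensors standing in for flattened 32x32 color images (e.g. CIFAR-10).
data = torch.randn(1000, 3 * 32 * 32)
loader = DataLoader(TensorDataset(data), batch_size=100, shuffle=True)

model = TinyAutoencoder()
# Settings quoted in the table: Adam, lr = 0.001, (beta1, beta2) = (0.9, 0.999),
# batch size 100, 50 training epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))

for epoch in range(50):
    for (x,) in loader:
        optimizer.zero_grad()
        x_hat, z = model(x)
        # Plain reconstruction loss; the paper additionally penalizes the
        # connectivity of the latent mini-batch z (with eta = 2).
        loss = F.mse_loss(x_hat, x)
        loss.backward()
        optimizer.step()
```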
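The research-type row quotes the paper's claim that kernel-density based one-class models can be built on top of representations learned by a single autoencoder. The sketch below only illustrates that idea schematically, under assumptions not taken from the paper: the latent codes are random stand-ins rather than real encoder outputs, scikit-learn's KernelDensity is used as the estimator, and the Gaussian kernel, bandwidth 1.0, and 5th-percentile threshold are arbitrary choices.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Stand-in latent codes; in the paper these would be encoder outputs of the
# connectivity-optimized autoencoder for the target class (train) and for a
# mix of in-class and out-of-class images (test).
rng = np.random.default_rng(0)
z_train = rng.normal(size=(500, 32))                       # target-class codes
z_test = np.vstack([rng.normal(size=(50, 32)),             # in-class codes
                    rng.normal(loc=3.0, size=(50, 32))])   # out-of-class codes

# Fit a kernel density estimator on the target-class representations and score
# test points by log-density; kernel and bandwidth here are assumptions.
kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(z_train)
scores = kde.score_samples(z_test)

# Simple one-class decision rule: accept points whose log-density exceeds the
# 5th percentile of the training scores (the threshold is also an assumption).
threshold = np.quantile(kde.score_samples(z_train), 0.05)
is_inlier = scores >= threshold
```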