Topological Autoencoders
Authors: Michael Moor, Max Horn, Bastian Rieck, Karsten Borgwardt
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our approach is theoretically well-founded and that it exhibits favourable latent representations on a synthetic manifold as well as on real-world image data sets, while preserving low reconstruction errors. (Section 5: Experiments) |
| Researcher Affiliation | Academia | 1Department of Biosystems Science and Engineering, ETH Zurich, 4058 Basel, Switzerland 2SIB Swiss Institute of Bioinformatics, Switzerland. Correspondence to: Karsten Borgwardt <karsten.borgwardt@bsse.ethz.ch>. |
| Pseudocode | No | The paper describes its methods but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We make our code publicly available.4 https://github.com/BorgwardtLab/topological-autoencoders |
| Open Datasets | Yes | We generate a SPHERES data set that consists of ten high-dimensional 100-spheres living in a 101-dimensional space... We also use three image data sets (MNIST, FASHION-MNIST, and CIFAR-10) *(an illustrative sampling sketch follows the table)* |
| Dataset Splits | Yes | We split each data set into training and testing (using the predefined split if available; 90% versus 10% otherwise). Additionally, we remove 15% of the training split as a validation data set for tuning the hyperparameters. *(sketched in code after the table)* |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions software like ADAM for optimization, but does not specify version numbers for any key software components or libraries. |
| Experiment Setup | Yes | We split each data set into training and testing (using the predefined split if available; 90% versus 10% otherwise). Additionally, we remove 15% of the training split as a validation data set for tuning the hyperparameters. We normalised our topological loss term by the batch size m in order to disentangle λ from it. All autoencoders employ batch-norm and are optimized using ADAM (Kingma & Ba, 2014). Please refer to Section A.6 for more details on architectures and hyperparameters. |
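
The SPHERES description quoted in the Open Datasets row refers to ten 100-spheres, i.e. sphere surfaces embedded in a 101-dimensional ambient space. As an illustration only, the sketch below samples points uniformly from such spheres by normalising Gaussian vectors; the function name, point counts, and centre placement are assumptions, not the authors' exact construction (which is available in their repository).

```python
# Illustrative sketch: uniform sampling from high-dimensional spheres, in the
# spirit of the SPHERES data set (ten 100-spheres embedded in R^101).
# Hypothetical stand-in, not the authors' code.
import numpy as np

def sample_sphere(n_points, dim=101, radius=1.0, center=None, rng=None):
    """Sample points uniformly from a (dim-1)-sphere embedded in R^dim."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=(n_points, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # project onto the unit sphere
    x *= radius
    if center is not None:
        x += center  # shift the sphere to its centre
    return x

rng = np.random.default_rng(0)
# Ten 100-spheres in a 101-dimensional space, each at an assumed random centre.
spheres = [sample_sphere(500, dim=101, center=rng.normal(size=101, scale=10.0), rng=rng)
           for _ in range(10)]
data = np.concatenate(spheres)  # shape: (5000, 101)
```

The split protocol quoted in the Dataset Splits and Experiment Setup rows can likewise be sketched. The helper below is hypothetical (not taken from the authors' repository) and assumes scikit-learn: it uses a data set's predefined train/test split when one exists, falls back to a 90%/10% random split otherwise, and then holds out 15% of the training portion as a validation set.

```python
# Illustrative sketch of the reported split protocol; helper name and the
# `predefined` argument are assumptions for the example.
from sklearn.model_selection import train_test_split

def make_splits(X, y, predefined=None, seed=42):
    """Return (train, val, test) splits following the paper's description."""
    if predefined is not None:
        # Use the data set's own train/test split if one exists (e.g. MNIST).
        (X_train, y_train), (X_test, y_test) = predefined
    else:
        # Otherwise fall back to a 90% / 10% random split.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.10, random_state=seed)
    # Hold out 15% of the training split as a validation set for tuning.
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=0.15, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```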