Geometric Autoencoders - What You See is What You Decode
Authors: Philipp Nazari, Sebastian Damrich, Fred A. Hamprecht
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Sections 5 (Experiments), 5.1 (Experimental Setup), and 5.2 (Evaluation) |
| Researcher Affiliation | Academia | ¹HCI/IWR at University of Heidelberg, 69120 Heidelberg, Germany; ²University of Tübingen, 72074 Tübingen, Germany. |
| Pseudocode | Yes | Algorithm 1: Calculating the Generalized Jacobian Determinant (a hedged sketch follows the table) |
| Open Source Code | Yes | We provide the code as an open-source package for PyTorch. It can be found at https://github.com/hci-unihd/GeometricAutoencoder. |
| Open Datasets | Yes | Datasets Besides the classical image datasets MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017), we use the three single-cell datasets Zilionis (Zilionis et al., 2019), CElegans (Packer et al., 2019) and PBMC (Zheng et al., 2017). |
| Dataset Splits | No | The paper does not explicitly mention a validation dataset split or a methodology for using one during training. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch', 'Geomstats package', and 'functorch library' but does not specify their version numbers. |
| Experiment Setup | Yes | All of the autoencoders except for the UMAP autoencoder are optimized using ADAM (Kingma & Ba, 2015), and trained using a batch size of 125, learning rate 10⁻³ and a weight decay of 10⁻⁵. ... The vanilla, topological and geometric autoencoders are trained for 100 epochs. For the proposed geometric autoencoder, we found α = 0.1 to be a good weight for the geometric loss term. (A hedged training-loop sketch follows the table.) |
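The paper's Algorithm 1 computes the generalized Jacobian determinant of the decoder, i.e. √det(JᵀJ) for the decoder Jacobian J at a latent point, which underlies the geometric regularizer. Below is a minimal PyTorch sketch of that quantity; the per-sample loop, the `decoder` callable, and the use of the batch variance of log determinants as the loss are assumptions based on the paper's description, not the authors' exact implementation (which uses the functorch library and can batch this computation).

```python
import torch

def log_generalized_jacdet(decoder, z):
    # Jacobian of the decoder at a single latent point z (shape [d]),
    # returned with shape [D, d] for an output of dimension D.
    # create_graph=True keeps the result differentiable for training.
    J = torch.autograd.functional.jacobian(decoder, z, create_graph=True)
    JtJ = J.T @ J  # pullback metric, shape [d, d]
    # log sqrt(det(J^T J)) = 0.5 * log det(J^T J); slogdet is numerically safer.
    _, logabsdet = torch.linalg.slogdet(JtJ)
    return 0.5 * logabsdet

def geometric_loss(decoder, z_batch):
    # Assumption: the regularizer penalizes the spread of log determinants
    # across the batch, pushing the decoder toward uniform local scaling.
    logdets = torch.stack([log_generalized_jacdet(decoder, z) for z in z_batch])
    return logdets.var()
```

The per-sample loop costs one Jacobian evaluation per batch element; the functorch library mentioned by the authors can vectorize this, but the loop keeps the sketch short and dependency-free.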
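The quoted setup maps directly onto a standard PyTorch training loop. The sketch below wires the stated hyperparameters (ADAM, batch size 125, learning rate 10⁻³, weight decay 10⁻⁵, 100 epochs, α = 0.1) together with the `geometric_loss` sketched above; the toy encoder/decoder, the random data, and the MSE reconstruction term are placeholders, not details taken from the paper.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Tiny stand-in model and random data so the sketch runs end to end;
# the real architectures and datasets are described in the paper.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 2))
decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 784))
params = list(encoder.parameters()) + list(decoder.parameters())

optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-5)
data = torch.rand(1000, 784)  # placeholder for e.g. flattened MNIST
loader = torch.utils.data.DataLoader(data, batch_size=125, shuffle=True)
alpha = 0.1  # weight of the geometric loss term, per the paper

for epoch in range(100):
    for x in loader:
        z = encoder(x)
        x_hat = decoder(z)
        # Reconstruction term (MSE is an assumption) plus geometric regularizer.
        loss = F.mse_loss(x_hat, x) + alpha * geometric_loss(decoder, z)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```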