The Numerical Stability of Hyperbolic Representation Learning

Authors: Gal Mishne, Zhengchao Wan, Yusu Wang, Sheng Yang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5. Experiments
Researcher Affiliation | Academia | 1 Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, California, USA; 2 Harvard John A. Paulson School of Engineering and Applied Science, Harvard University, Cambridge, Massachusetts, USA.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks; it provides mathematical derivations and descriptions of methods in text and equations.
Open Source Code | Yes | Code for reproducing our experiments is available at https://github.com/yangshengaa/stable-hyperbolic.
Open Datasets | Yes | We tested the performances on three datasets: CIFAR-10 (Krizhevsky et al., 2009), fashion-MNIST (Xiao et al., 2017), Paul Myeloid Progenitors developmental dataset (Paul et al., 2015). ... Olsson Single-Cell RNA sequencing dataset (Olsson et al., 2016), Krumsiek Simulated Myeloid Progenitors (Krumsiek et al., 2011), and Moignard blood cell developmental trace from single-cell gene expression (Moignard et al., 2015).
Dataset Splits | No | For each dataset, we fix a train-test split and run 5 times. ... For all other datasets, we utilize a 75%/25% train-test split stratified based on the class assignments. The paper does not explicitly mention a separate validation split or explain how one would be constructed. (A stratified-split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using 'scikit-learn (Pedregosa et al., 2011)' and 'PyTorch (Paszke et al., 2017)' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | We use Riemannian SGD (Becigneul & Ganea, 2018) for hyperbolic models and SGD for the Euclidean model, fixing a learning rate of 1, and train for 30000 epochs. ... The best performances of the Euclidean and Poincaré SVM are both obtained with C = 5, a learning rate of 0.001, and 3000 epochs. ... The best performance of LSVM and LSVMPP is in general obtained with C = 0.5, a learning rate around 10^-10 (depending on the initial scale of the dataset), and 500 epochs. (An optimizer-setup sketch follows the table.)
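
The 75%/25% class-stratified split quoted in the Dataset Splits row is a standard scikit-learn operation (the paper cites scikit-learn). Below is a minimal sketch, assuming synthetic placeholder data and an arbitrary fixed seed; the authors' actual data loading and seeding are not described in the paper.

```python
# Minimal sketch of a 75%/25% train-test split stratified on class labels.
# The synthetic arrays X, y and random_state=0 are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 8)       # placeholder feature matrix
y = np.repeat([0, 1, 2, 3], 25)  # placeholder class labels, 4 balanced classes

# stratify=y preserves the class proportions in both the train and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
```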
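
The Experiment Setup row pairs Riemannian SGD for hyperbolic parameters with plain SGD for the Euclidean baseline. Below is a minimal sketch of how such a setup is commonly wired up, assuming the geoopt library (the paper cites Becigneul & Ganea for Riemannian SGD but does not name an implementation); the Lorentz point, target, and toy objective are placeholders, not the authors' models.

```python
# Sketch: Riemannian SGD on the Lorentz (hyperboloid) manifold via geoopt,
# alongside plain SGD for a Euclidean model, at the quoted fixed lr of 1.
import torch
import geoopt

manifold = geoopt.Lorentz()

# One trainable point on the hyperboloid, initialized at the origin.
hyp_param = geoopt.ManifoldParameter(manifold.origin(3), manifold=manifold)

# A fixed target point, built by projecting an ambient vector to the tangent
# space at the origin and exponentiating (placeholder values).
origin = manifold.origin(3)
tangent = manifold.proju(origin, torch.tensor([0.0, 0.5, -0.3]))
target = manifold.expmap(origin, tangent)

# Euclidean baseline uses plain SGD at the same fixed learning rate.
euc_model = torch.nn.Linear(2, 2)
eucl_opt = torch.optim.SGD(euc_model.parameters(), lr=1.0)

# Riemannian SGD updates hyp_param with a retraction that keeps it on the
# hyperboloid; 30000 epochs matches the quoted setup.
riem_opt = geoopt.optim.RiemannianSGD([hyp_param], lr=1.0)
for epoch in range(30000):
    riem_opt.zero_grad()
    loss = manifold.dist(hyp_param, target) ** 2  # toy geodesic objective
    loss.backward()
    riem_opt.step()
```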