Preventing Model Collapse in Gaussian Process Latent Variable Models

Authors: Ying Li, Zhidi Lin, Feng Yin, Michael Minyi Zhang

ICML 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Supporting response: "The proposed GPLVM, named advised RFLVM, is evaluated across diverse datasets and consistently outperforms various salient competing models, including state-of-the-art variational autoencoders (VAEs) and other GPLVM variants, in terms of informative latent representations and missing data imputation." ... "6. Experiments: We showcase the impact of the projection variance and kernel flexibility on model collapse in Sections 6.1 and 6.2. In Sections 6.3 and 6.4, we further corroborate the superior performance of advised RFLVM in latent representation learning on various real-world datasets."
Researcher Affiliation: Academia. Supporting response: "1 Department of Statistics & Actuarial Science, The University of Hong Kong, Hong Kong, China; 2 School of Science & Engineering, The Chinese University of Hong Kong, Shenzhen, China."
Pseudocode: Yes. Supporting response: "Algorithm 1: advised RFLVM: Auto-Differentiable Variational Inference for SM-Embedded RFLVMs"
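The "SM-Embedded RFLVM" of Algorithm 1 builds on random-feature kernel approximations. As a rough, generic illustration only (using a plain RBF kernel rather than the paper's spectral mixture kernel, and a hypothetical helper name), a random Fourier feature map with L features looks like:

```python
import numpy as np

def rff_features(X, L=50, lengthscale=1.0, seed=0):
    """Map X to L random Fourier features whose inner products approximate
    an RBF kernel (generic RFF sketch, not the paper's SM-kernel version)."""
    rng = np.random.default_rng(seed)
    D = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(D, L))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=L)             # random phases
    return np.sqrt(2.0 / L) * np.cos(X @ W + b)

# Sanity check: with many features, Phi @ Phi.T approaches the exact RBF kernel.
X = np.random.default_rng(1).normal(size=(5, 2))
Phi = rff_features(X, L=2000)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(K_approx - K_exact).max())
```

The paper's Table 4 sets L = 50 for the actual model; the larger L here just makes the kernel approximation visibly accurate.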
Open Source Code: Yes. Supporting response: "More experimental details can be found in App. E, and the code is publicly available at https://github.com/zhidilin/advisedGPLVM."
Open Datasets: Yes. Supporting response: "We evaluate the advised RFLVM on the MNIST dataset (LeCun, 1998). ... CIFAR-10: To create a final dataset of size 2000, we subsampled 400 images from each class within [airplane, automobile, bird, cat, deer]."
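The CIFAR-10 subsampling quoted above (400 images from each of 5 classes, giving 2000 in total) can be sketched as follows; `subsample_per_class` is a hypothetical helper and the toy labels stand in for real CIFAR-10 labels:

```python
import numpy as np

def subsample_per_class(labels, classes, per_class=400, seed=0):
    """Return indices picking `per_class` examples from each class (sketch)."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in classes:
        pool = np.flatnonzero(labels == c)
        idx.append(rng.choice(pool, size=per_class, replace=False))
    return np.concatenate(idx)

# Toy labels standing in for CIFAR-10 class ids 0..4 (1000 examples each).
labels = np.repeat(np.arange(5), 1000)
sel = subsample_per_class(labels, classes=range(5), per_class=400)
print(len(sel))  # 5 classes x 400 = 2000
```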
Dataset Splits: Yes. Supporting response: "Table 1: Classification accuracy evaluated by fitting a KNN classifier (k = 1) with five-fold cross-validation."
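The evaluation protocol quoted above (1-NN classification accuracy on the learned latents, five-fold cross-validated) might be reproduced along these lines with scikit-learn; the 2-D Gaussian clusters here are synthetic stand-ins for the model's latent codes:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic 2-D "latent codes": three well-separated clusters, 100 points each.
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in range(3)])
y = np.repeat(np.arange(3), 100)

# Table 1's metric: k = 1 nearest neighbour, five-fold cross-validation.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), Z, y, cv=5)
print(scores.mean())
```

Higher mean accuracy indicates that nearby latent points share a label, i.e. the latent space is informative about class structure.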
Hardware Specification: No. No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments are provided.
Software Dependencies: No. The paper mentions "PyTorch (Paszke et al., 2019)", the "GPy library", "GPyTorch", and the "sklearn.decomposition module within the scikit-learn library (Buitinck et al., 2013)", but does not provide version numbers for these software components.
Experiment Setup: Yes. Supporting response (Table 4, default hyperparameter settings):

  # mixture densities in SM kernel (m): 2
  Dim. of random features (L): 50
  Dim. of latent space (Q): 2
  Optimizer: Adam (Kingma & Ba, 2014)
  Learning rate: 0.005
  Beta: (0.9, 0.99)
  # iterations: 10000
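For reference, an Adam update with Table 4's settings (learning rate 0.005, betas (0.9, 0.99), 10000 iterations) can be written out directly; the quadratic objective below is a placeholder, not the paper's ELBO:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.005, betas=(0.9, 0.99), eps=1e-8):
    """One Adam update using the Table 4 settings (lr 0.005, betas (0.9, 0.99))."""
    b1, b2 = betas
    m = b1 * m + (1 - b1) * grad            # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 (gradient 2x) as a stand-in objective.
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 10001):                   # Table 4 uses 10000 iterations
    x, m, v = adam_step(x, 2 * x, m, v, t)
print(x)
```

In the paper's setting these updates would instead be applied to the variational parameters of the SM-embedded RFLVM via PyTorch's autodiff, but the optimizer arithmetic is the same.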