Hyperbolic Feature Augmentation via Distribution Estimation and Infinite Sampling on Manifolds

Authors: Zhi Gao, Yuwei Wu, Yunde Jia, Mehrtash Harandi

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments on few-shot learning and continual learning tasks show that our method significantly improves the performance of hyperbolic algorithms in scarce data regimes. |
| Researcher Affiliation | Academia | (1) Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, China; (2) Guangdong Lab of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China; (3) Department of Electrical and Computer Systems Eng., Monash University, and Data61, Australia |
| Pseudocode | Yes | Algorithm 1: Sampling process in HFA. Algorithm 2: Training process of HFA. (See the sampling and training sketches below.) |
| Open Source Code | Yes | The code of HFA is available at https://github.com/ZhiGaomcislab/Hyperbolic_Feature_Augmentation. |
| Open Datasets | Yes | We conducted experiments on four few-shot learning datasets: miniImageNet [48], tieredImageNet [49], CUB [7], and CIFAR-FS [50]. We use the CIFAR-100 dataset [50] with 100 classes. |
| Dataset Splits | Yes | In the training stage, we randomly select a few samples from D as the training data Dt, and the rest is used as the validation data Dv. (See the split sketch below.) |
| Hardware Specification | No | The paper defers hardware details to the supplementary material and gives no specific GPU/CPU models or memory in the main text. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers in the main text. |
| Experiment Setup | Yes | We use ResNet12 and ResNet18 as the backbone networks, and perform augmentation for the support data. The gradient flow networks F1, F2, and F3 are trained within a meta-learning framework via bi-level optimization. (See the training sketch below.) |