Hierarchically Structured Meta-learning

Authors: Huaxiu Yao, Ying Wei, Junzhou Huang, Zhenhui Li

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "The experimental results show that our approach can achieve state-of-the-art performance in both toy-regression and few-shot image classification problems." |
| Researcher Affiliation | Collaboration | ¹College of Information Science and Technology, Pennsylvania State University, PA, USA; ²Tencent AI Lab, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1: Meta-training of HSML (a hedged sketch of such a loop follows the table) |
| Open Source Code | No | The paper makes no explicit statement about open-sourcing its code and gives no link to a code repository. |
| Open Datasets | Yes | Caltech-UCSD Birds-200-2011 (Bird) (Wah et al., 2011), Describable Textures Dataset (Texture) (Cimpoi et al., 2014), Fine-Grained Visual Classification of Aircraft (Aircraft) (Maji et al., 2013), and FGVCx-Fungi (Fungi) (Fun, 2018) |
| Dataset Splits | Yes | "Similar to the preprocessing of MiniImagenet (Vinyals et al., 2016), we divide each dataset into meta-training, meta-validation and meta-testing classes." (a sketch of a class-level split also follows the table) |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models or cloud configurations used for the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers. |
| Experiment Setup | No | The paper states "We specify the hyperparameters for meta-training in supplementary material C." but does not include these details in the main text. |
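
The Pseudocode row above refers to Algorithm 1, the meta-training loop of HSML. As a rough illustration of the kind of procedure that row describes, below is a minimal sketch of HSML-style meta-training on the paper's toy sine-regression setting: a MAML-style inner loop, a mean-pooled task encoder, soft assignment to K learned cluster centroids, and a sigmoid parameter gate on the globally shared initialization. All module names, layer sizes, and the single-step inner loop are illustrative assumptions, not the paper's exact Algorithm 1.

```python
# Hedged sketch of HSML-style meta-training (toy sine regression).
# Assumed components: task encoder, K cluster centroids, parameter gate.
import torch
import torch.nn.functional as F

def sample_sine_task(n_support=10, n_query=10):
    # Toy regression task: y = a * sin(x + b) with random amplitude/phase.
    a = torch.rand(1) * 4.9 + 0.1
    b = torch.rand(1) * 3.1416
    x = torch.rand(n_support + n_query, 1) * 10.0 - 5.0
    y = a * torch.sin(x + b)
    return x[:n_support], y[:n_support], x[n_support:], y[n_support:]

def forward(params, x):
    # Two-layer MLP base learner, written functionally so that adapted
    # ("fast") weights can be substituted during the inner loop.
    h = F.relu(F.linear(x, params[0], params[1]))
    return F.linear(h, params[2], params[3])

K, inner_lr = 4, 0.01
theta = [t.requires_grad_() for t in (torch.randn(40, 1) * 0.1, torch.zeros(40),
                                      torch.randn(1, 40) * 0.1, torch.zeros(1))]
centers = torch.randn(K, 8, requires_grad=True)               # cluster centroids
encoder = torch.nn.Linear(2, 8)                               # task-embedding net
gate_net = torch.nn.Linear(8, sum(t.numel() for t in theta))  # parameter gate
meta_opt = torch.optim.Adam(
    [*theta, centers, *encoder.parameters(), *gate_net.parameters()], lr=1e-3)

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                                        # tasks per meta-batch
        xs, ys, xq, yq = sample_sine_task()
        # Task embedding: mean-pool encoded (x, y) support pairs.
        emb = encoder(torch.cat([xs, ys], dim=1)).mean(0)
        # Soft cluster assignment via softmax over negative squared distances.
        assign = F.softmax(-((centers - emb) ** 2).sum(dim=1), dim=0)
        cluster_emb = assign @ centers
        # Gate the globally shared initialization per cluster.
        gate = torch.sigmoid(gate_net(cluster_emb))
        params, i = [], 0
        for t in theta:
            params.append(t * gate[i:i + t.numel()].view_as(t))
            i += t.numel()
        # Inner loop: one gradient step on the support set.
        support_loss = F.mse_loss(forward(params, xs), ys)
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: evaluate the adapted weights on the query set.
        meta_loss = meta_loss + F.mse_loss(forward(fast, xq), yq)
    meta_loss.backward()
    meta_opt.step()
```

The `create_graph=True` flag keeps the inner-loop gradient differentiable so the outer update can backpropagate through the adaptation step, which is the defining trait of MAML-style meta-training that HSML builds on.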
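The Dataset Splits row quotes a MiniImagenet-style preprocessing step in which classes, not individual images, are partitioned into disjoint meta-training, meta-validation, and meta-testing pools. Below is a minimal sketch of such a class-level split; the 64/16/20 ratios and placeholder class names are illustrative assumptions, not the paper's numbers.

```python
# Hedged sketch of a MiniImagenet-style class-level split.
import random

def split_classes(classes, seed=0, ratios=(0.64, 0.16, 0.20)):
    rng = random.Random(seed)           # fixed seed for a reproducible split
    shuffled = list(classes)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * ratios[0])
    n_val = int(len(shuffled) * ratios[1])
    return (shuffled[:n_train],                     # meta-training classes
            shuffled[n_train:n_train + n_val],      # meta-validation classes
            shuffled[n_train + n_val:])             # meta-testing classes

meta_train, meta_val, meta_test = split_classes(
    [f"class_{i:03d}" for i in range(100)])
```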