Few-Shot Partial-Label Learning

Authors: Yunfeng Zhao, Guoxian Yu, Lei Liu, Zhongmin Yan, Lizhen Cui, Carlotta Domeniconi

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on widely-used few-shot datasets demonstrate that our FsPLL can achieve a superior performance than the state-of-the-art methods, and it needs fewer samples for quickly adapting to new tasks." "Extensive experiments on benchmark few-shot datasets show that our FsPLL outperforms the state-of-the-art PLL approaches [Zhang et al., 2016; Wu and Zhang, 2018; Feng and An, 2019; Wang et al., 2019] and baseline FSL methods [Snell et al., 2017; Finn et al., 2017]."
Researcher Affiliation | Academia | Yunfeng Zhao (1,2), Guoxian Yu (1,2), Lei Liu (1,2), Zhongmin Yan (1,2), Lizhen Cui (1,2), Carlotta Domeniconi (3). (1) School of Software Engineering, Shandong University, Jinan, Shandong, China; (2) Joint SDU-NTU Centre for Artificial Intelligence Research, Shandong University, Jinan, China; (3) Department of Computer Science, George Mason University, VA, USA
Pseudocode | No | The paper does not contain a structured pseudocode or algorithm block.
Open Source Code | No | The paper provides no statement or link to open-source code for the described methodology.
Open Datasets | Yes | "We conduct experiments on two benchmark FSL datasets (Omniglot [Lake et al., 2011] and miniImageNet [Vinyals et al., 2016])."
Dataset Splits | Yes | "Each D^t_train consisted of N1 = 30 classes randomly sampled without replacement from the 4800/80 training classes of Omniglot/miniImageNet. As to the meta-testing set, we randomly selected another N2 classes from the 1692/20 test classes without replacement. For each selected class, K1 = 5 (K2) samples were randomly chosen without replacement from the 20/600 samples as the meta-training (meta-testing) support samples, and the remaining/15 samples per class were randomly chosen as the query samples." They only use the samples in the meta-testing set for training and validation.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU or GPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify version numbers for any software dependencies or libraries.
Experiment Setup | Yes | "As to our FsPLL, the trade-off parameter λ is fixed as 0.5 (0 for FsPLLnM), the number of nearest neighbors k = K2 − 1, and the number of iterations for prototype rectification in each epoch is fixed to 10. In addition, we use the Adam [Kingma and Ba, 2015] optimizer; the learning rate is fixed as 0.001 and cut in half per 20 epochs."
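The dataset-split row above describes a standard N-way K-shot episodic protocol: sample classes without replacement, then sample support and query examples without replacement within each class. A minimal sketch of that sampling (not the authors' code; the function name and the toy Omniglot-like pool are hypothetical, with counts taken from the quoted split):

```python
import random

def sample_episode(class_pool, samples_per_class, n_way, k_shot, n_query):
    """Draw one N-way K-shot episode: classes without replacement, then
    disjoint support/query samples without replacement within each class."""
    classes = random.sample(class_pool, n_way)
    episode = {}
    for c in classes:
        items = random.sample(samples_per_class[c], k_shot + n_query)
        episode[c] = {"support": items[:k_shot], "query": items[k_shot:]}
    return episode

# Toy pool mimicking the Omniglot setting: many classes, 20 samples each.
pool = [f"class_{i}" for i in range(100)]
data = {c: [f"{c}/img_{j}" for j in range(20)] for c in pool}

# Meta-training episode as quoted: N1 = 30 classes, K1 = 5 support,
# remaining 15 samples per class as queries.
ep = sample_episode(pool, data, n_way=30, k_shot=5, n_query=15)
assert len(ep) == 30
assert all(len(v["support"]) == 5 and len(v["query"]) == 15 for v in ep.values())
```

Because `random.sample` draws without replacement, support and query sets are disjoint within a class, matching the quoted protocol.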
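The learning-rate schedule in the experiment-setup row (initial rate 0.001, halved every 20 epochs) is an ordinary step decay. A minimal sketch in closed form, assuming a per-epoch schedule rather than any specific framework's scheduler API:

```python
def step_decay_lr(epoch, base_lr=0.001, drop=0.5, step=20):
    """Learning rate halved every `step` epochs, as reported in the paper."""
    return base_lr * (drop ** (epoch // step))

# Rate stays at 0.001 through epoch 19, then halves at each 20-epoch boundary.
assert step_decay_lr(0) == 0.001
assert step_decay_lr(19) == 0.001
assert step_decay_lr(20) == 0.0005
assert step_decay_lr(45) == 0.00025
```

The same behavior is what step-decay schedulers in common frameworks implement with a step size of 20 and a decay factor of 0.5.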