LaSSL: Label-Guided Self-Training for Semi-supervised Learning

Authors: Zhen Zhao, Luping Zhou, Lei Wang, Yinghuan Shi, Yang Gao

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate LaSSL on several classification benchmarks under partially labeled settings and demonstrate its superiority over state-of-the-art approaches. Through extensive experiments, we show that LaSSL produces pseudo-labels of higher quality and in greater quantity. Experimental results show that LaSSL outperforms SOTA SSL methods on four benchmark classification datasets with different amounts of labeled data: CIFAR-10, CIFAR-100, SVHN, and Mini-ImageNet.
Researcher Affiliation | Academia | 1. School of Electrical and Information Engineering, University of Sydney, Australia; 2. School of Computing and Information Technology, University of Wollongong, Australia; 3. National Key Laboratory for Novel Software Technology, Nanjing University, China
Pseudocode | Yes | Algorithm 1: LaSSL algorithm at each iteration (a rough illustrative sketch of such a self-training step is given after this table).
Open Source Code | Yes | The code is available at https://github.com/zhenzhao/lassl.
Open Datasets | Yes | CIFAR-10 (Krizhevsky and Hinton 2009), CIFAR-100 (Krizhevsky and Hinton 2009), SVHN (Netzer et al. 2011), and Mini-ImageNet (Ravi and Larochelle 2017).
Dataset Splits | No | The paper mentions training and test sets and randomly selecting labeled data from the training set, but it does not specify a separate validation split (e.g., percentages or counts for training/validation/test).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU models, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using a WideResNet-28-2 encoder, an SGD optimizer, and a cosine-decay learning-rate scheduler, but it does not specify any software frameworks (e.g., PyTorch, TensorFlow) or their version numbers.
Experiment Setup | Yes | The default hyper-parameter settings for LaSSL are B = 64, µ = 7, K = 7, α = 0.8, η = 0.2, τ = 0.95, ε = 0.7, T_t = 512, λ_c^0 = 1.0, λ̂_c = 0.1. Training uses an SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4, together with a cosine-decay learning-rate scheduler (see the configuration sketch below).
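To make the Pseudocode row above concrete: the following is a minimal sketch of one confidence-thresholded self-training step in the FixMatch style that LaSSL builds on, using the threshold τ = 0.95 from the default settings. It is not the authors' exact Algorithm 1 (LaSSL additionally guides pseudo-labeling with label information), and `model`, `weak_aug`, and `strong_aug` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of one confidence-thresholded self-training step.
# NOT the authors' exact Algorithm 1; `model`, `weak_aug`, and `strong_aug`
# are placeholder callables assumed to exist.
def self_training_step(model, x_lab, y_lab, x_unlab, weak_aug, strong_aug,
                       tau=0.95, lambda_u=1.0):
    # Supervised loss on the labeled batch.
    sup_loss = F.cross_entropy(model(weak_aug(x_lab)), y_lab)

    # Pseudo-label the weakly augmented unlabeled batch.
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(x_unlab)), dim=-1)
        conf, pseudo_labels = probs.max(dim=-1)
        mask = (conf >= tau).float()  # keep only confident predictions

    # Consistency loss: the strongly augmented view must match the pseudo-label.
    unsup_loss = (F.cross_entropy(model(strong_aug(x_unlab)), pseudo_labels,
                                  reduction="none") * mask).mean()
    return sup_loss + lambda_u * unsup_loss
```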
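The optimizer settings in the Experiment Setup row translate directly into framework calls. The paper does not name a framework, so the sketch below assumes PyTorch; the learning rate and total step count are not given in the excerpt above and are placeholders as well.

```python
import torch
import torch.nn as nn

# Reported settings: SGD with momentum 0.9, weight decay 5e-4, and a
# cosine-decay learning-rate schedule. PyTorch, the learning rate (0.03 is
# a common SSL default), `model`, and `total_steps` are all assumptions.
model = nn.Linear(32 * 32 * 3, 10)  # stand-in for the WideResNet-28-2 encoder
total_steps = 2 ** 20               # placeholder total training iterations

optimizer = torch.optim.SGD(
    model.parameters(), lr=0.03, momentum=0.9, weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps  # decay the lr along a cosine curve
)
```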