Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation

Authors: Jiachen Liang, Ruibing Hou, Hong Chang, Bingpeng Ma, Shiguang Shan, Xilin Chen

Venue: NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our proposed SSFA is applicable to various pseudo-label-based SSL learners and significantly improves performance in labeled, unlabeled, and even unseen distributions.
Researcher Affiliation | Academia | (1) Institute of Computing Technology, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences
Pseudocode | No | The paper does not include any figure, block, or section explicitly labeled "Pseudocode" or "Algorithm", nor does it present structured steps for a method formatted like code.
Open Source Code | No | The paper neither states that the authors release code for the described methodology nor provides a link to a source-code repository.
Open Datasets | Yes | For the image corruption experiments, we create the labeled domain by sampling images from CIFAR100 [20], and the unlabeled domain by sampling images from a mixed dataset consisting of CIFAR100 and CIFAR100 with Corruptions (CIFAR100-C). ... For the style change experiments, we use OFFICE-31 [27] and OFFICE-HOME [36] benchmarks.
Dataset Splits | No | The paper mentions the amounts of labeled data used (e.g., 400, 4000, or 10000 labeled examples) and describes evaluation protocols for labeled, unlabeled, and unseen distributions, but it does not specify training/validation/test splits (as percentages or counts) or reference predefined splits, beyond stating the test distributions.
Hardware Specification | No | The paper specifies the backbone architectures (e.g., WRN-28-8, ResNet-50) but gives no details about the hardware used to run the experiments, such as GPU/CPU models or memory.
Software Dependencies | No | The paper does not provide version numbers for any software components, libraries, or frameworks used in the experiments (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | No | The paper states that 'λu and λa are hyper-parameters denoting the relative weights of Lu and Laux respectively' and discusses the 'confident threshold τ', but it does not list the values of these or other common hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings).
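
The Open Datasets row quotes the paper's construction of an unlabeled domain mixing CIFAR100 with CIFAR100-C. Since no code is released, the sketch below is only our illustration of how such a mixture could be assembled; the corruption type, severity level, and sampling ratios are assumptions the paper does not specify.

```python
import numpy as np
import torch
import torchvision.transforms as T
from torch.utils.data import ConcatDataset, Dataset, Subset
from torchvision.datasets import CIFAR100

# Assumed choices -- the paper does not state which corruptions,
# severities, or mixing ratios were used.
CIFAR100C_DIR = "./CIFAR-100-C"             # .npy files (Hendrycks & Dietterich)
CORRUPTION, SEVERITY = "gaussian_noise", 3  # severity in 1..5

class CIFAR100C(Dataset):
    """CIFAR100-C: each corruption file holds 50k images, stored as five
    consecutive blocks of 10k images (one block per severity level)."""
    def __init__(self, root, corruption, severity):
        images = np.load(f"{root}/{corruption}.npy")   # (50000, 32, 32, 3)
        labels = np.load(f"{root}/labels.npy")         # (50000,)
        lo, hi = (severity - 1) * 10000, severity * 10000
        self.images, self.labels = images[lo:hi], labels[lo:hi]

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        x = torch.from_numpy(self.images[i]).permute(2, 0, 1).float() / 255.0
        return x, int(self.labels[i])

clean = CIFAR100(root="./data", train=True, download=True, transform=T.ToTensor())
corrupted = CIFAR100C(CIFAR100C_DIR, CORRUPTION, SEVERITY)

# Unlabeled domain: a mix of clean and corrupted images (2:1 ratio is our guess).
rng = np.random.default_rng(0)
unlabeled = ConcatDataset([
    Subset(clean, rng.choice(len(clean), 20000, replace=False).tolist()),
    Subset(corrupted, rng.choice(len(corrupted), 10000, replace=False).tolist()),
])
```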
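
The Experiment Setup row quotes λu and λa as the relative weights of the unsupervised loss Lu and the auxiliary self-supervised loss Laux, alongside a confidence threshold τ, suggesting a total objective of the form L = Ls + λu·Lu + λa·Laux. Because the paper releases neither code nor hyperparameter values, the following is only a generic sketch of a FixMatch-style pseudo-label loss combined with a rotation-prediction auxiliary loss: the loss forms, the choice of rotation as the self-supervised task, the model interface, and all default values are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Assumed hyper-parameter values -- the paper does not report them.
lambda_u, lambda_a, tau = 1.0, 1.0, 0.95

def ssl_losses(model, rot_head, x_lab, y_lab, x_weak, x_strong):
    """Generic objective L = Ls + lambda_u * Lu + lambda_a * Laux.

    Assumes `model(x)` returns (features, class_logits) and `rot_head`
    maps features to 4 rotation logits (both hypothetical interfaces).
    Ls   : supervised cross-entropy on labeled data.
    Lu   : consistency loss on pseudo-labels from the weakly augmented
           view, kept only where confidence exceeds tau (FixMatch-style).
    Laux : self-supervised auxiliary loss (here: rotation prediction).
    """
    _, logits_lab = model(x_lab)
    loss_s = F.cross_entropy(logits_lab, y_lab)

    # Pseudo-labels from the weak view; no gradient through the targets.
    with torch.no_grad():
        _, logits_w = model(x_weak)
        conf, pseudo = logits_w.softmax(dim=-1).max(dim=-1)
        mask = (conf >= tau).float()
    _, logits_s = model(x_strong)
    loss_u = (F.cross_entropy(logits_s, pseudo, reduction="none") * mask).mean()

    # Auxiliary task: predict one of four rotations of the unlabeled images.
    k = torch.randint(0, 4, (x_weak.size(0),), device=x_weak.device)
    x_rot = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                         for img, r in zip(x_weak, k)])
    feats_rot, _ = model(x_rot)
    loss_aux = F.cross_entropy(rot_head(feats_rot), k)

    return loss_s + lambda_u * loss_u + lambda_a * loss_aux
```

This is exactly the kind of detail the "No" result flags: without the actual values of λu, λa, and τ (and the optimizer, batch size, and schedule), any reimplementation has to guess at these knobs.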