Universal Semi-Supervised Learning
Authors: Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Exhaustive experiments on several benchmark datasets show the effectiveness of our method in tackling open-set problems. In this section, we first specify the implementation details in Section 4.1. Then, to thoroughly validate the proposed CAFA approach, we conduct extensive experiments under different scenarios of open-set SSL by comparing our method with popular closed-set methods as well as several existing open-set methods in Section 4.2. Finally, we present the detailed performance analysis of our method in Section 4.3. |
| Researcher Affiliation | Collaboration | Zhuo Huang1,2,3, Chao Xue3, Bo Han4, Jian Yang1,2, Chen Gong1,2,3 (corresponding author). 1PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; 2Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology; 3JD Explore Academy; 4Department of Computer Science, Hong Kong Baptist University |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is openly available or provide a direct link to a repository in the main text. |
| Open Datasets | Yes | Datasets. We use CIFAR-10 [23], Office-31 [35], and VisDA2017 [33] to evaluate our method. |
| Dataset Splits | No | The paper specifies the number of labeled and unlabeled instances but does not provide specific training, validation, or test dataset splits or percentages. |
| Hardware Specification | Yes | We implement all methods in PyTorch and run all experiments on a single Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with version details. |
| Experiment Setup | Yes | The batch size is set to 100 for the CIFAR-10 dataset and 64 for the other datasets. We adopt the SGD optimizer with initial learning rate 3×10⁻⁴. The perturbation magnitude ϵ is set to 0.014 and the Beta distribution parameter α is set to 0.75. |
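The quoted hyperparameters can be collected into a minimal configuration sketch. The paper only states the values themselves; treating α = 0.75 as the parameter of a symmetric Beta distribution for a mixup-style mixing coefficient is an assumption based on common practice, not something the quoted text confirms, and the `sample_mix_coefficient` helper below is hypothetical.

```python
import numpy as np

# Hyperparameters as quoted in the paper's experiment setup.
CONFIG = {
    "batch_size_cifar10": 100,   # CIFAR-10
    "batch_size_other": 64,      # Office-31 / VisDA2017
    "optimizer": "SGD",
    "learning_rate": 3e-4,       # initial learning rate
    "epsilon": 0.014,            # perturbation magnitude
    "alpha": 0.75,               # Beta distribution parameter
}


def sample_mix_coefficient(alpha=CONFIG["alpha"], rng=None):
    """Sample a mixing coefficient lambda ~ Beta(alpha, alpha).

    ASSUMPTION: the paper only gives alpha = 0.75; using it as a
    symmetric Beta parameter for mixup-style interpolation is a
    conventional reading, not stated in the quoted text.
    """
    rng = rng or np.random.default_rng()
    return float(rng.beta(alpha, alpha))


lam = sample_mix_coefficient(rng=np.random.default_rng(0))
# A Beta-distributed coefficient always lies in [0, 1].
assert 0.0 <= lam <= 1.0
```

Such a coefficient would typically be used to interpolate pairs of inputs, e.g. `x_mixed = lam * x_a + (1 - lam) * x_b`, but the exact role of α in the method should be checked against the paper itself.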