Robust Semi-Supervised Learning when Not All Classes have Labels
Authors: Lan-Zhe Guo, Yi-Ge Zhang, Zhi-Fan Wu, Jie-Jing Shao, Yu-Feng Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical results show our approach achieves significant performance improvement in both seen and unseen classes compared with previous studies. |
| Researcher Affiliation | Collaboration | Lan-Zhe Guo, Yi-Ge Zhang, Zhi-Fan Wu, Jie-Jing Shao, Yu-Feng Li, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China {guolz,zhangyg,wuzf,shaojj,liyf}@lamda.nju.edu.cn. ... This research was supported by the National Key R&D Program of China (2022YFC3340901), the National Science Foundation of China (62176118, 61921006), and the Huawei Cooperation Fund. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://www.lamda.nju.edu.cn/code_NACH.ashx |
| Open Datasets | Yes | We evaluate NACH and compared methods on three SSL benchmark datasets CIFAR-10, CIFAR-100 [18] and ImageNet [27]. |
| Dataset Splits | No | The paper describes how classes are divided (e.g., 50% seen and 50% unseen, then 50% of seen classes as labeled data), but does not provide explicit train/validation/test dataset splits (e.g., percentages or sample counts for each split) needed to reproduce the experiment's data partitioning. A sketch of the described class split follows the table. |
| Hardware Specification | Yes | All experiments are performed on a single NVIDIA 3090 GPU. |
| Software Dependencies | No | The paper mentions using ResNet-18 and ResNet-50 as backbone models and SimCLR, but does not provide specific version numbers for software dependencies (e.g., PyTorch version, Python version). |
| Experiment Setup | Yes | For CIFAR datasets, we use ResNet-18 as the backbone model. The model is trained by using the standard Stochastic Gradient Descent method with a momentum of 0.9 and a weight decay of 0.0005. We trained the model for 200 epochs with a batch size of 512. A minimal training-loop sketch follows the table. |
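
The class division described in the Dataset Splits row can be made concrete with a short sketch. This is only an assumption-based reading of the stated 50%/50% protocol (half the classes seen, half of the seen-class samples labeled), not the authors' released code; the function and variable names (`split_classes`, `seen_ratio`, `labeled_ratio`) are hypothetical.

```python
# Hypothetical illustration of the class-split protocol described in the
# Dataset Splits row: 50% of classes are "seen", 50% are "unseen", and 50%
# of the seen-class samples are labeled. Not the authors' implementation.
import numpy as np

def split_classes(labels, num_classes=10, seen_ratio=0.5, labeled_ratio=0.5, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)

    # First half of the classes are treated as seen, the rest as unseen.
    seen_classes = np.arange(int(num_classes * seen_ratio))

    seen_idx = np.where(np.isin(labels, seen_classes))[0]
    unseen_idx = np.where(~np.isin(labels, seen_classes))[0]

    # Label a fraction of the seen-class samples; everything else is unlabeled.
    rng.shuffle(seen_idx)
    n_labeled = int(len(seen_idx) * labeled_ratio)
    labeled_idx = seen_idx[:n_labeled]
    unlabeled_idx = np.concatenate([seen_idx[n_labeled:], unseen_idx])

    return labeled_idx, unlabeled_idx
```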
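
The training configuration quoted in the Experiment Setup row maps onto a standard PyTorch loop. The sketch below assumes a CIFAR-style setup with a plain cross-entropy loss; the learning rate and the `train_dataset` handle are placeholders not given in the quoted text, and this is not the NACH training procedure itself.

```python
# Minimal sketch of the reported setup: ResNet-18 backbone, SGD with
# momentum 0.9 and weight decay 0.0005, 200 epochs, batch size 512.
# Learning rate, loss, and dataset are assumptions for illustration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models import resnet18

model = resnet18(num_classes=10)      # backbone reported for the CIFAR experiments
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,                           # assumed; the quoted text omits the learning rate
    momentum=0.9,
    weight_decay=5e-4,
)
criterion = nn.CrossEntropyLoss()     # placeholder loss, not the NACH objective

def train(train_dataset, epochs=200, batch_size=512, device="cuda"):
    loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    model.to(device)
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```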