InstanT: Semi-supervised Learning with Instance-dependent Thresholds
Authors: Muyang Li, Runze Wu, Haoyu Liu, Jun Yu, Xun Yang, Bo Han, Tongliang Liu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we show that our proposed method is able to surpass state-of-the-art (SOTA) SSL methods across multiple commonly used benchmark datasets. |
| Researcher Affiliation | Collaboration | Muyang Li¹, Runze Wu², Haoyu Liu², Jun Yu³, Xun Yang³, Bo Han⁴, Tongliang Liu¹ — ¹Sydney AI Center, The University of Sydney; ²FUXI AI Lab, NetEase; ³University of Science and Technology of China; ⁴Hong Kong Baptist University |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. Figure 2 provides an illustration but is not a pseudocode representation. |
| Open Source Code | Yes | Our implementation is available at https://github.com/tmllab/2023_NeurIPS_InstanT. |
| Open Datasets | Yes | We use three benchmark datasets for evaluating the performances of InstanT, they are: CIFAR-10, CIFAR-100 [21], and STL-10 [13]. |
| Dataset Splits | Yes | To ensure fair comparison between our method and all baselines, and to allow simple reproduction of our experimental results, we implemented InstanT and conducted all experiments within the USB (Unified SSL Benchmark) framework [40]. |
| Hardware Specification | Yes | Running time is tested on NVIDIA RTX 4090 GPUs. |
| Software Dependencies | No | The paper mentions software like 'AdamW [28]' and the 'USB (Unified SSL Benchmark) framework [40]', but it does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | We use AdamW [28] as the default optimizer, where the learning rate is set as 5e-4 for CIFAR-10/100 and 1e-4 for STL-10 [40]. The total training iterations K are set as 204,800 for all datasets. The training batch size is set as 8 for all datasets. For all methods, we apply temperature scaling for probability calibration by default, and the scaling factor is set as 0.5. For FixMatch, Dash, FlexMatch and AdaMatch, τ is set as 0.95. For UDA, τ is set as 0.8 and the temperature scaling factor is set as 0.4, according to the recommendation of the original paper. For Dash, γ is set as 1.27, C is set as 1.0001, ρ is set as 0.05, and the warm-up period is set as 5,120 iterations. For InstanT, the base τ is set as 0.9 by default. (See the configuration sketch below the table.) |
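To make the reported hyperparameters easier to scan, the following is a minimal sketch that collects the values quoted in the Experiment Setup row into a plain Python configuration. It is an illustrative summary only, not code from the authors' repository; the key names (e.g. `optimizer`, `base_tau`) and the `build_config` helper are hypothetical and do not necessarily match the USB framework's config schema.

```python
# Hypothetical summary of the hyperparameters reported in the Experiment Setup row.
# Key names are illustrative; they do not necessarily match the USB config schema.

COMMON = {
    "optimizer": "AdamW",          # default optimizer [28]
    "total_iterations": 204_800,   # total training iterations K, all datasets
    "batch_size": 8,               # training batch size, all datasets
    "temperature_scaling": 0.5,    # probability-calibration scaling factor (default)
}

PER_DATASET_LR = {
    "cifar10": 5e-4,
    "cifar100": 5e-4,
    "stl10": 1e-4,
}

PER_METHOD = {
    # FixMatch, FlexMatch and AdaMatch use a fixed confidence threshold tau = 0.95
    "fixmatch": {"tau": 0.95},
    "flexmatch": {"tau": 0.95},
    "adamatch": {"tau": 0.95},
    # Dash additionally uses gamma, C, rho and a warm-up period
    "dash": {"tau": 0.95, "gamma": 1.27, "C": 1.0001, "rho": 0.05,
             "warmup_iterations": 5_120},
    # UDA uses a lower threshold and its own temperature, per its original paper
    "uda": {"tau": 0.8, "temperature_scaling": 0.4},
    # InstanT starts from a base threshold of 0.9 (adapted per instance)
    "instant": {"base_tau": 0.9},
}

def build_config(method: str, dataset: str) -> dict:
    """Merge shared, per-dataset, and per-method settings (illustrative only)."""
    cfg = dict(COMMON)
    cfg["learning_rate"] = PER_DATASET_LR[dataset]
    cfg.update(PER_METHOD[method])
    return cfg

if __name__ == "__main__":
    print(build_config("instant", "cifar10"))
```

A flat dictionary merge is used here purely for readability; the actual experiments were run through the USB framework's own configuration files, as stated in the Dataset Splits row.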