Repetitive Reprediction Deep Decipher for Semi-Supervised Learning

Authors: Guo-Hua Wang, Jianxin Wu

AAAI 2020, pp. 6170–6177

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, the proposed R2-D2 method is tested on the large-scale ImageNet dataset and outperforms state-of-the-art methods by 5 percentage points. Experiments: In this section, we use four datasets to evaluate our algorithm: ImageNet (Russakovsky et al. 2015), CIFAR-100 (Krizhevsky and Hinton 2009), CIFAR-10 (Krizhevsky and Hinton 2009), and SVHN (Netzer et al. 2011). We first use an ablation study to investigate the impact of the R2 strategy. We then report the results on these datasets to compare with the state of the art.
Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Pseudocode | No | The paper describes "The overall R2-D2 algorithm" in paragraph form, but does not provide a formally structured pseudocode or algorithm block (an illustrative sketch follows this table).
Open Source Code | No | The paper does not provide any statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | ImageNet (Russakovsky et al. 2015), CIFAR-100 (Krizhevsky and Hinton 2009), CIFAR-10 (Krizhevsky and Hinton 2009), and SVHN (Netzer et al. 2011).
Dataset Splits | Yes | Following the prior work (Qiao et al. 2018; Sajjadi, Javanmardi, and Tasdizen 2016; Pu et al. 2016; Tarvainen and Valpola 2017), we uniformly choose 10% of the training images as labeled data. That means there are 128 labeled images for each category. The rest of the training images are considered unlabeled data. We test our model on the validation set. For CIFAR-100 ... we use 10,000 images (100 per class) as labeled data and the remaining 40,000 as unlabeled data. (A sketch of this split follows the table.)
Hardware Specification | Yes | All experiments were implemented using the PyTorch framework and run on a computer with a TITAN Xp GPU.
Software Dependencies | No | The paper mentions using the "PyTorch framework" but does not specify a version number for PyTorch or any other software library (a version-logging snippet follows the table).
Experiment Setup | Yes | We set α = 0.1, β = 0.03, and λ = 4000 on all datasets, which shows the robustness of our method to these hyperparameters. Other hyperparameters (e.g., batch size, learning rate, and weight decay) were set according to the dataset. The training can be divided into three stages. (A config sketch follows the table.)
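
The paper gives "The overall R2-D2 algorithm" only in prose, so the following is a reader's minimal sketch of a generic three-stage pseudo-labeling loop with repeated reprediction, under heavy assumptions: the stage contents, optimizer choice, and all names here are hypothetical, and this is not the authors' verified procedure. Only the three-stage structure and the idea of re-predicting pseudo-labels are taken from the report above.

# Hypothetical sketch of a three-stage pseudo-label loop; NOT the
# authors' algorithm. Assumes unlabeled_loader yields bare image
# batches in a fixed (unshuffled) order.
import torch
import torch.nn.functional as F

def train_r2d2_like(model, labeled_loader, unlabeled_loader,
                    stage_epochs=(30, 30, 30)):
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # Stage 1 (assumed): supervised warm-up on the labeled subset.
    for _ in range(stage_epochs[0]):
        for x, y in labeled_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

    # Stage 2 (assumed): repeatedly re-predict pseudo-labels for the
    # unlabeled data, then train on those pseudo-labels.
    for _ in range(stage_epochs[1]):
        model.eval()
        with torch.no_grad():
            pseudo = [model(x).argmax(dim=1) for x in unlabeled_loader]
        model.train()
        for x, y_hat in zip(unlabeled_loader, pseudo):
            opt.zero_grad()
            F.cross_entropy(model(x), y_hat).backward()
            opt.step()

    # Stage 3 (assumed): joint fine-tuning on labeled plus pseudo-labeled
    # data would follow; the report does not give the exact schedule.
    return model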
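
The CIFAR-100 split quoted above (100 labeled images per class; 10,000 labeled and 40,000 unlabeled) is mechanical enough to sketch. The use of torchvision and the fixed seed below are assumptions; the paper does not say how the uniform choice was seeded.

# Minimal sketch of the quoted CIFAR-100 split: 100 labeled images per
# class, the rest unlabeled. Seed and torchvision usage are assumed.
import numpy as np
from torchvision.datasets import CIFAR100

train = CIFAR100(root="./data", train=True, download=True)
targets = np.array(train.targets)

rng = np.random.default_rng(0)              # assumed seed
labeled_idx, unlabeled_idx = [], []
for c in range(100):                        # uniform per-class sampling
    idx = rng.permutation(np.where(targets == c)[0])
    labeled_idx.extend(idx[:100])           # 100 labeled per class
    unlabeled_idx.extend(idx[100:])         # remainder is unlabeled

assert len(labeled_idx) == 10_000 and len(unlabeled_idx) == 40_000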
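
Since no software versions are reported, anyone re-running the experiments may want to log the environment up front; a trivial example (not from the paper):

# Record library versions for a rerun; the paper pins none of these.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())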
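
The three hyperparameters the paper fixes across all datasets can be collected into a single config; every value marked as a placeholder below is hypothetical, since the paper only says batch size, learning rate, and weight decay vary by dataset.

# alpha, beta, and lambda are quoted from the paper; placeholders are
# hypothetical values NOT reported by the authors.
config = {
    "alpha": 0.1,          # from the paper, fixed across all datasets
    "beta": 0.03,          # from the paper
    "lambda": 4000,        # from the paper
    "stages": 3,           # training is divided into three stages
    "batch_size": 128,     # placeholder, dataset-dependent
    "lr": 0.1,             # placeholder, dataset-dependent
    "weight_decay": 5e-4,  # placeholder, dataset-dependent
}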