Unsupervised Semantic Aggregation and Deformable Template Matching for Semi-Supervised Learning

Authors: Tao Han, Junyu Gao, Yuan Yuan, Qi Wang

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments and analysis across four standard semi-supervised learning benchmarks validate that USADTM achieves top performance (e.g., 90.46% accuracy on CIFAR-10 with 40 labels and 95.20% accuracy with 250 labels).
Researcher Affiliation | Academia | Tao Han, Junyu Gao, Yuan Yuan and Qi Wang, School of Computer Science and Center for OPTical IMagery Analysis and Learning, Northwestern Polytechnical University, Xi'an, Shaanxi, P.R. China.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is released at https://github.com/taohan10200/USADTM.
Open Datasets | Yes | CIFAR-10 and CIFAR-100 [18] are large datasets of tiny RGB images with size 32x32. (...) STL-10 [38] is a dataset designed for unsupervised and semi-supervised learning. (...) The SVHN [48] dataset is derived from Google Street View house numbers.
Dataset Splits | Yes | CIFAR-10 and CIFAR-100 (...) Both sets provide 50,000 training labels and 10,000 validation labels. STL-10 (...) consists of 5,000 training labels and 8,000 validation labels. (A sketch of drawing such a labeled split follows the table.)
Hardware Specification | Yes | The training and evaluation are performed on an NVIDIA GTX 1080Ti GPU.
Software Dependencies | No | The paper mentions using a 'PyTorch version of the FixMatch framework [49]' but does not provide specific version numbers for PyTorch or other software libraries.
Experiment Setup | Yes | The setting of batch size for labeled data and unlabeled data follows [22]. The SGD algorithm with a 0.03 initial learning rate is adopted to optimize the network. (...) The T-MI loss does not join the training until after some epochs. (...) The purpose of setting the threshold τ is to filter out part of the wrong allocations. In the absence of special instructions, τ is generally set to 0.85. (...) K = len(Xl)/C * 2. (...) α denotes the weighting parameter of the mutual information loss. (...) The recommended setting is 0.1. (A configuration sketch with these values follows the table.)
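
The Dataset Splits row reports 50,000 CIFAR-10 training images from which as few as 40 labels are drawn. Below is a minimal sketch, not the authors' code, of how such a class-balanced 40-label split could be sampled; the random seed, sampling strategy, and the choice to treat the remaining images as the unlabeled pool are all assumptions.

```python
# Hypothetical sketch: draw a class-balanced 40-label split from CIFAR-10's
# 50,000 training images. Seed and sampling strategy are assumptions.
import numpy as np
from torchvision import datasets

NUM_CLASSES = 10
LABELS_PER_CLASS = 4                      # 40 labels total (smallest CIFAR-10 setting)

train_set = datasets.CIFAR10(root="./data", train=True, download=True)
targets = np.array(train_set.targets)

rng = np.random.default_rng(seed=0)
labeled_idx = np.concatenate([
    rng.choice(np.where(targets == c)[0], size=LABELS_PER_CLASS, replace=False)
    for c in range(NUM_CLASSES)
])
# Here the unlabeled pool is simply the remaining images; SSL frameworks differ
# on whether the labeled images are also reused as unlabeled data.
unlabeled_idx = np.setdiff1d(np.arange(len(targets)), labeled_idx)
print(len(labeled_idx), len(unlabeled_idx))  # 40 49960
```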
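
The Experiment Setup row quotes several concrete hyper-parameters: SGD with an initial learning rate of 0.03, a confidence threshold τ = 0.85 for filtering wrong pseudo-label assignments, a recommended weight α = 0.1 for the mutual-information (T-MI) loss, and K = len(Xl)/C * 2. The sketch below only wires up those quoted values; the placeholder backbone, momentum/Nesterov settings, and the exact form of the losses are assumptions, not the paper's implementation.

```python
# Hypothetical configuration sketch; only lr = 0.03, tau = 0.85, alpha = 0.1, and
# K = len(X_l) / C * 2 are values quoted from the paper. Everything else
# (backbone, momentum, loss forms) is a placeholder assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10                      # C
NUM_LABELED = 40                      # len(X_l) in the 40-label CIFAR-10 setting
K = NUM_LABELED // NUM_CLASSES * 2    # K = len(X_l) / C * 2, as quoted
TAU = 0.85                            # threshold filtering out wrong pseudo-label assignments
ALPHA = 0.1                           # recommended weight of the T-MI loss

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, NUM_CLASSES))  # placeholder backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9, nesterov=True)

def confident_pseudo_label_loss(logits_weak: torch.Tensor, logits_strong: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against pseudo-labels from the weakly augmented view,
    kept only where predicted confidence exceeds TAU."""
    probs = torch.softmax(logits_weak.detach(), dim=1)
    confidence, targets = probs.max(dim=1)
    mask = (confidence >= TAU).float()
    per_sample = F.cross_entropy(logits_strong, targets, reduction="none")
    return (mask * per_sample).mean()

# Total objective sketch (the T-MI term only joins training after a warm-up period):
#   loss = supervised_ce + confident_pseudo_label_loss(weak, strong) + ALPHA * t_mi_loss
```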