OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers

Authors: Kuniaki Saito, Donghyun Kim, Kate Saenko

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the efficacy of OpenMatch on several SSL image classification benchmarks. Specifically, we perform experiments with varying amounts of labeled data and varying numbers of known/unknown classes on CIFAR10/100 [21] and ImageNet [9]. Tables 1 and 2 describe the error rate on inliers and AUROC values respectively.
Researcher Affiliation | Collaboration | Kuniaki Saito (1), Donghyun Kim (1), Kate Saenko (1,2); (1) Boston University, (2) MIT-IBM Watson AI Lab
Pseudocode | Yes | Algorithm 1: OpenMatch Algorithm.
Open Source Code | Yes | The code is available at https://github.com/VisionLearningGroup/OP_Match.
Open Datasets | Yes | Specifically, we perform experiments with varying amounts of labeled data and varying numbers of known/unknown classes on CIFAR10/100 [21] and ImageNet [9].
Dataset Splits | No | The hyper-parameters are set by tuning on a validation set that contains a small number of labeled samples. Note that the validation set does not contain any outliers. A complete list of hyper-parameters is reported in the appendix.
Hardware Specification | Yes | Each experiment is done with a single 12-GB GPU, such as an NVIDIA Titan X.
Software Dependencies | No | The paper mentions models and frameworks used (e.g., 'FixMatch', 'ResNet-18', 'SimCLR') but does not specify versions of programming languages or software libraries like Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | Note that we use an identical set of hyper-parameters except for λoc, which is tuned on each dataset. λem is set to 0.1 in all experiments. λfm is set to 0 before Efix epochs and then set to 1 for all experiments. Efix is set to 10 in all experiments. The hyper-parameters for FixMatch, e.g., data augmentation, confidence threshold, are fixed across all experiments for simplicity.
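
The hyper-parameter schedule quoted in the Experiment Setup row can be summarized in a short sketch. This is a minimal illustration only, assuming hypothetical names (lambda_oc, lambda_em, lambda_fm, E_FIX) for the loss weights and the switch-over epoch; the lambda_oc value is a placeholder, since the paper tunes it per dataset, and none of this is taken from the official OP_Match repository.

```python
# Minimal sketch of the hyper-parameter schedule described in the paper's setup.
# Names and the lambda_oc value are illustrative assumptions, not the authors' code.

E_FIX = 10          # epoch after which the FixMatch consistency term is enabled
LAMBDA_EM = 0.1     # weight of the entropy-minimization term (fixed in all experiments)
LAMBDA_OC = 0.5     # placeholder: open-set consistency weight, tuned per dataset in the paper


def loss_weights(epoch: int, lambda_oc: float = LAMBDA_OC) -> dict:
    """Return the loss weights used at a given training epoch."""
    lambda_fm = 0.0 if epoch < E_FIX else 1.0  # 0 before E_FIX epochs, then 1
    return {"lambda_oc": lambda_oc, "lambda_em": LAMBDA_EM, "lambda_fm": lambda_fm}


if __name__ == "__main__":
    print(loss_weights(epoch=5))   # lambda_fm is still 0.0
    print(loss_weights(epoch=20))  # lambda_fm has switched to 1.0
```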