OwMatch: Conditional Self-Labeling with Consistency for Open-World Semi-Supervised Learning

Authors: Shengjie Niu, Lifan Lin, Jian Huang, Chao Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive empirical analyses demonstrate that our method yields substantial performance enhancements across both known and unknown classes in comparison to previous studies. Section 5, Experiments: This section presents a comprehensive evaluation of our approach, including experimental results and in-depth analysis demonstrating its effectiveness.
Researcher Affiliation | Academia | (1) Hong Kong Polytechnic University, (2) Southern University of Science and Technology. shengjie.niu@connect.polyu.hk, 12012816@mail.sustech.edu.cn, j.huang@polyu.edu.hk, wangc6@sustech.edu.cn
Pseudocode | No | The paper contains no figures, blocks, or sections explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present structured method steps formatted like code.
Open Source Code | Yes | Code is available at https://github.com/niusj03/OwMatch.
Open Datasets | Yes | Datasets: We evaluate our approach on CIFAR-10/100 [25], ImageNet-100 [11], and Tiny ImageNet [28]; a detailed description of these datasets is provided in Appendix A. Table 9 lists, for each benchmark, the number of classes, train/test sample counts, selected backbone, and training batch size. (See the data-loading sketch after the table.)
Dataset Splits | No | The paper describes training data and labeled/unlabeled data, but does not explicitly mention a dedicated validation split (e.g., for hyperparameter tuning) or its size/proportion.
Hardware Specification | Yes | All experiments are carried out on an NVIDIA Tesla A100 GPU with 40 GB memory.
Software Dependencies | Yes | The foundational algorithm of our study is constructed utilizing Python 3.8 and PyTorch 2.1 [31].
Experiment Setup | Yes | Implementation details: For a fair comparison, ResNet-50 [22] is used as the backbone for ImageNet-100 and ResNet-18 for the other benchmarks. The model is trained with a batch size of 256 for Tiny ImageNet and 512 for the other benchmarks. Backbone and prototype parameters are jointly optimized with standard Stochastic Gradient Descent (SGD) with momentum, under a cosine annealing learning rate schedule for all experiments. A comprehensive list of hyperparameters is given in Table 11. (See the training-setup sketch after the table.)
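
For context on the Open Datasets row, below is a minimal sketch of loading the named benchmarks. CIFAR-10/100 ship with torchvision; Tiny ImageNet and ImageNet-100 are not torchvision datasets, so they are assumed here to be pre-downloaded ImageFolder-style directories. The paths, transform, and loader settings are illustrative assumptions, not values from the paper.

```python
# Sketch: loading the evaluation benchmarks (CIFAR-10/100 via torchvision;
# Tiny ImageNet assumed as a pre-downloaded ImageFolder directory).
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10, CIFAR100, ImageFolder

transform = T.Compose([T.ToTensor()])  # the paper's augmentations are not reproduced here

cifar10 = CIFAR10(root="./data", train=True, download=True, transform=transform)
cifar100 = CIFAR100(root="./data", train=True, download=True, transform=transform)
# Assumed local path; Tiny ImageNet must be downloaded separately.
tiny_imagenet = ImageFolder("./data/tiny-imagenet-200/train", transform=transform)

# Batch size 512 matches the reported setting for CIFAR-scale benchmarks.
loader = DataLoader(cifar100, batch_size=512, shuffle=True, num_workers=4)
```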
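The Experiment Setup row describes the optimization recipe: a ResNet backbone and prototype parameters optimized jointly with SGD with momentum under a cosine annealing schedule. The sketch below illustrates that configuration in PyTorch. The learning rate, momentum, weight decay, epoch count, and the `prototypes` head are illustrative assumptions standing in for the paper's Table 11 values and prototype layer, not the authors' implementation.

```python
# Sketch: joint SGD optimization of backbone + prototype parameters
# with a cosine annealing learning rate schedule.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=100)                    # backbone for CIFAR-100-scale benchmarks
prototypes = torch.nn.Linear(512, 100, bias=False)   # assumed stand-in for the prototype head

optimizer = torch.optim.SGD(
    list(model.parameters()) + list(prototypes.parameters()),
    lr=0.1, momentum=0.9, weight_decay=5e-4,         # assumed values, not from Table 11
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

for epoch in range(200):
    # ... one training epoch over labeled and unlabeled batches would go here ...
    scheduler.step()  # anneal the learning rate once per epoch
```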