Binary Decomposition: A Problem Transformation Perspective for Open-Set Semi-Supervised Learning
Authors: Jun-Yi Hang, Min-Ling Zhang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on diversified benchmarks clearly validate the superiority of BDMatch as well as the effectiveness of our binary decomposition strategy. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China. Correspondence to: Min-Ling Zhang <zhangml@seu.edu.cn>. |
| Pseudocode | No | The paper describes the BDMatch approach using textual explanations and mathematical equations, but it does not include any pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Code package of BDMatch is publicly available at: http://palm.seu.edu.cn/zhangml/files/BDMatch.rar. |
| Open Datasets | Yes | CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009) with different numbers of labeled data and diversified open-set settings. |
| Dataset Splits | No | The paper defines a 'labeled set' and an 'unlabeled set' for training, and an 'evaluation set' for testing, but it does not specify a separate 'validation set' or a distinct validation split for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not specify any particular hardware used for experiments, such as GPU or CPU models, only mentioning model architectures like Wide ResNet-28-2 and ResNet-18. |
| Software Dependencies | No | The paper mentions using a 'unified codebase based on USB (Wang et al., 2022b)' and 'SGD optimizer' but does not provide specific version numbers for programming languages, libraries, or other software dependencies. |
| Experiment Setup | Yes | The scaling parameter τ controlling the strength of balance is set as 0.5 and the momentum factor µ in Eq.(10) is set as 0.999. Models are all trained by the SGD optimizer and the learning rate is set as 0.03 with a cosine decay. For CIFAR experiments, models are trained for 256 * 1024 iterations and each iteration contains a batch of 64 labeled samples and 64 * 7 unlabeled samples. For ImageNet experiments, models are trained for 100 * 1024 iterations and each iteration contains a batch of 32 labeled samples and 32 unlabeled samples. The threshold ρ is set to 0.99 in this paper. |
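The reported setup translates into a compact training configuration. The snippet below is a minimal sketch assuming a PyTorch-style pipeline: the iteration counts, batch sizes, learning rate, τ, µ, and ρ come from the row above, while the function name `build_optimizer`, the SGD momentum, and the weight decay are illustrative assumptions not stated in the paper, and this is not the authors' released code.

```python
# Sketch of the reported CIFAR training configuration (assumptions marked inline).
import torch

CIFAR_CONFIG = {
    "total_iters": 256 * 1024,       # training iterations reported for CIFAR
    "labeled_batch_size": 64,        # labeled samples per iteration
    "unlabeled_batch_size": 64 * 7,  # unlabeled samples per iteration
    "lr": 0.03,                      # SGD learning rate, decayed with a cosine schedule
    "tau": 0.5,                      # scaling parameter controlling the strength of balance
    "mu_ema": 0.999,                 # momentum factor in Eq.(10)
    "rho": 0.99,                     # confidence threshold
}

def build_optimizer(model, cfg=CIFAR_CONFIG):
    # SGD momentum and weight decay are not reported in the table above;
    # the values here are common choices in semi-supervised learning codebases.
    optimizer = torch.optim.SGD(
        model.parameters(), lr=cfg["lr"], momentum=0.9,
        weight_decay=5e-4, nesterov=True,
    )
    # Plain cosine annealing over the full schedule; the exact cosine form
    # used by the authors may differ.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=cfg["total_iters"],
    )
    return optimizer, scheduler
```

For the ImageNet experiments the same sketch would use 100 * 1024 iterations with 32 labeled and 32 unlabeled samples per iteration, per the row above.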