InfoMatch: Entropy Neural Estimation for Semi-Supervised Image Classification
Authors: Qi Han, Zhibo Tian, Chengwei Xia, Kun Zhan
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we show its superior performance. ... We evaluate InfoMatch on well-known benchmark datasets, including CIFAR-10/100 [Krizhevsky and Hinton, 2009], SVHN [Netzer et al., 2011], STL-10 [Coates et al., 2011], and ImageNet [Deng et al., 2009]. |
| Researcher Affiliation | Academia | Qi Han, Zhibo Tian, Chengwei Xia, Kun Zhan School of Information Science and Engineering, Lanzhou University kzhan@lzu.edu.cn |
| Pseudocode | Yes | Algorithm 1 The InfoMatch algorithm. |
| Open Source Code | Yes | The source code is available at https://github.com/kunzhan/InfoMatch. |
| Open Datasets | Yes | We evaluate InfoMatch on well-known benchmark datasets, including CIFAR-10/100 [Krizhevsky and Hinton, 2009], SVHN [Netzer et al., 2011], STL-10 [Coates et al., 2011], and ImageNet [Deng et al., 2009]. |
| Dataset Splits | No | The paper mentions 'InfoMatch performance is then evaluated using the EMA with a parameter of 0.999' and 'self-adaptive thresholding method' but does not specify explicit validation dataset splits (e.g., percentages or counts for a dedicated validation set). |
| Hardware Specification | No | The paper mentions using specific model architectures like ResNet-50 and WideResNet variants, but it does not provide any specific details about the hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | Specifically, we employ standard stochastic gradient descent algorithm with cosine learning rate decay as the optimizer across all datasets, with an initial learning rate of 0.03 and a momentum of 0.9. For all experiments, we set the total number of iterations to 2^20. ... for ImageNet, we maintain a batch size of 128 for both labeled and unlabeled samples, i.e., n_b^l = n_b^u = 128 ... For other datasets, we adjust the batch sizes to n_b^l = 64 and n_b^u = 448... Subsequently, we adjust the parameter λ that regulates the entropy bounds to 0.002. |
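The quoted setup (SGD, momentum 0.9, initial learning rate 0.03, cosine decay over 2^20 iterations) can be sketched in plain Python. This is a minimal illustration, not the authors' code: the exact cosine schedule is an assumption, since the paper excerpt does not give its closed form (FixMatch-style training often uses `base * cos(7π·step / (16·total))` instead of the standard half-cosine used here).

```python
import math

# Hyperparameters quoted from the Experiment Setup row above.
INIT_LR = 0.03          # initial learning rate
MOMENTUM = 0.9          # SGD momentum (would be passed to the optimizer)
TOTAL_ITERS = 2 ** 20   # total number of training iterations

def cosine_lr(step: int, total: int = TOTAL_ITERS, base: float = INIT_LR) -> float:
    """Standard cosine decay from `base` down to 0 over `total` steps.

    Hypothetical form -- the paper only says "cosine learning rate decay",
    so the precise variant may differ.
    """
    return 0.5 * base * (1.0 + math.cos(math.pi * step / total))

print(cosine_lr(0))                # → 0.03 (full rate at the start)
print(cosine_lr(TOTAL_ITERS // 2)) # → 0.015 (half the rate at the midpoint)
print(cosine_lr(TOTAL_ITERS))      # → 0.0 (fully decayed at the end)
```

In a PyTorch-style training loop this function would set the optimizer's learning rate once per iteration; the labeled/unlabeled batch sizes (n_b^l = 64, n_b^u = 448 for the smaller datasets) would govern how each batch is assembled.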