Bridging the Gap: Learning Pace Synchronization for Open-World Semi-Supervised Learning
Authors: Bo Ye, Kai Gan, Tong Wei, Min-Ling Zhang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets demonstrate that previous approaches may significantly hinder novel class learning, whereas our method strikingly balances the learning pace between seen and novel classes, achieving a remarkable 3% average accuracy increase on the ImageNet dataset. |
| Researcher Affiliation | Academia | Bo Ye1,2, Kai Gan1,2, Tong Wei1,2, Min-Ling Zhang1,2 1School of Computer Science and Engineering, Southeast University, Nanjing 211189, China 2Key Lab. of Computer Network and Information Integration (Southeast University), MoE, China {yeb, gank, weit, zhangml}@seu.edu.cn |
| Pseudocode | No | The paper describes the proposed losses and the final objective function mathematically, but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/yebo0216best/LPS-main. |
| Open Datasets | Yes | We evaluate our method on three commonly used datasets, i.e., CIFAR-10, CIFAR-100 [Krizhevsky, 2009], and ImageNet [Russakovsky et al., 2015]. |
| Dataset Splits | No | The paper specifies labeled ratios (10% or 50%) for seen classes but does not explicitly mention the use of a separate validation set or its split percentage/size for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | These experiments are conducted on a single NVIDIA 3090 GPU. |
| Software Dependencies | No | The paper mentions using SimCLR and RandAugment, and models like ResNet-18/50, but does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, CUDA, or specific library versions). |
| Experiment Setup | Yes | For CIFAR-10 and CIFAR-100, we utilize ResNet-18 as our backbone, which is trained by the standard SGD with a momentum of 0.9 and a weight decay of 0.0005. We train the model for 200 epochs with a batch size of 512. For the ImageNet dataset, we opt for ResNet-50 as our backbone. This choice also undergoes training via the standard SGD, featuring a momentum coefficient of 0.9 and a weight decay of 0.0001. The training process spans 90 epochs, with a batch size of 512. The cosine annealing learning rate schedule is adopted on the CIFAR and ImageNet datasets. |
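
For reference, the training configuration quoted in the "Experiment Setup" row maps to the following minimal PyTorch sketch. Only the hyperparameters stated in the paper (SGD with momentum 0.9, weight decay 5e-4 for CIFAR / 1e-4 for ImageNet, 200 / 90 epochs, batch size 512, cosine annealing) are taken from the quote; the initial learning rate, the `num_classes` values, and the training step are assumptions for illustration.

```python
# Hedged sketch of the optimizer/schedule described in the "Experiment Setup" row.
# Hyperparameters (SGD, momentum, weight decay, epochs, batch size, cosine
# annealing) come from the paper; model/data wiring is illustrative only.
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from torchvision.models import resnet18, resnet50

dataset = "cifar"  # or "imagenet"
if dataset == "cifar":
    # CIFAR-10/100 setting: ResNet-18, 200 epochs, weight decay 5e-4
    model, epochs, weight_decay = resnet18(num_classes=10), 200, 5e-4
else:
    # ImageNet setting: ResNet-50, 90 epochs, weight decay 1e-4
    model, epochs, weight_decay = resnet50(num_classes=1000), 90, 1e-4

batch_size = 512   # both settings use a batch size of 512
base_lr = 0.1      # assumption: the paper does not quote the initial learning rate

optimizer = SGD(model.parameters(), lr=base_lr, momentum=0.9,
                weight_decay=weight_decay)
scheduler = CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # train_one_epoch(model, optimizer, ...)  # hypothetical training step
    scheduler.step()
```

This sketch only fixes the optimizer and schedule; the paper's actual losses and data pipeline should be taken from the released code at https://github.com/yebo0216best/LPS-main.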