Class-Independent Regularization for Learning with Noisy Labels
Authors: Rumeng Yi, Dayan Guan, Yaping Huang, Shijian Lu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that CIR achieves superior performance consistently across multiple benchmarks with both synthetic and real images. |
| Researcher Affiliation | Academia | Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing Jiaotong University, China; Mohamed bin Zayed University of Artificial Intelligence, UAE; School of Computer Science and Engineering, Nanyang Technological University, Singapore |
| Pseudocode | No | The paper describes the proposed methods using text and equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/RumengYi/CIR. |
| Open Datasets | Yes | We extensively evaluate our approach on CIFAR-10, CIFAR-100 (Krizhevsky, Hinton et al. 2009), Clothing1M (Xiao et al. 2015) and Food101N (Lee et al. 2018) datasets. |
| Dataset Splits | No | Both CIFAR-10 and CIFAR-100 contain 50K training images and 10K test images of size 32×32, which involve 10 classes and 100 classes, respectively. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions using SGD and specific network architectures (PreAct ResNet, ResNet-50) but does not list specific software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | Yes | For experiments on CIFAR datasets, following previous work (Li, Socher, and Hoi 2020), we use an 18-layer PreAct ResNet architecture (He et al. 2016) and train it using SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 300 epochs. We set the initial learning rate as 0.02, and reduce it by a factor of 100 after 150 epochs. The warm-up epochs are set to 10 for CIFAR-10 and 30 for CIFAR-100. (A sketch of this training configuration follows the table.) |
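The reported CIFAR setup can be mapped onto a short PyTorch training sketch. This is a hedged illustration only: `torchvision.models.resnet18` stands in for the paper's 18-layer PreAct ResNet, plain cross-entropy stands in for the CIR objective, and the warm-up phase is omitted; only the optimizer, batch size, schedule, and epoch count follow the quoted description.

```python
# Minimal sketch of the reported CIFAR-10 training setup (assumes PyTorch + torchvision).
import torch
import torchvision
import torchvision.transforms as T

# Standard CIFAR-10 training split (50K images), batch size 128 as reported.
transform = T.Compose([T.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128,
                                           shuffle=True, num_workers=4)

# Stand-in backbone: the paper uses an 18-layer PreAct ResNet, not torchvision's ResNet-18.
model = torchvision.models.resnet18(num_classes=10)

# SGD with momentum 0.9 and weight decay 0.0005; initial learning rate 0.02.
optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                            momentum=0.9, weight_decay=5e-4)
# Reported schedule: reduce the learning rate by a factor of 100 after epoch 150.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150],
                                                 gamma=0.01)

# 300 epochs total; the CIR loss and warm-up logic would replace the loss below.
for epoch in range(300):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```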