Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization
Authors: De Cheng, Yixiong Ning, Nannan Wang, Xinbo Gao, Heng Yang, Yuxuan Du, Bo Han, Tongliang Liu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 3 (Experiments): In this section, we introduce the experiment setup, including datasets, noise types, and implementation details. We compare our proposed method with the state-of-the-art algorithms on two synthetic and two real-world noisy datasets, followed by an ablation study to analyze the experimental results and some useful hyper-parameters. |
| Researcher Affiliation | Collaboration | De Cheng¹, Yixiong Ning¹, Nannan Wang¹, Xinbo Gao², Heng Yang³, Yuxuan Du⁴, Bo Han⁵, Tongliang Liu⁶ (¹Xidian University, ²Chongqing University of Posts and Telecommunications, ³Shenzhen Ai Mall Tech. Co., Ltd., ⁴JD Explore Academy, ⁵Hong Kong Baptist University, ⁶TML Lab, The University of Sydney). |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The reproducibility checklist answers 'Yes' to 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?', but the main text provides no URL or explicit statement that the code for the method is open-sourced. |
| Open Datasets | Yes | Extensive experiments are conducted on two manually corrupted datasets with different noise types (i.e., CIFAR-10 [9], CIFAR-100 [9]) and two real-world noisy datasets (i.e., Clothing1M [34] and Food-101N [10]). |
| Dataset Splits | No | CIFAR-10 and CIFAR-100 each contain 60K images of size 32×32, of which 50K form the training set and 10K the test set. Clothing1M contains 1M training images with about 38.46% noisy labels and 10K clean-labeled test images. Food-101N contains 310K training images with about 19.66% noisy labels and 55K clean-labeled test images. While the paper mentions early stopping (which implies a validation set), it does not explicitly give the split percentages or counts for a validation set. (A data-loading sketch of these splits appears after the table.) |
| Hardware Specification | Yes | For fair comparisons, all our experiments are performed on NVIDIA GeForce RTX 3090 |
| Software Dependencies | No | The paper states 'implemented on the same PyTorch platform', but does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | We train the classification network f(x_i; w) and the transition matrices T and T′ by the SGD strategy, with batch size 128, momentum 0.9, weight decay 10⁻³, and learning rate 10⁻². For CIFAR-10, the algorithm runs for 60 epochs and the learning rate is divided by 10 after the 30th epoch. For CIFAR-100, the algorithm runs for 80 epochs and the learning rate is divided by 10 after the 30th and 60th epochs. (A training-loop sketch of this setup appears after the table.) |
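
For reference, below is a minimal sketch of the CIFAR-10 50K-train / 10K-test split described in the Dataset Splits row, with an illustrative symmetric label-noise injection. The normalization constants, the noise model, and the `inject_symmetric_noise` helper are assumptions for illustration only; they are not taken from the paper or its (unreleased) code.

```python
# Minimal sketch: CIFAR-10 50K-train / 10K-test splits plus an assumed
# symmetric label-noise injection. Not the authors' code; the transform,
# noise model, and helper function are illustrative assumptions.
import numpy as np
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.ToTensor(),
    # Standard CIFAR-10 channel statistics (assumed, not reported in the paper).
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10("./data", train=False, download=True, transform=transform)

def inject_symmetric_noise(labels, noise_rate, num_classes=10, seed=0):
    """Flip a fraction `noise_rate` of labels uniformly to a different class (assumed noise model)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip_idx = rng.choice(len(labels), size=int(noise_rate * len(labels)), replace=False)
    new_labels = rng.integers(0, num_classes - 1, size=len(flip_idx))
    new_labels[new_labels >= labels[flip_idx]] += 1  # skip the original class
    labels[flip_idx] = new_labels
    return labels.tolist()

# Corrupt 20% of the training labels; the test labels stay clean.
train_set.targets = inject_symmetric_noise(train_set.targets, noise_rate=0.2)
```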
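
The next sketch mirrors the optimization setup quoted in the Experiment Setup row (SGD, batch size 128, momentum 0.9, weight decay 10⁻³, initial learning rate 10⁻², step decay at epoch 30 for CIFAR-10). The ResNet-18 backbone and the plain cross-entropy loss are placeholder assumptions; the paper's learned transition matrices T, T′ and its cycle-consistency regularizer are omitted here.

```python
# Minimal sketch of the reported SGD schedule. ResNet-18 and cross-entropy
# are placeholder assumptions; the paper's transition matrices and
# cycle-consistency regularizer are not implemented in this sketch.
import torch
import torchvision

train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True,
                                         transform=torchvision.transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-3)
# CIFAR-10: 60 epochs, lr divided by 10 at epoch 30
# (CIFAR-100 would use milestones=[30, 60] over 80 epochs).
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(60):
    for images, noisy_labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), noisy_labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```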