USDNL: Uncertainty-Based Single Dropout in Noisy Label Learning
Authors: Yuanzhuo Xu, Xiaoguang Niu, Jie Yang, Steve Drew, Jiayu Zhou, Ruizhi Chen
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical results on both synthetic and real-world datasets show that USDNL outperforms other methods. Our code is available at https://github.com/kovelxyz/USDNL. |
| Researcher Affiliation | Academia | Yuanzhuo Xu (1), Xiaoguang Niu (1,4)*, Jie Yang (1), Steve Drew (2), Jiayu Zhou (3), Ruizhi Chen (4); 1: School of Computer Science, Wuhan University, China; 2: Department of Electrical and Software Engineering, University of Calgary, Canada; 3: Department of Computer Science and Engineering, Michigan State University, USA; 4: LIESMARS, Wuhan University, China |
| Pseudocode | Yes | Algorithm 1: The training pipeline of USDNL (see the training-loop sketch after the table) |
| Open Source Code | Yes | Our code is available at https://github.com/kovelxyz/USDNL. |
| Open Datasets | Yes | We verify the effectiveness of USDNL on three manually corrupted datasets, i.e., MNIST (LeCun 1998), CIFAR-10, and CIFAR-100 (Krizhevsky 2009) with artificial corruption, along with a real-world noisy dataset, Clothing1M (Xiao et al. 2015). |
| Dataset Splits | No | The paper does not explicitly provide specific train/validation/test dataset splits (e.g., percentages or sample counts). It only mentions shuffling the training set. |
| Hardware Specification | Yes | To compare algorithm complexity, we run each algorithm for 5 epochs on an RTX 2080 Ti and report the mean and standard deviation of a single epoch's runtime. |
| Software Dependencies | No | The paper mentions specific network architectures (LeNet, 9-layer CNN, ResNet-18) and a unified dropout rate of 0.25, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | Network setting: For a fair comparison, different schemes use the same network structure on the same dataset. Specifically, we use LeNet for MNIST, a 9-layer CNN for CIFAR-10 and CIFAR-100, and a non-pretrained ResNet-18 for Clothing1M. USDNL adds several dropout layers to the standard network. Rather than tuning this parameter per dataset as other methods do, we use a unified dropout rate of 0.25 (see the model-construction sketch after the table). |
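The Pseudocode row points to Algorithm 1, the training pipeline of USDNL, but the algorithm itself is not reproduced in this report. The following is a minimal PyTorch sketch of one plausible reading: a small-loss selection epoch in which the per-sample loss comes from a single dropout forward pass and only the lowest-loss fraction of each batch drives the update. The function name `usdnl_style_epoch` and the `keep_rate` parameter are hypothetical, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def usdnl_style_epoch(model, loader, optimizer, keep_rate, device="cuda"):
    """One training epoch with small-loss sample selection.

    A sketch under stated assumptions, not the authors' implementation:
    it assumes USDNL-style selection keeps the `keep_rate` fraction of
    samples per batch whose loss, computed through a single dropout
    forward pass, is smallest (i.e., the presumed-clean samples).
    """
    model.train()  # train mode keeps the dropout layers active
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)

        logits = model(images)  # single stochastic (dropout) forward pass
        per_sample_loss = F.cross_entropy(logits, labels, reduction="none")

        # Keep the fraction of the batch with the smallest loss.
        num_keep = max(1, int(keep_rate * labels.size(0)))
        keep_idx = torch.argsort(per_sample_loss)[:num_keep]

        optimizer.zero_grad()
        per_sample_loss[keep_idx].mean().backward()
        optimizer.step()
```

In small-loss selection schemes of this family, `keep_rate` typically starts near 1 and decays toward one minus the estimated noise rate over the first epochs, so the network warms up on all data before filtering; whether USDNL uses exactly this schedule is not stated in the excerpts above.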
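The Experiment Setup row says USDNL adds several dropout layers at a unified rate of 0.25, but the excerpt does not say where those layers sit in each backbone. The sketch below shows one plausible construction for the Clothing1M model: a non-pretrained ResNet-18 with a single `nn.Dropout(p=0.25)` inserted before the classifier. The placement and the helper name `build_clothing1m_model` are assumptions for illustration.

```python
import torch.nn as nn
from torchvision.models import resnet18

def build_clothing1m_model(num_classes=14, dropout_rate=0.25):
    """Non-pretrained ResNet-18 with dropout before the classifier.

    A sketch only: the paper reports a unified dropout rate of 0.25 and a
    non-pretrained ResNet-18 for Clothing1M (14 classes), but inserting a
    single dropout layer ahead of the final linear layer is an assumed
    placement, not one confirmed by the excerpts in this report.
    """
    model = resnet18(weights=None)  # non-pretrained, per the paper's setup
    model.fc = nn.Sequential(
        nn.Dropout(p=dropout_rate),
        nn.Linear(model.fc.in_features, num_classes),
    )
    return model
```

Placing dropout only near the classifier keeps the convolutional features deterministic while still making the per-sample loss stochastic enough for uncertainty-based selection, and a rate shared across datasets avoids the per-dataset tuning the paper says other methods require.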