Safeguarded Dynamic Label Regression for Noisy Supervision
Authors: Jiangchao Yao, Hao Wu, Ya Zhang, Ivor W. Tsang, Jun Sun
AAAI 2019, pp. 9103-9110
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been conducted on controllable noise data with the CIFAR-10 and CIFAR-100 datasets, and on agnostic noise data with the Clothing1M and WebVision datasets. Experimental results demonstrate that the proposed model outperforms several state-of-the-art methods. |
| Researcher Affiliation | Academia | Cooperative Medianet Innovation Center, Shanghai Jiao Tong University; Center for Artificial Intelligence, University of Technology Sydney; {sunarker, howiethepeanut, ya_zhang, junsun}@sjtu.edu.cn; ivor.tsang@uts.edu.au |
| Pseudocode | Yes | Algorithm 1 Dynamic Label Regression for LCCN (a hedged sketch of its core step is given after the table) |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide links to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on the CIFAR-10, CIFAR-100, Clothing1M and WebVision datasets. CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton 2009)... Clothing1M (Xiao et al. 2015) dataset... WebVision (Li et al. 2017a) |
| Dataset Splits | Yes | Clothing1M (Xiao et al. 2015) has 1 million images of clothes collected from shopping websites. It has 14 predefined classes, and the labels are roughly specified from the surrounding text provided by the sellers, so they are very noisy. According to (Xiao et al. 2015), about 61.54% of the labels are reliable. In addition, the dataset has 50k, 14k and 10k clean images for training, validation and test, respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions optimizers (SGD) and model architectures (ResNet) but does not provide specific version numbers for any software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | For CIFAR-10 and CIFAR-100, PreAct ResNet-32 (He et al. 2016) is adopted as the classifier. The image data is augmented by horizontal random flips and 32×32 random crops after padding with 4 pixels, and per-image standardization is used to normalize pixel values. For the optimizer, we deploy SGD with a momentum of 0.9 and a weight decay of 0.0005. The batch size is set to 128. Training runs for 120 epochs in total, split into three phases at epochs 40 and 80 with learning rates of 0.5, 0.1 and 0.01, respectively. (A training-configuration sketch follows the table.) |
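The pseudocode row above names Algorithm 1 but the report does not reproduce it. Below is a minimal, hedged sketch of what one dynamic label regression step for LCCN plausibly looks like, assuming a Dirichlet prior over the class-conditional noise transition matrix and Gibbs sampling of the latent true labels; `counts`, `alpha`, `transition_matrix`, and `dynamic_label_regression_step` are illustrative names rather than the paper's notation, and this is a sketch, not the authors' implementation.

```python
# Hedged sketch of one dynamic label regression step for LCCN, assuming a
# Dirichlet prior over the noise transition matrix (not the authors' code).
import torch
import torch.nn.functional as F

K = 10                          # number of classes (e.g. CIFAR-10)
alpha = 1.0                     # symmetric Dirichlet prior strength (assumed)
counts = torch.zeros(K, K)      # counts[z, y]: latent label z seen with noisy label y

def transition_matrix():
    # Posterior mean of the class-conditional noise transition:
    # phi[z, y] = p(noisy label y | latent true label z); rows sum to 1.
    post_counts = counts + alpha
    return post_counts / post_counts.sum(dim=1, keepdim=True)

def dynamic_label_regression_step(model, optimizer, x, noisy_y):
    phi = transition_matrix()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)         # p(z | x) from the classifier
        post = probs * phi[:, noisy_y].T           # p(z | x, noisy_y), unnormalized
        post = post / post.sum(dim=1, keepdim=True)
        z = torch.multinomial(post, 1).squeeze(1)  # Gibbs-sample latent true labels
    for zi, yi in zip(z.tolist(), noisy_y.tolist()):
        counts[zi, yi] += 1                        # update transition statistics
    loss = F.cross_entropy(model(x), z)            # train on the sampled labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Sampling the latent label instead of back-propagating through an explicit transition layer is what keeps the transition estimate anchored to observed (sampled label, noisy label) counts rather than to unconstrained gradient updates.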
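The reported CIFAR setup translates almost directly into a training configuration. The sketch below assumes PyTorch and torchvision; `resnet18(num_classes=10)` is a stand-in for the paper's PreAct ResNet-32, which torchvision does not ship, and the per-image standardization lambda approximates the normalization described in the row above.

```python
# Minimal sketch of the reported CIFAR training setup, assuming PyTorch and
# torchvision; resnet18 is a stand-in for the paper's PreAct ResNet-32.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from torchvision.models import resnet18

transform = T.Compose([
    T.RandomHorizontalFlip(),        # horizontal random flip
    T.RandomCrop(32, padding=4),     # 32x32 random crop after 4-pixel padding
    T.ToTensor(),
    # Per-image standardization: normalize each image by its own mean/std.
    T.Lambda(lambda t: (t - t.mean()) / t.std().clamp_min(1e-6)),
])

train_loader = DataLoader(
    CIFAR10("./data", train=True, download=True, transform=transform),
    batch_size=128, shuffle=True)

model = resnet18(num_classes=10)     # stand-in architecture (assumption)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5,
                            momentum=0.9, weight_decay=0.0005)

def lr_for_epoch(epoch):
    # Three phases at epochs 40 and 80 with rates 0.5, 0.1, 0.01.
    return 0.5 if epoch < 40 else (0.1 if epoch < 80 else 0.01)

for epoch in range(120):
    for group in optimizer.param_groups:
        group["lr"] = lr_for_epoch(epoch)
    for x, y in train_loader:
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The loop above trains against the given labels for brevity; in the paper's method the cross-entropy target would be the sampled latent label from the dynamic label regression step sketched earlier.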