Coupled-View Deep Classifier Learning from Multiple Noisy Annotators
Authors: Shikun Li, Shiming Ge, Yingying Hua, Chunhui Zhang, Hao Wen, Tengfei Liu, Weiqiang Wang (pp. 4667-4674)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic and real data demonstrate the effectiveness and robustness of the proposed approach. |
| Researcher Affiliation | Collaboration | (1) Institute of Information Engineering, Chinese Academy of Sciences, China; (2) School of Cyber Security, University of Chinese Academy of Sciences, China; (3) Cloud Walk Technology Co., Ltd, China; (4) Ant Financial Services Group |
| Pseudocode | Yes | Algorithm 1 Coupled-view Classifier Learning |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use MNIST and CIFAR10 to generate datasets with noisy labels. MNIST is a handwritten digit dataset, which has a training set of 60K instances and a test set of 10K instances. CIFAR-10 is an image classification dataset that consists of 60K 32×32 colour images in 10 classes, with 6K images per class. There are 50K training images and 10K test images. Like (Yi and Wu 2019; Han et al. 2018), we retain 10% of the training instances for validation, and corrupt these datasets manually by the noise transition matrix Q... (a corruption sketch follows this table) |
| Dataset Splits | Yes | Like (Yi and Wu 2019; Han et al. 2018), we retain 10% of the training instances for validation, and corrupt these datasets manually by the noise transition matrix Q... |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'CNN', but does not provide specific version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | For the noisy MNIST-i (i=1,2,3) datasets, the learning rate is 1e-3, λd = 0 and βd = 1.05. For noisy CIFAR10-i (i=1,2,3), the learning rate decays linearly from 1e-3 to 7e-6, λd = 1e-4 and βd = 1.2. For all datasets, the batch size is 384, βl = 1.1, and ρ decays linearly from 2 to 0.5. We use the Adam optimizer and train for 200 epochs for all models. (a minimal training-setup sketch follows this table) |
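
The label-corruption step quoted in the Open Datasets row (hold out 10% of the training instances for validation, then flip the remaining training labels with a noise transition matrix Q) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `corrupt_labels` and the symmetric 10-class Q with a 0.3 flip rate are assumptions, and the actual matrices behind the MNIST-i and CIFAR10-i (i=1,2,3) variants are not reproduced here.

```python
# Hypothetical sketch of label corruption with a noise transition matrix Q,
# where Q[i, j] = P(noisy label = j | true label = i). Names and the example Q
# are illustrative assumptions, not the paper's settings.
import numpy as np

def corrupt_labels(labels: np.ndarray, Q: np.ndarray, seed: int = 0) -> np.ndarray:
    """Sample a noisy label for each instance from the row of Q indexed by its true label."""
    rng = np.random.default_rng(seed)
    num_classes = Q.shape[0]
    return np.array([rng.choice(num_classes, p=Q[y]) for y in labels])

# Example: 10-class symmetric noise with an assumed 0.3 flip rate.
num_classes, flip_rate = 10, 0.3
Q = np.full((num_classes, num_classes), flip_rate / (num_classes - 1))
np.fill_diagonal(Q, 1.0 - flip_rate)

true_labels = np.random.randint(0, num_classes, size=54_000)  # e.g. 90% of MNIST's 60K training set
noisy_labels = corrupt_labels(true_labels, Q)
```
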
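The Experiment Setup row gives concrete optimization settings. Below is a minimal sketch, assuming PyTorch (the paper does not name a framework), of how the reported schedules could be wired up: Adam, batch size 384, 200 epochs, a fixed 1e-3 learning rate for noisy MNIST-i versus a linear 1e-3 to 7e-6 decay for noisy CIFAR10-i, and ρ annealed linearly from 2 to 0.5. The model is a placeholder, and the coupled-view loss (where λd, βd and βl appear) is not reconstructed here.

```python
# Minimal sketch of the reported training schedule, assuming PyTorch.
# The model and data loop are placeholders; only the hyperparameter
# wiring quoted in the table is shown.
import torch

def linear_schedule(start: float, end: float, step: int, total_steps: int) -> float:
    """Linearly interpolate from `start` to `end` over `total_steps` steps."""
    t = min(step / max(total_steps - 1, 1), 1.0)
    return start + t * (end - start)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))  # placeholder, not the paper's CNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam optimizer, as stated

num_epochs, batch_size = 200, 384
for epoch in range(num_epochs):
    # CIFAR10 setting: lr decays linearly from 1e-3 to 7e-6; MNIST keeps lr = 1e-3.
    lr = linear_schedule(1e-3, 7e-6, epoch, num_epochs)
    # rho is annealed linearly from 2 down to 0.5; it would feed the coupled-view objective.
    rho = linear_schedule(2.0, 0.5, epoch, num_epochs)
    for group in optimizer.param_groups:
        group["lr"] = lr
    # ... iterate over mini-batches of size `batch_size` and update the model ...
```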