Learning from Crowds by Modeling Common Confusions
Authors: Zhendong Chu, Jing Ma, Hongning Wang (pp. 5832-5840)
AAAI 2021 | Conference PDF | Archive PDF
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthesized and real-world benchmarks demonstrate the effectiveness of our proposed common noise adaptation solution. |
| Researcher Affiliation | Academia | Zhendong Chu, Jing Ma, Hongning Wang; Department of Computer Science, University of Virginia; {zc9uy, jm3mr, hw5x}@virginia.edu |
| Pseudocode | No | The paper describes its model and learning framework but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | The CIFAR-10 dataset is generated based on the CIFAR-10 image classification dataset (Krizhevsky, Hinton et al. 2009). [...] LabelMe (Rodrigues and Pereira 2018; Russell et al. 2008) is an image classification dataset [...] Music (Rodrigues, Pereira, and Ribeiro 2014) is a music genre classification dataset |
| Dataset Splits | Yes | On the Synthetic dataset, we completely synthesized everything. [...] an 8,000-instance training set, a 1,000-instance validation set and a 1,000-instance testing set. The CIFAR-10 dataset [...] split into a 40,000-instance training set, a 10,000-instance validation set and a 10,000-instance testing set. |
| Hardware Specification | Yes | We implement our framework with PyTorch, and run it on a CentOS system with one NVIDIA 2080Ti GPU with 10 GB memory. |
| Software Dependencies | No | The paper mentions "PyTorch" and "Adam optimizer (Kingma and Ba 2014)" but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes (see the sketch below) | We trained the network using the Adam optimizer (Kingma and Ba 2014) with default parameters and learning rate searched from {0.02, 0.01, 0.005}. The dimension of annotator and instance embedding is chosen from {20, 40, 60, 80}. The regularization term λ is searched from {10^-4, 10^-5, 10^-6}. |
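The Experiment Setup row quotes the hyperparameter grids, but no training code is released (see the Open Source Code row). Below is a minimal sketch of how such a grid search could be wired up in PyTorch; the `train_and_validate` helper, the placeholder model, and the selection logic are hypothetical and are not the authors' implementation.

```python
# Hypothetical grid-search sketch over the hyperparameters quoted in the
# Experiment Setup row. Only the grids and the use of Adam with default
# parameters come from the paper; everything else is a placeholder.
import itertools
import torch

learning_rates = [0.02, 0.01, 0.005]   # learning rate grid from the paper
embedding_dims = [20, 40, 60, 80]      # annotator/instance embedding sizes
lambdas = [1e-4, 1e-5, 1e-6]           # regularization weight grid

def train_and_validate(lr, emb_dim, lam):
    """Hypothetical stand-in for one training run of the crowd-labeling model;
    returns a validation score. The real model and loss are not shown in the paper."""
    model = torch.nn.Linear(emb_dim, 10)                      # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # Adam with default betas/eps
    # ... train on the training split, apply the lam-weighted regularizer,
    #     and evaluate on the validation split here ...
    return 0.0  # placeholder validation accuracy

best_score, best_cfg = float("-inf"), None
for lr, emb_dim, lam in itertools.product(learning_rates, embedding_dims, lambdas):
    score = train_and_validate(lr, emb_dim, lam)
    if score > best_score:
        best_score, best_cfg = score, {"lr": lr, "emb_dim": emb_dim, "lambda": lam}

print("Best configuration on the validation set:", best_cfg)
```

Under this reading, selection over the grid would be performed on the validation splits quoted in the Dataset Splits row, with the held-out test split reserved for the final reported numbers.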