Multiple Noisy Label Distribution Propagation for Crowdsourcing
Authors: Hao Zhang, Liangxiao Jiang, Wenqiang Xu
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Promising experimental results on simulated and real-world datasets validate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Department of Computer Science, China University of Geosciences, Wuhan 430074, China; ljiang@cug.edu.cn |
| Pseudocode | No | The information is insufficient. The paper describes the method with equations and text but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The information is insufficient. The paper does not state that the source code for the proposed MNLDP method is openly available or provide a link to it. |
| Open Datasets | Yes | Six popular benchmark datasets were used by simulating multiple labelers with different levels of expertise; Table 1 provides detailed information on these datasets, which came from the University of California at Irvine (UCI) repository. To further evaluate the performance of MNLDP, experiments were also conducted on three real-world crowdsourced datasets: Leaves, LabelMe, and Music Genre, which were collected from Amazon Mechanical Turk (AMT) and are publicly available. |
| Dataset Splits | No | The information is insufficient. The paper describes the datasets and experimental repetitions but does not specify explicit training, validation, and test splits (e.g., percentages, counts, or cross-validation setup). |
| Hardware Specification | No | The information is insufficient. The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The information is insufficient. The paper mentions using the CEKA platform but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | In MNLDP, we set the number of nearest neighbors k to 5 and the hyper-parameter η to 0.5. |
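
The only setup details the paper reports are the two hyper-parameters above (k = 5 nearest neighbors, η = 0.5). Below is a minimal, hypothetical sketch of how such a kNN-based label-distribution propagation step could be configured with those values; the blending rule and function names are assumptions for illustration, not the authors' exact MNLDP formulation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def propagate_label_distributions(X, noisy_label_counts, k=5, eta=0.5):
    """Hypothetical sketch: smooth each instance's multiple-noisy-label
    distribution with those of its k nearest neighbors.

    X                  : (n, d) feature matrix
    noisy_label_counts : (n, c) counts of crowd labels per class
    k, eta             : the paper-reported settings (k = 5, eta = 0.5);
                         the blending rule below is an assumption.
    """
    # Normalize crowd-label counts into per-instance label distributions.
    dist = noisy_label_counts / noisy_label_counts.sum(axis=1, keepdims=True)

    # Find the k nearest neighbors of each instance (excluding itself).
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)

    # Blend each distribution with the mean of its neighbors' distributions.
    neighbor_mean = dist[idx[:, 1:]].mean(axis=1)
    propagated = eta * dist + (1.0 - eta) * neighbor_mean

    # Renormalize and also return the aggregated (hard) labels.
    propagated /= propagated.sum(axis=1, keepdims=True)
    return propagated, propagated.argmax(axis=1)
```

Since the paper reports no explicit train/validation/test splits, hardware, or software versions, anything beyond these two hyper-parameters would have to be reconstructed by a reproducer.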