Unbiased Multi-Label Learning from Crowdsourced Annotations

Authors: Mingxuan Xia, Zenan Huang, Runze Wu, Gengyu Lyu, Junbo Zhao, Gang Chen, Haobo Wang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various CMLL scenarios demonstrate the effectiveness of our proposed method.
Researcher Affiliation | Collaboration | 1) School of Software Technology, Zhejiang University, Ningbo, China; 2) State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China; 3) Fuxi AI Lab, NetEase Inc., Hangzhou, China; 4) Faculty of Information Technology, Beijing University of Technology, Beijing, China.
Pseudocode | Yes | Algorithm 1: Pseudo-code of CLEAR. Algorithm 2: Pseudo-code of Transition Matrix Estimation.
Open Source Code | Yes | The source code is available at https://github.com/MingxuanXia/CLEAR.
Open Datasets | Yes | We conduct our experiments on five benchmark multi-label image datasets (http://mulan.sourceforge.net/datasets-mlc.html), including Image, Scene, Corel5K, Mirflickr, and NUS-WIDE. For these datasets, we corrupt the training sets according to true transition matrices $\{T^{mk}\}_{m=1,k=1}^{M,K}$. (A hedged corruption sketch is given below the table.)
Dataset Splits | Yes | For all the experiments, we perform ten-fold cross-validation and report the mean as well as the standard deviation for metric values. (A minimal cross-validation skeleton follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or memory specifications.
Software Dependencies | No | The paper mentions using the Adam optimizer and BCE loss and trains models with neural networks, implying a deep learning framework such as PyTorch. However, it does not specify version numbers for any software libraries, programming languages (e.g., Python), or frameworks.
Experiment Setup | Yes | The encoder and decoder for CLEAR are parameterized as three fully connected layer neural networks with hidden sizes 512 and 256. ... we train the models with Adam optimizer ... with a learning rate of 7.5 × 10⁻⁴ and a weight decay of 1e−5. ... the confident-sample number C and the momentum parameter η are fixed as 20 and 0.9 for all settings. The Gaussian subspace dimensionality d is set to 100 for Corel5K and NUS-WIDE, and 50 otherwise. The trade-off parameters α and β are set to 1.0 and 1.1 by default. (A hedged configuration sketch appears after the table.)
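
For the Open Datasets row above, the following is a minimal sketch of how binary labels could be corrupted with per-annotator, per-class transition matrices. The function name `corrupt_labels`, the (M, K, 2, 2) shape convention, and the sampling scheme are illustrative assumptions, not the paper's released corruption code.

```python
import numpy as np

def corrupt_labels(Y, T, rng=None):
    """Corrupt clean multi-label annotations with per-annotator, per-class
    transition matrices (an assumed interface, for illustration only).

    Y : (n_samples, K) binary ground-truth label matrix with 0/1 entries.
    T : (M, K, 2, 2) array; T[m, k, i, j] is the probability that annotator m
        reports label j for class k when the true label is i.
    Returns an (M, n_samples, K) array of crowdsourced annotations.
    """
    rng = np.random.default_rng(rng)
    M, K = T.shape[0], T.shape[1]
    n = Y.shape[0]
    Y_crowd = np.zeros((M, n, K), dtype=int)
    for m in range(M):
        for k in range(K):
            true_k = Y[:, k].astype(int)
            # Probability that annotator m reports 1 for class k, per sample.
            p_one = T[m, k, true_k, 1]
            Y_crowd[m, :, k] = (rng.random(n) < p_one).astype(int)
    return Y_crowd
```

For instance, `T[m, k] = [[0.9, 0.1], [0.2, 0.8]]` would keep true negatives with probability 0.9 and true positives with probability 0.8 for annotator m on class k.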
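The Dataset Splits row reports ten-fold cross-validation with mean and standard deviation of the metrics. A generic skeleton along those lines, using scikit-learn's `KFold` and a hypothetical `train_and_eval` callback, might look like this:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, train_and_eval, n_splits=10, seed=0):
    """Run n_splits-fold cross-validation and return mean and std of a metric.

    train_and_eval(train_idx, test_idx) is any callable that trains a model on
    the training fold and returns a scalar metric on the test fold.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = [train_and_eval(tr, te) for tr, te in kf.split(X)]
    return float(np.mean(scores)), float(np.std(scores))
```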
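Finally, the hyperparameters quoted in the Experiment Setup row could be wired up roughly as follows in PyTorch. The MLP layout, activation choice, and the way the encoder and decoder connect through the d-dimensional subspace are assumptions for illustration; only the layer sizes, optimizer settings, and the use of BCE loss come from the quoted text.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Three fully connected layers with hidden sizes 512 and 256, as stated
    in the Experiment Setup row; the ReLU activations are an assumption."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical dimensions, for illustration only.
num_features, num_classes, d = 500, 20, 50

encoder = MLP(num_features, d)   # maps inputs into the d-dim Gaussian subspace
decoder = MLP(d, num_classes)    # maps latent codes back to label logits

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()),
    lr=7.5e-4,           # learning rate quoted from the Experiment Setup row
    weight_decay=1e-5,   # weight decay quoted from the Experiment Setup row
)
criterion = nn.BCEWithLogitsLoss()  # the paper mentions BCE loss
```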