Towards Robust Multi-Label Learning against Dirty Label Noise

Authors: Yuhai Zhao, Yejiang Wang, Zhengkui Wang, Wen Shan, Miaomiao Huang, Meixia Wang, Min Huang, Xingwei Wang

IJCAI 2024

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Experimental results show that the proposed method outperforms the state-of-the-art methods significantly."

Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, Northeastern University, China; 2 Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, China; 3 InfoComm Technology Cluster, Singapore Institute of Technology, Singapore; 4 Singapore University of Social Sciences, Singapore; 5 College of Information Science and Engineering, Northeastern University, China

Pseudocode | Yes | "We summarize the key steps in Algorithm 1." (Algorithm 1: NMLD)

Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.

Open Datasets | Yes | "To evaluate NMLD's performance, we conducted experiments on ten multi-label datasets: slashdot, medical, enron, scene, yeast, 20ng, corel5k, mirflickr, eurlex-dc, and m-emotion [Pestian et al., 2007; Tidake and Sane, 2018]. ... EURLex-4K, which contains 15,539 instances, and Mediamill, containing 43,970 instances [Bhatia et al., 2016]."

Dataset Splits | No | The paper does not explicitly describe training/validation/test splits or mention a specific validation set.

Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU or CPU models.

Software Dependencies | No | The paper does not provide version numbers for the software dependencies used in the experiments.

Experiment Setup | Yes | "In our work, the dirty, excess and missing rates are set to 50%, 20% and 50%, respectively. ... We set the corresponding parameters ηi = 0 for the ablation experiments respectively."
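The dirty/excess/missing rates quoted above describe how label noise is injected into the training labels. The paper's exact corruption procedure is not reproduced here; the following is a minimal sketch of one plausible reading, in which a "dirty" fraction of instances is selected, and within those instances relevant labels are dropped at the missing rate while irrelevant labels are added at the excess rate. All function and parameter names are illustrative, not from the paper.

```python
import numpy as np


def inject_label_noise(Y, dirty_rate=0.5, excess_rate=0.2,
                       missing_rate=0.5, seed=0):
    """Corrupt a binary multi-label matrix Y (n_samples x n_labels).

    Hypothetical noise model: a dirty_rate fraction of instances is
    chosen; in each chosen instance, every relevant label (1) is
    dropped with probability missing_rate, and every irrelevant
    label (0) is added with probability excess_rate.
    """
    rng = np.random.default_rng(seed)
    Y_noisy = Y.copy()
    n_samples, n_labels = Y.shape
    # Pick the subset of instances whose label vectors get corrupted.
    dirty_idx = rng.choice(n_samples, int(dirty_rate * n_samples),
                           replace=False)
    for i in dirty_idx:
        for j in range(n_labels):
            if Y[i, j] == 1 and rng.random() < missing_rate:
                Y_noisy[i, j] = 0  # missing noise: drop a relevant label
            elif Y[i, j] == 0 and rng.random() < excess_rate:
                Y_noisy[i, j] = 1  # excess noise: add an irrelevant label
    return Y_noisy
```

With the rates from the paper (50% dirty, 20% excess, 50% missing), roughly half the training instances end up with corrupted label vectors, which is the regime NMLD is evaluated in.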