Understanding Partial Multi-Label Learning via Mutual Information

Authors: Xiuwen Gong, Dong Yuan, Wei Bao

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on synthetic and real-world datasets clearly demonstrate the superiority of the proposed MILI-PML.
Researcher Affiliation | Collaboration | Xiuwen Gong, Dong Yuan, Wei Bao; Faculty of Engineering, The University of Sydney; Hunan Huishiwei Intelligent Technology Co., Ltd.; {xiuwen.gong, dong.yuan, wei.bao}@sydney.edu.au
Pseudocode | Yes | Algorithm 1: Alternating Optimization (MILI Algorithm). Goal: solve the optimization problem of Eq. (16), i.e., identify the ground-truth label set v while training the predictive model parameter w. Input: training data S = {(x_i, y_i) : i = 1, ..., n}; Output: the predictive model parameter w. [...] Algorithm 2: Fix w, update v. Goal: solve the optimization problem of Eq. (19). Input: training data S = {(x_i, y_i) : i = 1, ..., n}; Output: the ground-truth label set v_i for each instance i ∈ {1, ..., n}. (A minimal sketch of this alternating scheme appears after the table.)
Open Source Code | No | The paper does not provide any specific link or explicit statement about the availability of its source code.
Open Datasets | Yes | Experiments are conducted on six synthetic PML datasets and four real-world PML datasets (i.e., Yeast BP [16], Music-emotion [19], Music-style [19], and MIRFlickr [19, 14]) of different scales, the characteristics of which are summarized in the supplementary materials.
Dataset Splits | Yes | On each dataset, five-fold cross-validation is performed, and the mean metric value and standard deviation are recorded for each comparing method. (A generic sketch of this protocol appears after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments.
Software Dependencies | No | For MILI-PML, the simple Binary Relevance [20, 21] is employed as the predictive classifier Q in the MILI-PML model, and LIBLINEAR [22] with the L2-regularized squared hinge loss is used to train the binary classifiers in Binary Relevance (BR). While software names are mentioned, no version numbers are provided for reproducibility. (A sketch of this classifier setup appears after the table.)
Experiment Setup | No | "For all PML baselines, we set the trade-off parameters as suggested in the original papers. Details can be found in the supplementary materials." This defers the baselines' setup details to other papers; for MILI-PML itself, only the stopping-criterion parameter (δ = 10^-5) is stated explicitly, and comprehensive hyperparameter and training settings are missing. (A sketch of the stopping criterion appears after the table.)
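
The following is a minimal Python sketch of the alternating scheme described in the Pseudocode row, not the authors' implementation: `update_w` and `update_v` are hypothetical placeholders for the two subproblems (model fitting with v fixed, and the Eq. (19) update of Algorithm 2 with w fixed), and the exact objective of Eq. (16) is only in the paper.

```python
import numpy as np

def mili_alternating_optimization(X, Y_candidate, update_w, update_v,
                                  delta=1e-5, max_iters=100):
    """Sketch of Algorithm 1: alternate between the two subproblems until
    the model parameter w stops changing.

    X           : (n, d) feature matrix.
    Y_candidate : (n, q) binary candidate-label matrix (1 = candidate label).
    update_w    : hypothetical solver for the w-subproblem (v fixed).
    update_v    : hypothetical solver for the Eq. (19) subproblem (w fixed).
    delta       : stopping threshold; the paper reports delta = 1e-5.
    """
    v = Y_candidate.astype(float).copy()  # start from the full candidate sets
    w = update_w(X, v)                    # initial model fit
    for _ in range(max_iters):
        v = update_v(X, Y_candidate, w)   # Algorithm 2: fix w, update v
        w_new = update_w(X, v)            # fix v, retrain the predictive model
        if np.linalg.norm(w_new - w) < delta:  # converged
            return w_new, v
        w = w_new
    return w, v
```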
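The Dataset Splits protocol is standard five-fold cross-validation with the mean and standard deviation of each metric reported. A generic sketch, assuming scikit-learn and placeholder `fit_fn`/`metric_fn` callables for the method and metric under evaluation:

```python
import numpy as np
from sklearn.model_selection import KFold

def five_fold_eval(X, Y, fit_fn, metric_fn, seed=0):
    """Five-fold cross-validation reporting mean and standard deviation.

    fit_fn(X_tr, Y_tr) returns a fitted model with a .predict method;
    metric_fn(Y_true, Y_pred) returns a scalar score. Both are placeholders
    for the comparing method and evaluation metric.
    """
    scores = []
    for tr_idx, te_idx in KFold(n_splits=5, shuffle=True,
                                random_state=seed).split(X):
        model = fit_fn(X[tr_idx], Y[tr_idx])
        scores.append(metric_fn(Y[te_idx], model.predict(X[te_idx])))
    return float(np.mean(scores)), float(np.std(scores))
```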
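For the Software Dependencies row, one plausible way to reproduce the Binary Relevance + LIBLINEAR pairing is scikit-learn's LinearSVC (a LIBLINEAR wrapper) inside a one-vs-rest decomposition; this tooling choice is an assumption, and the regularization strength C is a guess, since the paper defers parameter details to the supplementary materials.

```python
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Binary Relevance: one independent binary classifier per label.
# LinearSVC wraps LIBLINEAR; penalty="l2" with loss="squared_hinge"
# matches the L2-regularized squared hinge loss quoted above.
base = LinearSVC(penalty="l2", loss="squared_hinge", C=1.0)  # C is an assumption
br = OneVsRestClassifier(base)

# X: (n_samples, n_features); Y: (n_samples, n_labels) binary indicator matrix.
# br.fit(X_train, Y_train)
# Y_pred = br.predict(X_test)
```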
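Finally, the only explicit MILI-PML training detail is the stopping threshold δ = 10^-5. The paper does not state whether the criterion measures an absolute or relative change, or on which quantity; the sketch below assumes an absolute change in the objective value.

```python
def converged(obj_prev, obj_curr, delta=1e-5):
    # Stopping-criterion sketch: stop when the change in the monitored
    # quantity drops below delta = 1e-5. Absolute change in the objective
    # is an assumption; the paper only states the threshold value.
    return abs(obj_curr - obj_prev) < delta
```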