Semi-Supervised Multi-Label Learning with Incomplete Labels
Authors: Feipeng Zhao, Yuhong Guo
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The efficacy of the proposed approach is demonstrated on multiple multi-label datasets, comparing to related methods that handle incomplete labels. |
| Researcher Affiliation | Academia | Feipeng Zhao and Yuhong Guo, Department of Computer and Information Sciences, Temple University, Philadelphia, PA 19122, USA; {feipeng.zhao, yuhong}@temple.edu |
| Pseudocode | Yes | Algorithm 1: Fast Proximal Gradient with Continuation (a hedged sketch of this procedure follows the table). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the proposed methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conducted experiments with six multi-label datasets: corel5k, msrc, mirflickr, mediamill, tmc2007 and yeast. Msrc is a Microsoft Research labeled image dataset... Corel5k [Duygulu et al., 2002]... Mirflickr [Huiskes and Lew, 2008]... Mediamill [Snoek et al., 2006]... Tmc2007 dataset [Srivastava and Zane-Ulman, 2002]... Yeast dataset [Elisseeff and Weston, 2002]... |
| Dataset Splits | Yes | For msrc, we randomly selected 80% of the data for training (30% labeled and 50% unlabeled) and used the remaining 20% for testing. For all the other five datasets, we randomly selected 500 instances as labeled data and 1000 instances as unlabeled data, and used the remaining data for testing. For all the methods, we conducted parameter selection by performing 5-fold cross-validation on the training set. (A split/cross-validation sketch follows the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions using 'libsvm' for baseline implementations but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | For our proposed approach, we selected the trade-off parameters γA and γI from {10^-7, 10^-6, 10^-5, 10^-4, 10^-3}, and selected µ from {10^-5, 10^-4, 10^-3, 10^-2, 10^-1}. ... We used RBF kernels as the input kernel k(·, ·), and set γO = 0.5 to compute the matrix-valued kernel. We used a 5-nearest-neighbor graph to construct the Laplacian matrix on the input data, and set the number of nearest neighbors approximately as 30% of the label dimension size to construct the Laplacian matrix on the output label matrix. (Sketches of the kernel and graph-Laplacian construction follow the table.) |
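
The pseudocode row refers to Algorithm 1, Fast Proximal Gradient with Continuation, which the paper gives only as pseudocode. Below is a minimal NumPy sketch of that general technique, assuming a composite objective made of a smooth loss plus a trace-norm term, minimized by accelerated proximal gradient (FISTA) while a continuation loop shrinks the trace-norm weight toward its target value; the loss, step size, and continuation schedule here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_* (trace norm)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def fast_proximal_gradient_continuation(grad_f, Z0, mu_target, lip,
                                        mu0=None, eta=0.7, inner_iters=100):
    """Accelerated proximal gradient (FISTA) with continuation on the trace-norm
    weight mu. `grad_f` is the gradient of the smooth loss and `lip` an estimate
    of its Lipschitz constant. All settings are illustrative assumptions."""
    Z = Z0.copy()
    mu = mu0 if mu0 is not None else 10.0 * mu_target
    step = 1.0 / lip
    while True:
        # Inner FISTA loop for the current continuation level mu
        Y, t = Z.copy(), 1.0
        for _ in range(inner_iters):
            Z_next = svt(Y - step * grad_f(Y), mu * step)
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            Y = Z_next + ((t - 1.0) / t_next) * (Z_next - Z)
            Z, t = Z_next, t_next
        if mu <= mu_target:
            break
        mu = max(eta * mu, mu_target)  # continuation: shrink mu toward the target
    return Z
```

For instance, with a squared loss 0.5·||Ω ⊙ (Z − Y_obs)||_F² against an observed incomplete label matrix Y_obs masked by an indicator matrix Ω, one could pass `grad_f = lambda Z: Omega * (Z - Y_obs)` and `lip = 1.0`; these names are hypothetical placeholders rather than symbols from the paper.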
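The dataset-splits protocol quoted above (500 labeled and 1000 unlabeled training instances, the remainder held out for testing, and parameter selection by 5-fold cross-validation on the training set) can be expressed with generic tooling. The sketch below uses NumPy and scikit-learn; the `fit_and_score` callback and the parameter grid are hypothetical placeholders, not part of the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

def split_labeled_unlabeled_test(n, n_labeled=500, n_unlabeled=1000):
    """Random labeled/unlabeled/test index split, as described for
    corel5k, mirflickr, mediamill, tmc2007 and yeast."""
    perm = rng.permutation(n)
    labeled = perm[:n_labeled]
    unlabeled = perm[n_labeled:n_labeled + n_unlabeled]
    test = perm[n_labeled + n_unlabeled:]
    return labeled, unlabeled, test

def select_params(train_X, train_Y, candidate_params, fit_and_score):
    """5-fold cross-validation over a parameter grid on the training set.
    `fit_and_score(params, X_tr, Y_tr, X_va, Y_va)` is a hypothetical
    callback that trains the model and returns a validation score."""
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    best_params, best_score = None, -np.inf
    for params in candidate_params:
        scores = []
        for tr_idx, va_idx in kf.split(train_X):
            scores.append(fit_and_score(params,
                                        train_X[tr_idx], train_Y[tr_idx],
                                        train_X[va_idx], train_Y[va_idx]))
        if np.mean(scores) > best_score:
            best_params, best_score = params, np.mean(scores)
    return best_params
```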
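The experiment-setup row mentions an RBF input kernel, a 5-nearest-neighbor graph Laplacian on the input data, and a k-nearest-neighbor Laplacian on the output label matrix with k set to roughly 30% of the label dimension. A minimal sketch of these two building blocks follows; the RBF bandwidth, the symmetrization of the k-NN graph, the use of an unnormalized Laplacian, and the reading that the label-side graph is built over label columns are all assumptions not spelled out in the quoted text.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf_kernel(X, gamma=1.0):
    """RBF kernel matrix k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2).
    The bandwidth gamma is an illustrative default, not a value from the paper."""
    sq_dists = cdist(X, X, metric="sqeuclidean")
    return np.exp(-gamma * sq_dists)

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian L = D - W for a symmetrized k-NN graph."""
    dists = cdist(X, X)
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(dists[i])[1:k + 1]  # skip self (distance 0)
        W[i, neighbors] = 1.0
    W = np.maximum(W, W.T)            # symmetrize the adjacency matrix
    return np.diag(W.sum(axis=1)) - W
```

Under this reading, the paper's setting would correspond to something like `knn_laplacian(X, k=5)` on the inputs and `knn_laplacian(Y.T, k=int(round(0.3 * n_labels)))` on the label side, where `Y` is the instances-by-labels matrix; both calls are illustrative rather than taken from the paper.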