Multi-Label Manifold Learning

Authors: Peng Hou, Xin Geng, Min-Ling Zhang

AAAI 2016

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "Extensive experiments show that the performance of multi-label learning can be improved significantly with the label manifold."
Researcher Affiliation | Academia | Peng Hou, Xin Geng, Min-Ling Zhang. MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. {hpeng, xgeng, zhangml}@seu.edu.cn
Pseudocode | No | Section 3, "The ML2 Algorithm", describes the method in prose and mathematical equations but does not present a structured pseudocode or algorithm block.
Open Source Code | No | No explicit statement about releasing source code for the methodology, and no repository links, were found in the paper.
Open Datasets | Yes | Table 1 lists the characteristics of the benchmark multi-label data sets: cal500, audio, enron, image, scene, yeast, slashdot, corel5k, rcv1-s1, rcv1-s2, bibtex, corel16k-s1, corel16k-s2, and tmc2007.
Dataset Splits | No | "On each data set, 50% of the examples are randomly sampled without replacement to form the training set, and the remaining 50% are used to form the test set."
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used to run the experiments are provided in the paper.
Software Dependencies | No | No ancillary software details, such as programming languages, libraries, or solver names with version numbers, are provided for the experiment implementation.
Experiment Setup | Yes | "The number of neighbors K for ML2 is set to q + 1, because K must be larger than q to generate a q-dimensional space using K vectors. The parameters λ, C1, and C2 are set to 1, 1, and 10, respectively. The ensemble size for RAKEL is set to 2q with k = 3."
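The reported split protocol and hyperparameter settings can be sketched in a few lines. This is an illustrative reconstruction only: `split_half` and `ml2_hyperparams` are hypothetical helper names (the ML2 implementation itself is not public), and only the 50/50 sampling and the parameter values quoted above are taken from the paper.

```python
import numpy as np


def split_half(n_examples, seed=0):
    """Randomly sample 50% of examples without replacement for training;
    the remaining 50% form the test set (as described in the paper)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)  # shuffle all indices once
    half = n_examples // 2
    return idx[:half], idx[half:]


def ml2_hyperparams(q):
    """Hyperparameters reported for ML2, where q is the number of labels.
    K = q + 1 neighbors are needed to span a q-dimensional space."""
    return {
        "K": q + 1,       # number of neighbors
        "lambda": 1,      # λ
        "C1": 1,
        "C2": 10,
        # RAKEL baseline: ensemble size 2q with label-subset size k = 3
        "rakel_ensemble": 2 * q,
        "rakel_k": 3,
    }
```

For example, `yeast` has q = 14 labels, so this sketch would use K = 15 neighbors and a RAKEL ensemble of 28 classifiers.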