Teaching-to-Learn and Learning-to-Teach for Multi-label Propagation

Authors: Chen Gong, Dacheng Tao, Jie Yang, Wei Liu

AAAI 2016

Reproducibility Variable Result LLM Response
Research Type | Experimental | Thorough empirical studies show that, due to the optimized propagation sequence designed by the teachers, ML-TLLT yields generally better performance than seven state-of-the-art methods on typical multi-label benchmark datasets. This section first validates several critical steps in the proposed ML-TLLT, and then compares ML-TLLT with seven state-of-the-art methods on five benchmark datasets.
Researcher Affiliation | Collaboration | Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University; Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney; Didi Research, Beijing, China
Pseudocode | Yes | Algorithm 1 (the curvilinear search for minimizing (7)) and Algorithm 2 (PALM for solving the S(r)-subproblem (6))
Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | All the adopted datasets come from the MULAN repository (http://mulan.sourceforge.net/datasets-mlc.html).
Dataset Splits | Yes | The reported results of various algorithms on all the datasets are produced by 5-fold cross-validation.
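The 5-fold protocol reported above can be sketched as follows. This is a minimal, hedged illustration of generic 5-fold index splitting (the paper does not release code, so the function name and shuffling seed here are assumptions, not the authors' implementation):

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Yield (train, test) index arrays for 5 disjoint folds.

    Hypothetical helper: each sample lands in exactly one test fold,
    and the remaining four folds form the training set.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle sample indices
    folds = np.array_split(idx, 5)            # 5 near-equal partitions
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

# With 100 samples, each fold holds 20 test and 80 training indices.
splits = list(five_fold_splits(100))
```

Per-fold scores would then be averaged to produce the reported results.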
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | In ML-TLLT, the trade-off parameters β0 and β1 are set to 1 for all the experiments. As suggested by (Chen et al. 2008), we set u = 1, v = 0.15 in SMSE-HF, and β = γ = 1 in SMSE-LGC. The weighting parameter C in MLSVM (Linear) and MLSVM (RBF) is tuned to 1.
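For reference, the reported hyperparameter settings can be collected in one place. This is only a hedged sketch: the dictionary keys are illustrative ASCII names for the paper's symbols (β0, β1, u, v, β, γ, C), not identifiers from any released code.

```python
# Hyperparameter values as reported in the paper's experiment setup;
# key spellings are assumptions made for this sketch.
settings = {
    "ML-TLLT":        {"beta0": 1, "beta1": 1},   # trade-off parameters
    "SMSE-HF":        {"u": 1, "v": 0.15},        # per (Chen et al. 2008)
    "SMSE-LGC":       {"beta": 1, "gamma": 1},    # per (Chen et al. 2008)
    "MLSVM (Linear)": {"C": 1},                   # weighting parameter
    "MLSVM (RBF)":    {"C": 1},
}
```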