Exploiting High-Order Information in Heterogeneous Multi-Task Feature Learning
Authors: Yong Luo, Dacheng Tao, Yonggang Wen
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on two popular applications: text categorization and social image annotation. The results validate the superiority of the proposed THMTFL. In this section, we evaluate the effectiveness of the proposed THMTFL on both document categorization and image annotation. Prior to these evaluations, we present the used datasets, evaluation criteria, as well as our experimental settings. |
| Researcher Affiliation | Collaboration | School of Computer Science and Engineering, Nanyang Technological University, Singapore; UBTech Sydney AI Institute and SIT, FEIT, The University of Sydney, Australia |
| Pseudocode | Yes | Algorithm 1 The improved projected gradient method for solving Um. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | The dataset used in document categorization is the Reuters multilingual collection (RMLC) [Amini et al., 2009], which contains news articles written in five languages, and from six populous categories. In image annotation, we employ a challenging natural image dataset NUS-WIDE (NUS) [Chua et al., 2009]. |
| Dataset Splits | No | The paper describes training and test splits, and mentions using 'leave-one-out cross validation on the labeled set' for hyperparameter tuning, but does not specify a distinct 'validation' dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using Linear SVMs and refers to specific algorithms like PGM and ECOC, but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | The hyper-parameters {γm} are set to the same value, and we tune γm over the set {10^i \| i = −5, −4, . . . , 4}. The hyper-parameter P is empirically set as 10^{1.5 log C}. If unspecified, the hyper-parameters are determined using leave-one-out cross validation on the labeled set. |
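The pseudocode evidence refers to "Algorithm 1: The improved projected gradient method for solving Um." The paper's specific update is not reproduced here, but the general projected gradient template it instantiates can be sketched as follows; the objective, the projection (onto the nonnegative orthant), and the fixed step size are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def projected_gradient(grad, project, x0, step=0.1, iters=200):
    """Minimize a smooth function via the iteration x <- project(x - step * grad(x))."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Toy instance: min 0.5 * ||x - b||^2 subject to x >= 0.
# The gradient is x - b; projection clips negatives to zero.
b = np.array([1.0, -2.0, 3.0])
x_star = projected_gradient(grad=lambda x: x - b,
                            project=lambda x: np.maximum(x, 0.0),
                            x0=np.zeros(3))
# Converges to the positive part of b: [1, 0, 3].
```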
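The setup row describes tuning γm over a log-scale grid {10^i \| i = −5, . . . , 4} with leave-one-out cross validation on the labeled set. A minimal sketch of that search procedure, assuming scikit-learn and using a linear SVM as a stand-in base classifier (the dataset and estimator here are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import LinearSVC

# Small synthetic "labeled set" standing in for the real training data.
X, y = make_classification(n_samples=30, n_features=10, random_state=0)

# Log-scale grid: 10^-5, 10^-4, ..., 10^4 (the SVM's C plays the role of gamma_m).
grid = {"C": np.logspace(-5, 4, num=10)}

# Leave-one-out CV: each labeled example serves once as the held-out point.
search = GridSearchCV(LinearSVC(max_iter=10000), grid, cv=LeaveOneOut())
search.fit(X, y)
print(search.best_params_)
```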