Latent Semantic Aware Multi-View Multi-Label Classification

Authors: Changqing Zhang, Ziwei Yu, Qinghua Hu, Pengfei Zhu, Xinwang Liu, Xiaobo Wang

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive empirical results on benchmark datasets demonstrate that the proposed method outperforms the state-of-the-art methods. (From the Experiments / Experiment Settings section: Datasets & features. In this section, we evaluate our LSA-MML and compare it with state-of-the-art methods on three benchmark multi-label datasets...)
Researcher Affiliation | Academia | School of Computer Science and Technology, Tianjin University, Tianjin, China, 300350; School of Computer, National University of Defense Technology, Changsha, China, 410073; Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 100190
Pseudocode | No | The paper describes an alternating optimization algorithm and gives update rules for each variable, but it does not present a formally structured pseudocode block or algorithm figure. (A generic skeleton of such a loop is sketched after this table.)
Open Source Code | No | The paper does not contain any statement about releasing open-source code or a link to a code repository.
Open Datasets | Yes | Corel5k (Duygulu et al. 2002), ESP Game (Von Ahn and Dabbish 2005), and PASCAL VOC 07 (Everingham 2006). We employ the standard partitions for training and testing sets as described in Table 1 (footnote: lear.inrialpes.fr/people/guillaumin/data.php).
Dataset Splits | Yes | We conduct parameter tuning on validation sets by following the same settings in (Luo et al. 2013; Liu et al. 2015). Specifically, each dataset is first partitioned into training and test sets; following (Luo et al. 2013; Liu et al. 2015), 20% of the samples are then randomly selected from the test set as a validation set for parameter tuning, and the rest are used to evaluate the classification performance of each algorithm. We employ the standard partitions for training and testing sets as described in Table 1.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or cloud computing specifications, used for the experiments.
Software Dependencies | No | The paper mentions features such as Dense SIFT and Dense Hue, but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We select the value of r from {2, 3, 4, 5} and the values of β and γ from {0.01, 0.1, 1, 10, 100}. We conduct parameter tuning on validation sets by following the same settings in (Luo et al. 2013; Liu et al. 2015). (See the grid-search sketch after this table.)
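
As the Pseudocode row notes, the paper gives update rules for an alternating optimization scheme without a formal algorithm block. Purely for illustration, below is a minimal, generic alternating-minimization skeleton for multi-view factorization with per-view bases and a shared latent representation. The ridge-style update rules here are placeholder assumptions for the sake of a runnable sketch; they are not the paper's actual update rules.

```python
import numpy as np

def update_basis(X, V, beta):
    """Ridge-regularized least-squares update for one view's basis.
    Placeholder assumption, not the paper's update rule."""
    r = V.shape[0]
    return X @ V.T @ np.linalg.inv(V @ V.T + beta * np.eye(r))

def update_latent(X_views, Us, gamma):
    """Ridge-regularized least-squares update for the shared latent code.
    Placeholder assumption, not the paper's update rule."""
    r = Us[0].shape[1]
    A = sum(U.T @ U for U in Us) + gamma * np.eye(r)
    B = sum(U.T @ X for U, X in zip(Us, X_views))
    return np.linalg.solve(A, B)

def alternating_minimization(X_views, r, beta, gamma, max_iter=100, tol=1e-6):
    """Generic alternating-minimization loop (illustrative skeleton only).

    X_views : list of (d_v x n) view matrices sharing n samples.
    r       : latent dimensionality (the paper tunes r over {2, 3, 4, 5}).
    beta, gamma : trade-off weights (tuned over {0.01, 0.1, 1, 10, 100}).
    """
    n = X_views[0].shape[1]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((r, n))       # shared latent representation (r x n)
    prev = np.inf
    for _ in range(max_iter):
        Us = [update_basis(X, V, beta) for X in X_views]  # fix V, update each U_v
        V = update_latent(X_views, Us, gamma)             # fix all U_v, update V
        obj = sum(np.linalg.norm(X - U @ V, "fro") ** 2
                  for X, U in zip(X_views, Us))
        if abs(prev - obj) < tol:         # stop once the objective stabilizes
            break
        prev = obj
    return Us, V
```

A call such as `Us, V = alternating_minimization([X1, X2], r=4, beta=1.0, gamma=1.0)` would return per-view bases and the shared code; the actual LSA-MML objective couples the views differently than this plain reconstruction loss.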
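
The Dataset Splits and Experiment Setup rows together describe the tuning protocol: 20% of the standard test partition is randomly held out for validation, and r, β, and γ are selected over small grids. A minimal sketch of that protocol is given below; `fit_and_score` is a hypothetical stand-in for training the model and computing a validation metric, not a function from the paper.

```python
import itertools
import numpy as np

def tune_hyperparameters(test_indices, fit_and_score, seed=0):
    """Grid search following the reported protocol (illustrative sketch).

    test_indices  : indices of the standard test partition.
    fit_and_score : hypothetical callable (val_idx, r, beta, gamma) -> score,
                    standing in for training and validation scoring.
    """
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(test_indices)
    n_val = int(round(0.2 * len(shuffled)))   # 20% of the test set for validation
    val_idx, eval_idx = shuffled[:n_val], shuffled[n_val:]

    r_grid = [2, 3, 4, 5]                     # grid for r, as reported
    bg_grid = [0.01, 0.1, 1, 10, 100]         # grid for beta and gamma, as reported

    best_params, best_score = None, -np.inf
    for r, beta, gamma in itertools.product(r_grid, bg_grid, bg_grid):
        score = fit_and_score(val_idx, r, beta, gamma)
        if score > best_score:
            best_params, best_score = (r, beta, gamma), score
    return best_params, eval_idx              # tuned params + held-out evaluation set
```

With a real `fit_and_score`, the returned `eval_idx` (the remaining 80% of the test partition) would then be used to compute the final reported metrics.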