Multi-View Multi-Label Learning with View-Specific Information Extraction

Authors: Xuan Wu, Qing-Guo Chen, Yao Hu, Dengbao Wang, Xiaodong Chang, Xiaobo Wang, Min-Ling Zhang

IJCAI 2019

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Extensive experiments on real-world data sets clearly show the favorable performance of SIMM against other state-of-the-art multi-view multi-label learning approaches."

Researcher Affiliation | Collaboration | School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; Alibaba Group, Hangzhou, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; College of Computer and Information Science, Southwest University, Chongqing 400715, China

Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Figure 1 shows a general flowchart, but it is not pseudocode.

Open Source Code | No | The paper does not state that source code for the described methodology is provided, does not link to a code repository, and does not mention code in supplementary materials.

Open Datasets | Yes | "A total of eight multi-view multi-label data sets are employed for performance evaluation including six benchmark data sets and two real-world video annotation data sets." The benchmark data sets are publicly available at http://mulan.sourceforge.net and http://lear.inrialpes.fr/people/guillaumin/data.php.

Dataset Splits | Yes | "For each data set, ten-fold cross-validation is performed where the mean metrics results and standard deviations are recorded for all comparing approaches." (A cross-validation sketch follows the table.)

Hardware Specification | No | The paper does not provide any details about the hardware used to run its experiments, such as CPU/GPU models or memory specifications.

Software Dependencies | No | The paper mentions using the Adam optimization algorithm, but does not specify any software libraries or dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) that would be needed for replication.

Experiment Setup | Yes | "For SIMM, in order to make the model more elegant and lightweight, we set each module to be only a fully connected layer without hidden layer. l is fixed to 64. In light of comparison to COMMON, α is fixed to 1. β is selected from {0.1, 0.01, 0.001, 0.0001}." (A hedged configuration sketch follows the table.)
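
The "Dataset Splits" row quotes a ten-fold cross-validation protocol with mean and standard deviation reporting. Below is a minimal sketch of that protocol, assuming NumPy and scikit-learn; `train_and_score` is a hypothetical placeholder, not a function from the paper, standing in for fitting the model on one fold and returning a metric value.

import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, Y, train_and_score, n_splits=10, seed=0):
    """Return the mean and standard deviation of a metric over ten folds,
    mirroring the evaluation protocol quoted in the table above."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(X):
        # Fit on nine folds, score on the held-out fold (hypothetical helper).
        score = train_and_score(X[train_idx], Y[train_idx],
                                X[test_idx], Y[test_idx])
        scores.append(score)
    scores = np.asarray(scores)
    return scores.mean(), scores.std()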
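
The "Experiment Setup" row fixes each module to a single fully connected layer with no hidden layer, sets the embedding size l to 64 and α to 1, selects β from a four-value grid, and the "Software Dependencies" row notes that training uses Adam. The sketch below shows one way those quoted settings could be instantiated. It is not the authors' implementation: the framework (PyTorch), the learning rate, the module roles, and the example input dimensions are all assumptions, since no code is released.

import torch
import torch.nn as nn

L_DIM = 64                              # l fixed to 64, per the paper
ALPHA = 1.0                             # α fixed to 1, per the paper
BETA_GRID = [0.1, 0.01, 0.001, 0.0001]  # β selected from this grid

view_dims = [500, 1000]   # hypothetical per-view input dimensions
num_labels = 20           # hypothetical number of labels

# One fully connected extractor per view plus a shared extractor over the
# concatenated views; no hidden layers, matching the quoted setup. How the
# shared and view-specific outputs combine in the loss is left abstract here.
view_extractors = nn.ModuleList(nn.Linear(d, L_DIM) for d in view_dims)
shared_extractor = nn.Linear(sum(view_dims), L_DIM)
classifier = nn.Linear(L_DIM, num_labels)

params = [*view_extractors.parameters(),
          *shared_extractor.parameters(),
          *classifier.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-3)  # lr is an assumption

In a full replication, ALPHA and each candidate from BETA_GRID would weight the auxiliary loss terms of the objective, with β chosen by validation performance over the quoted grid.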