Common-Individual Semantic Fusion for Multi-View Multi-Label Learning

Authors: Gengyu Lyu, Weiqi Kang, Haobo Wang, Zheng Li, Zhen Yang, Songhe Feng

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on various data sets have verified the superiority of our method." (Abstract) and "To evaluate the performance of our proposed CISF method, we implement experiments on seven widely-used MVML data sets, including Emotions, Scene, Corel5k, Espgame, Pascal, Iaprtc12 and Mirflickr data sets." (Section 4.1)
Researcher Affiliation | Academia | Faculty of Information Technology, Beijing University of Technology; School of Software Technology, Zhejiang University; School of Computer Science and Technology, Beijing Jiaotong University
Pseudocode | Yes | "Algorithm 1 The Training Process of CISF"
Open Source Code | Yes | "The codes and data sets are provided in https://gengyulyu.github.io/homepage/." (Section 4.1)
Open Datasets | Yes | "To evaluate the performance of our proposed CISF method, we implement experiments on seven widely-used MVML data sets, including Emotions, Scene, Corel5k, Espgame, Pascal, Iaprtc12 and Mirflickr data sets." (Section 4.1) and "The codes and data sets are provided in https://gengyulyu.github.io/homepage/." (Section 4.1)
Dataset Splits | Yes | "For each dataset, we randomly select 70% examples for training, 10% examples for parameter tuning and 20% examples for evaluation, where each algorithm is run 5 times independently." (Section 4.1)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments are provided in the paper.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper.
Experiment Setup | Yes | "The parameter analysis of CISF with respect to its four employed parameters α, β, γ and η. ... we select the optimal values of them from {10^-3, 10^-2, ..., 10^2} and {0.01, 0.05, ..., 10}, respectively. Meanwhile, other parameters often follow the optimal configurations β = 0.1 and η = 100 but vary with minor adjustments on different data sets. In addition, in our experiments, the value of λ_max is set to 1e6 and the maximum iterations I_max is set to 50." (Section 5.2)
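The reported protocol (random 70/10/20 train/validation/test split, 5 independent runs, and a grid search over α from {10^-3, ..., 10^2} and γ from {0.01, 0.05, ..., 10}) can be sketched as below. This is a minimal illustration, not the authors' code: the `train_eval` callback stands in for actually training CISF, and the expansion of the γ grid "{0.01, 0.05, ..., 10}" is an assumption about the pattern.

```python
import itertools
import random

def split_indices(n, seed):
    """Random 70/10/20 train/validation/test split, as reported in Section 4.1."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Grids reported in Section 5.2: alpha from {10^-3, ..., 10^2};
# gamma from "{0.01, 0.05, ..., 10}" (the 1-5 alternation below is an
# assumed expansion of that ellipsis). beta = 0.1 and eta = 100 are the
# commonly optimal defaults stated in the paper.
ALPHAS = [10.0 ** p for p in range(-3, 3)]
GAMMAS = [0.01, 0.05, 0.1, 0.5, 1, 5, 10]

def tune(train_eval):
    """Pick (alpha, gamma) maximizing validation score.

    `train_eval(alpha, gamma) -> score` is a hypothetical stand-in for
    training CISF and scoring it on the 10% tuning split.
    """
    return max(itertools.product(ALPHAS, GAMMAS),
               key=lambda p: train_eval(*p))

def run_protocol(n_examples, train_eval, n_runs=5):
    """5 independent runs, each with a fresh random split and tuning pass."""
    results = []
    for seed in range(n_runs):
        tr, va, te = split_indices(n_examples, seed)
        alpha, gamma = tune(train_eval)
        results.append((seed, alpha, gamma, len(tr), len(va), len(te)))
    return results
```

With a dummy scorer such as `lambda a, g: -abs(a - 1) - abs(g - 1)`, `tune` selects the grid point nearest (1, 1), and `run_protocol(1000, ...)` yields five 700/100/200 splits.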