Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Tensorized Multi-View Multi-Label Classification via Laplace Tensor Rank
Authors: Qiyu Zhong, Yi Shan, Haobo Wang, Zhen Yang, Gengyu Lyu
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate the effectiveness of TMvML, we conducted in-depth experimental analysis on six widely-used MVML datasets, including Emotions, Yeast, Corel5k, Plant, Human and Espgame... To ensure reliable comparisons, we run each algorithm five times and record the average metric results and standard deviation in Table 2, with the best performances highlighted in bold. |
| Researcher Affiliation | Collaboration | 1College of Computer Science, Beijing University of Technology, China 2Idealism Beijing Technology Co., Ltd., Beijing, China 3School of Software Technology, Zhejiang University, Ningbo, China. Correspondence to: Gengyu Lyu <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 The Training Process of TMvML |
| Open Source Code | No | The paper does not contain any explicit statement about providing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | To validate the effectiveness of TMvML, we conducted in-depth experimental analysis on six widely-used MVML datasets, including Emotions, Yeast, Corel5k, Plant, Human and Espgame, which can be downloaded from Mulan website: http://mulan.sourceforge.net/datasets-mlc.html. |
| Dataset Splits | Yes | For each dataset, we randomly selected 70% data for training, 10% data for validation and 20% data for testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU models, memory). |
| Software Dependencies | No | The paper does not mention any specific software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow, or other libraries with versions). |
| Experiment Setup | No | The paper discusses parameter sensitivity for α and β, providing ranges of values tested ({10^-1, 10^0, ..., 10^5} for α and {10^-5, 10^-4, ..., 10^0} for β), but it does not explicitly state the specific hyperparameter values used for the main experimental results presented in Table 2, nor does it detail other training configurations like learning rate, optimizer, or number of epochs. |
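The 70/10/20 split protocol quoted above can be sketched as follows. This is an illustrative sketch only, not the authors' code: the function name `split_indices` and the use of NumPy are assumptions, and the random seed is arbitrary.

```python
import numpy as np

def split_indices(n_samples, train=0.7, val=0.1, test=0.2, seed=0):
    """Randomly partition sample indices into train/val/test subsets,
    mirroring the 70%/10%/20% protocol reported for each dataset."""
    assert abs(train + val + test - 1.0) < 1e-9
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)  # random shuffle of all indices
    n_train = int(round(train * n_samples))
    n_val = int(round(val * n_samples))
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# Example: 1000 samples -> 700 train, 100 validation, 200 test
train_idx, val_idx, test_idx = split_indices(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 700 100 200
```

Re-running with different seeds (and averaging five runs, as the paper does) is what produces the reported mean ± standard deviation.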