Multi-view Clustering via Late Fusion Alignment Maximization
Authors: Siwei Wang, Xinwang Liu, En Zhu, Chang Tang, Jiyuan Liu, Jingtao Hu, Jingyuan Xia, Jianping Yin
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five multiview benchmark datasets demonstrate the effectiveness and efficiency of the proposed MVC-LFA. |
| Researcher Affiliation | Academia | (1) College of Computer, National University of Defense Technology, Changsha, China; (2) School of Computer Science, China University of Geosciences, Wuhan, China; (3) Department of Electrical and Electronic Engineering, Imperial College London; (4) School of Cyberspace Science, Dongguan University of Technology, Guangdong 523808, China |
| Pseudocode | Yes | Algorithm 1 Multi-view Clustering via Late Fusion Alignment Maximization |
| Open Source Code | No | The paper does not include any explicit statement about releasing its source code or provide a link to a code repository for the methodology described. |
| Open Datasets | Yes | The datasets used in our experiments are Oxford Flower17 and Flower102 [1], Protein fold prediction [2], Columbia Consumer Video (CCV) [3], and Caltech 101 [4]. [1] http://www.robots.ox.ac.uk/~vgg/data/flowers/ [2] http://mkl.ucsd.edu/dataset/protein-fold-prediction [3] http://www.ee.columbia.edu/ln/dvmm/CCV/ [4] http://www.vision.caltech.edu/Image_Datasets/Caltech101 |
| Dataset Splits | No | The paper names the datasets used but does not specify the exact train/validation/test splits, percentages, or sample counts needed to reproduce the data partitioning. It only states, "For all data sets, it is assumed that the true number of clusters is known and set as the true number of classes." |
| Hardware Specification | Yes | All the experiments are performed on a desktop with Intel core i7-5820k CPU and 16G RAM. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or solvers used in the experiments. It only generally refers to algorithms like "kernel k-means". |
| Experiment Setup | Yes | For the proposed algorithm, the trade-off parameter λ is chosen from {2^-15, 2^-14, ..., 2^15} by grid search. For all algorithms, we repeat each experiment 50 times with random initialization to reduce the effect of randomness caused by k-means, and report the best result. |
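The reported setup (grid search over λ ∈ {2^-15, ..., 2^15}, 50 random restarts per value, best score kept) can be sketched as follows. This is a hedged reconstruction of the protocol, not the authors' code: `evaluate_clustering` is a hypothetical stand-in for running MVC-LFA plus k-means and scoring the partition (e.g. by ACC or NMI).

```python
import numpy as np

def evaluate_clustering(lam, seed):
    # Hypothetical placeholder: a real implementation would run MVC-LFA
    # with trade-off parameter `lam`, cluster with k-means initialized
    # from `seed`, and return a clustering quality score in [0, 1].
    rng = np.random.default_rng(seed)
    return float(rng.random())

# Grid of 31 candidate values: 2^-15, 2^-14, ..., 2^15.
lambda_grid = [2.0 ** p for p in range(-15, 16)]

best = {"lambda": None, "score": -np.inf}
for lam in lambda_grid:
    # 50 restarts to damp k-means initialization randomness;
    # per the paper's protocol, the best result is reported.
    score = max(evaluate_clustering(lam, seed) for seed in range(50))
    if score > best["score"]:
        best = {"lambda": lam, "score": score}

print(best["lambda"], round(best["score"], 3))
```

Reporting the best of 50 restarts (rather than the mean) is what the paper states; a reproduction aiming at variance estimates would also log the mean and standard deviation across restarts.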