Contrastive and View-Interaction Structure Learning for Multi-view Clustering
Authors: Jing Wang, Songhe Feng
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on six benchmarks illustrate the superiority of our method compared to other state-of-the-art approaches. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Big Data & Artificial Intelligence in Transportation (Beijing Jiaotong University), Ministry of Education; (2) School of Computer Science and Technology, Beijing Jiaotong University, Beijing 100044, China |
| Pseudocode | Yes | The whole learning process of SERIES is summarized in Algorithm 1. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Datasets. The following datasets are used for evaluation: (1) HW [Perkins and Theiler, 2003]... (2) Reuters [Amini et al., 2009]... (3) NoisyMNIST [Wang et al., 2015]... (4) VOC [Hwang and Grauman, 2010]... (5) Hdigit [Chen et al., 2022]... (6) Mfeat [Wang et al., 2019]... |
| Dataset Splits | No | The paper describes training and fine-tuning epochs but does not specify any explicit training/validation/test dataset splits, sample counts for each split, or cross-validation setup. |
| Hardware Specification | Yes | All experiments are conducted on a Linux platform utilizing an Intel(R) Core(TM) i9-11900 2.50GHz CPU, 64GB RAM, and a GeForce RTX 3090 Ti GPU. |
| Software Dependencies | No | The paper mentions software components such as 'ReLU' (activation function) and 'Adam' (optimizer), but does not provide version numbers for these or for any other software libraries or frameworks used. |
| Experiment Setup | Yes | The view-specific deep graph autoencoders are pre-trained for 200 epochs, and the entire model is fine-tuned for an additional 100 epochs. The dimensions of the encoders, decoders, and the cross dual relation generation layer are set to {dv, 512, 2048, 256}, {256, 2048, 512, dv} and {256, dv} respectively. The activation function is specified as ReLU. In our study, the trade-off hyperparameters λ1, λ2 are selected from the range {0.1, 0.2, ..., 0.9, 1.0}. |
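The Experiment Setup row translates directly into a small network definition. Below is a minimal PyTorch sketch of one view-specific autoencoder using the reported layer widths, pre-training length, and hyperparameter grid. It is an illustration under stated assumptions rather than the authors' implementation: the paper's model is a deep graph autoencoder with a cross dual relation generation layer, and the plain `nn.Linear` layers, the MSE reconstruction loss, the learning rate, and the `pretrain` helper here are assumptions made for brevity.

```python
import itertools

import torch
import torch.nn as nn


class ViewAutoencoder(nn.Module):
    """Sketch of one view-specific autoencoder with the reported layer
    widths. The paper uses deep *graph* autoencoders plus a cross dual
    relation generation layer; plain fully connected layers are an
    assumption made here for brevity."""

    def __init__(self, dv: int):
        super().__init__()
        # Encoder dimensions reported in the paper: {dv, 512, 2048, 256}.
        self.encoder = nn.Sequential(
            nn.Linear(dv, 512), nn.ReLU(),
            nn.Linear(512, 2048), nn.ReLU(),
            nn.Linear(2048, 256),
        )
        # Decoder dimensions reported in the paper: {256, 2048, 512, dv}.
        self.decoder = nn.Sequential(
            nn.Linear(256, 2048), nn.ReLU(),
            nn.Linear(2048, 512), nn.ReLU(),
            nn.Linear(512, dv),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)           # 256-d latent embedding
        return self.decoder(z), z     # reconstruction and embedding


def pretrain(model: nn.Module, loader, epochs: int = 200, lr: float = 1e-3):
    """Per-view pre-training for the reported 200 epochs. Adam is named in
    the paper; the learning rate and MSE reconstruction loss are assumptions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in loader:
            x_hat, _ = model(x)
            loss = loss_fn(x_hat, x)
            opt.zero_grad()
            loss.backward()
            opt.step()


# Trade-off hyperparameters lambda1, lambda2 are chosen from
# {0.1, 0.2, ..., 1.0}; a full grid over both would look like this.
lambda_grid = [round(0.1 * k, 1) for k in range(1, 11)]
for lam1, lam2 in itertools.product(lambda_grid, lambda_grid):
    pass  # fine-tune the full model for 100 epochs per setting (sketch only)
```

In practice one `ViewAutoencoder` would be instantiated per view with that view's input dimensionality `dv`, pre-trained independently, and then fine-tuned jointly; the joint fine-tuning objective and the cross dual relation generation layer are specific to the paper and are not reproduced here.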