Decoupled Contrastive Multi-View Clustering with High-Order Random Walks
Authors: Yiding Lu, Yijie Lin, Mouxing Yang, Dezhong Peng, Peng Hu, Xi Peng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To verify the efficacy of DIVIDE, we carry out extensive experiments on four benchmark datasets comparing with nine state-of-the-art MvC methods in both complete and incomplete MvC settings. |
| Researcher Affiliation | Academia | College of Computer Science, Sichuan University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is released on https://github.com/XLearning-SCU/2024-AAAI-DIVIDE. |
| Open Datasets | Yes | Scene-15 (Li and Perona 2005) contains 4,485 images of 15 scene categories. Caltech-101 (Li et al. 2015) consists of 8,677 images of objects from 101 classes. Reuters (Amini, Usunier, and Goutte 2009) is a multilingual news dataset with 18,758 samples from various languages. LandUse-21 (Yang and Newsam 2010) contains 2,100 satellite images from 21 classes. |
| Dataset Splits | Yes | Specifically, we randomly select m = η n samples and remove one view from each to simulate incomplete data, where η is the missing rate and n is the total number of samples. |
| Hardware Specification | Yes | We implement our method in PyTorch 1.13.0 and run all experiments on NVIDIA 3090 GPUs in Ubuntu 20.04 OS. |
| Software Dependencies | Yes | We implement our method in PyTorch 1.13.0 and run all experiments on NVIDIA 3090 GPUs in Ubuntu 20.04 OS. |
| Experiment Setup | Yes | We train our model for 200 epochs using the Adam optimizer without weight decay. The initial learning rate is set to 2×10⁻³, and the batch size is fixed to 1024. To warm up, we set the target T in Eq. (1) as an identity matrix In for the first 100 epochs and then adopt the rectified target Eq. (6) in the remaining epochs. For the other hyper-parameters, we fix the contrastive temperature τ = 0.5, the temperature of the kernel function σ = 0.1, the walking step t = 5, and the rectified weight α = 0.5 throughout experiments. |
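The incomplete-data protocol quoted above (randomly select m = η·n samples and remove one view from each) can be sketched as follows. This is a minimal illustration of the masking procedure, not the authors' implementation; the function name, the view-mask representation, and the bundled hyper-parameter dictionary are assumptions, with the hyper-parameter values taken from the table.

```python
import numpy as np

def simulate_incomplete_views(n, num_views, eta, seed=None):
    """Sketch of the paper's incomplete-MvC protocol: randomly pick
    m = eta * n samples and drop exactly one view from each.
    Returns a boolean (n, num_views) mask where True = view present."""
    rng = np.random.default_rng(seed)
    mask = np.ones((n, num_views), dtype=bool)
    m = int(eta * n)  # eta is the missing rate
    missing = rng.choice(n, size=m, replace=False)
    for i in missing:
        v = rng.integers(num_views)  # remove one randomly chosen view
        mask[i, v] = False
    return mask

# Hyper-parameters reported in the table (dictionary keys are illustrative).
CONFIG = dict(
    epochs=200, warmup_epochs=100,  # identity target for first 100 epochs
    lr=2e-3, batch_size=1024,
    tau=0.5,    # contrastive temperature
    sigma=0.1,  # kernel temperature
    t=5,        # random-walk steps
    alpha=0.5,  # rectified-target weight
)
```

For example, with n = 2,100 (LandUse-21) and η = 0.5, the mask would flag 1,050 samples as each missing one of their views.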