Incomplete Contrastive Multi-View Clustering with High-Confidence Guiding
Authors: Guoqing Chao, Yi Jiang, Dianhui Chu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments comparing with state-of-the-art approaches demonstrated the effectiveness and superiority of our method. |
| Researcher Affiliation | Academia | Guoqing Chao, Yi Jiang, Dianhui Chu; Harbin Institute of Technology, 2 West Culture Road, Weihai, Shandong 264209, China; guoqingchao10@gmail.com, jiangyijcx@163.com, chudh@hit.edu.cn |
| Pseudocode | Yes | Algorithm 1 Optimization of the proposed ICMVC |
| Open Source Code | Yes | Our code is publicly available at https://github.com/liunian-Jay/ICMVC. |
| Open Datasets | Yes | We used four commonly-used datasets in our experiments to evaluate our model. Scene-15: It consists of 4,485 images distributed in 15 scene categories, with GIST and LBP features as two views. Land Use-21: It consists of 2,100 satellite images from 21 categories with two views: PHOG and LBP. MSRC-V1: It is an image dataset consisting of 210 images in seven categories, including trees, buildings, airplanes, cows, faces, cars, and bicycles, with GIST and HOG features as two views. Noisy MNIST: The original images are used as view 1, and the sampled intra-class images with Gaussian white noise are used as view 2; we use its subset containing 10k samples in the experiments. |
| Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits. It mentions varying missing rates for evaluation but not how the datasets themselves were partitioned for training or validation purposes. |
| Hardware Specification | Yes | We implement ICMVC in PyTorch 1.12.1 and conduct all the experiments on Ubuntu 20.04 with an NVIDIA 2080Ti GPU. |
| Software Dependencies | Yes | We implement ICMVC in PyTorch 1.12.1 and conduct all the experiments on Ubuntu 20.04 with an NVIDIA 2080Ti GPU. |
| Experiment Setup | Yes | The Adam optimizer is adopted, the learning rate is set to 0.001, and the hyper-parameter K is set to 10. The instance-level temperature parameter τI is fixed at 1.0, and the cluster-level parameter τC is fixed at 0.5. We observe that the network fully converges after 500 epochs of training, so training is terminated at 500 epochs. |
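To illustrate how the temperature parameters τI and τC in the setup row enter a contrastive objective, below is a minimal, dependency-free sketch of a temperature-scaled (NT-Xent-style) contrastive loss. This is not the authors' ICMVC implementation (their code is at the repository linked above); the function names and the toy vectors are assumptions for illustration only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=1.0):
    """NT-Xent-style loss: negative log-softmax of the positive pair's
    similarity against all candidates, with similarities scaled by the
    temperature tau (tau=1.0 instance-level, tau=0.5 cluster-level in
    the paper's setup)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy example: the positive view is close to the anchor, the negative is not.
anchor, positive, negative = [1.0, 0.0], [1.0, 0.1], [0.0, 1.0]
loss_instance = contrastive_loss(anchor, positive, [negative], tau=1.0)
loss_cluster = contrastive_loss(anchor, positive, [negative], tau=0.5)
```

A lower temperature sharpens the softmax, so when the positive pair is already more similar than the negatives, the tau=0.5 loss is smaller than the tau=1.0 loss; this is one common rationale for using a lower temperature at the cluster level.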