Robust Contrastive Multi-view Clustering against Dual Noisy Correspondence
Authors: Ruiming Guo, Mouxing Yang, Yijie Lin, Xi Peng, Peng Hu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on five widely-used multi-view benchmarks, in comparison with eight competitive multi-view clustering methods, verify the effectiveness of our method in addressing the DNC problem. |
| Researcher Affiliation | Academia | Ruiming Guo¹, Mouxing Yang¹, Yijie Lin¹, Xi Peng¹·², Peng Hu¹ — ¹College of Computer Science, Sichuan University, China; ²State Key Laboratory of Hydraulics and Mountain River Engineering, Sichuan University, China |
| Pseudocode | No | The paper does not contain a pseudocode block or clearly labeled algorithm. |
| Open Source Code | Yes | The code is available at https://github.com/XLearning-SCU/2024-NeurIPS-CANDY. |
| Open Datasets | Yes | The experiments are carried out on the following five widely-used multi-view learning datasets. Scene-15 [42]... Caltech-101 [43]... Land Use-21 [45]... Reuters [47]... NUS-WIDE [49]. |
| Dataset Splits | No | The paper does not specify explicit train/validation/test dataset splits. It states: 'Since MvC requires training and clustering on the same dataset, we conduct the view realignment strategy on the learned representation by following the PVP studies [20, 21].' |
| Hardware Specification | Yes | All evaluations are conducted on Ubuntu 20.04 OS with NVIDIA 3090 GPUs. |
| Software Dependencies | Yes | In the experiment, CANDY is implemented with PyTorch 2.1.2. |
| Experiment Setup | Yes | The model is optimized with the Adam [41] optimizer with a learning rate of 0.002 across all experiments, with a batch size fixed to 1024. ... The scale parameter σ in Eq. 3 is fixed as 0.07 across all experiments. ... η is a denoising hyper-parameter fixed as 0.2 in our experiments. ... λ is fixed as 0.2 in our experiments. ... for the first 20 epochs of training. |
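To make the reported setup concrete, the sketch below collects the paper's stated hyper-parameters (Adam with learning rate 0.002, batch size 1024, σ = 0.07, η = 0.2, λ = 0.2, a first phase of 20 epochs, PyTorch 2.1.2) into a minimal training loop. It is an illustration, not the authors' code: `TwoViewEncoder`, `info_nce`, the feature dimensions, and the random stand-in data are all hypothetical; the actual CANDY implementation is at https://github.com/XLearning-SCU/2024-NeurIPS-CANDY.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Hyper-parameters quoted from the paper.
LR = 0.002          # Adam learning rate, fixed across all experiments
BATCH_SIZE = 1024   # batch size, fixed across all experiments
SIGMA = 0.07        # scale parameter sigma in Eq. 3
ETA = 0.2           # denoising hyper-parameter (unused in this sketch)
LAM = 0.2           # weight lambda (unused in this sketch)
EPOCHS_PHASE1 = 20  # "for the first 20 epochs of training"

# Hypothetical two-view encoder standing in for the real CANDY network.
class TwoViewEncoder(nn.Module):
    def __init__(self, dim_v1, dim_v2, dim_out=128):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(dim_v1, 256), nn.ReLU(), nn.Linear(256, dim_out))
        self.enc2 = nn.Sequential(nn.Linear(dim_v2, 256), nn.ReLU(), nn.Linear(256, dim_out))

    def forward(self, x1, x2):
        return F.normalize(self.enc1(x1), dim=1), F.normalize(self.enc2(x2), dim=1)

def info_nce(z1, z2, sigma=SIGMA):
    """Generic InfoNCE contrastive loss using the paper's scale parameter sigma.
    A standard formulation for illustration; not necessarily Eq. 3 of the paper."""
    logits = z1 @ z2.t() / sigma
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Random stand-in data; replace with one of the five benchmark datasets.
x1, x2 = torch.randn(4096, 20), torch.randn(4096, 59)
loader = DataLoader(TensorDataset(x1, x2), batch_size=BATCH_SIZE, shuffle=True)

model = TwoViewEncoder(dim_v1=20, dim_v2=59)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

# Run the quoted 20-epoch first phase; ETA and LAM enter the full CANDY
# objective, which is omitted here.
for epoch in range(EPOCHS_PHASE1):
    for v1, v2 in loader:
        z1, z2 = model(v1, v2)
        loss = info_nce(z1, z2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Note that σ plays the role of a contrastive temperature here; the denoising step governed by η and the λ-weighted term are deliberately left out, since their exact form is defined in the paper and released code rather than in the quoted setup.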