Partially View-aligned Clustering
Authors: Zhenyu Huang, Peng Hu, Joey Tianyi Zhou, Jiancheng Lv, Xi Peng
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show promising results of our method in clustering partially view-aligned data. |
| Researcher Affiliation | Academia | Zhenyu Huang, College of Computer Science, Sichuan University, China ... Peng Hu, I2R, A*STAR, Singapore ... Joey Tianyi Zhou, IHPC, A*STAR, Singapore ... Jiancheng Lv, College of Computer Science, Sichuan University, China ... Xi Peng, College of Computer Science, Sichuan University, China |
| Pseudocode | Yes | Algorithm 1: Optimization of P (a hedged sketch of the alignment step appears below the table) |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | We carry out our experiments on four popular multi-view datasets including: Caltech101-20 [15, 25], which consists of 2,386 images of 20 subjects with two handcrafted features as two views. Reuters [8], which is a subset of the Reuters database. It consists of 3,000 samples from 6 classes, using German and Spanish as two views. Scene-15 [5], which consists of 4,485 images distributed over 15 scene categories with two views. Pascal Sentences [7], which is selected from the 2008 PASCAL development kit. It contains 1,000 images of 20 classes with corresponding text descriptions. |
| Dataset Splits | Yes | For Caltech101-20, Reuters and Scene-15, we randomly split them into two partitions ({A^{(v)}, U^{(v)}}_{v=1}^{m}) of equal size. The partition {A^{(v)}}_v retains the known correspondences and the partition {U^{(v)}}_v is randomly permuted. For Pascal Sentences, we directly use the training set as {A^{(v)}}_v and shuffle the testing set as {U^{(v)}}_v. (A minimal sketch of this split protocol appears below the table.) |
| Hardware Specification | Yes | We implement PVC in PyTorch and carry out all evaluations on a standard Ubuntu 18.04 OS with an NVIDIA 2080Ti GPU. |
| Software Dependencies | No | The paper mentions 'We implement PVC in PyTorch' but does not provide specific version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | For the CCA-based methods, we fix the hidden representation dimension to 10. For BMVC, we fix the length of binary code to 128. For LMSC, we fix the latent representation dimension to 100 and seek the optimal λ from (0.01, 0.1, 1, 10). For MvC-DMF, we seek the optimal β and γ from (0.1, 1, 10, 100) as suggested. ... Moreover, we experimentally set τ1 = 30 and τ2 = 10 to speed up the computation. (These settings are collected in the hedged configuration sketch below the table.) |
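
The "Pseudocode" row points to Algorithm 1, which optimizes the alignment matrix P for the unaligned partition. Since no source code was released, the paper's own procedure is not reproduced here; the sketch below only illustrates the classical assignment step that such an alignment reduces to, using SciPy's Hungarian solver on cross-view latent representations. The function and variable names (`align_views`, `z1`, `z2`) are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch: illustrates only the assignment step behind an alignment matrix P,
# using the classical Hungarian algorithm. The paper's Algorithm 1 is its own
# optimization of P; treat this wiring and all names as assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_views(z1, z2):
    """Return a permutation matrix P re-aligning view-2 samples to view-1.

    z1, z2 : (n, d) latent representations of the unaligned partition U,
             e.g. produced by two view-specific encoders (hypothetical setup).
    """
    # Pairwise Euclidean distances between cross-view representations.
    cost = np.linalg.norm(z1[:, None, :] - z2[None, :, :], axis=-1)  # (n, n)
    row_ind, col_ind = linear_sum_assignment(cost)  # minimum-cost matching
    P = np.zeros_like(cost)
    P[row_ind, col_ind] = 1.0  # one-hot permutation matrix
    return P

# Toy check: if view 2 is a shuffled copy of view 1, P recovers the matching.
z1 = np.random.randn(8, 10)
z2 = np.random.permutation(z1)   # shuffled rows stand in for the unaligned view
P = align_views(z1, z2)
z2_aligned = P @ z2              # row i of z2_aligned now corresponds to z1[i]
```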
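The "Dataset Splits" row describes dividing each dataset into an aligned partition {A^{(v)}}_v, which keeps the cross-view correspondence, and an unaligned partition {U^{(v)}}_v, in which one view is randomly shuffled. A minimal sketch of that protocol for two views, with hypothetical names (`partially_align`, `x_view1`, `x_view2`), might look like this:

```python
# Hedged sketch of the split protocol quoted in the "Dataset Splits" row:
# half of the samples keep their cross-view correspondence (partition A), the
# other half has the second view randomly shuffled (partition U). Names are
# illustrative only; no official code was released.
import numpy as np

def partially_align(x_view1, x_view2, aligned_ratio=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n = x_view1.shape[0]
    idx = rng.permutation(n)
    n_aligned = int(aligned_ratio * n)
    a_idx, u_idx = idx[:n_aligned], idx[n_aligned:]

    # Partition A: correspondence between the two views is preserved.
    a1, a2 = x_view1[a_idx], x_view2[a_idx]

    # Partition U: the second view is shuffled, destroying the correspondence.
    u1 = x_view1[u_idx]
    u2 = x_view2[u_idx][rng.permutation(len(u_idx))]
    return (a1, a2), (u1, u2)

# Toy usage with random features standing in for the two views of one dataset.
v1 = np.random.randn(100, 64)
v2 = np.random.randn(100, 32)
(aligned_v1, aligned_v2), (unaligned_v1, unaligned_v2) = partially_align(v1, v2)
```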
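Finally, the baseline and PVC settings quoted in the "Experiment Setup" row can be gathered into a single configuration. The dictionary below is only an illustrative summary of those quoted values; its key names are assumptions, not identifiers from the paper.

```python
# Hedged summary of the hyper-parameters quoted in the "Experiment Setup" row.
baseline_config = {
    "cca_hidden_dim": 10,              # CCA-based methods: hidden dimension
    "bmvc_code_length": 128,           # BMVC: binary code length
    "lmsc_latent_dim": 100,            # LMSC: latent representation dimension
    "lmsc_lambda_grid": [0.01, 0.1, 1, 10],     # LMSC: λ searched over this grid
    "mvcdmf_beta_grid": [0.1, 1, 10, 100],      # MvC-DMF: β grid
    "mvcdmf_gamma_grid": [0.1, 1, 10, 100],     # MvC-DMF: γ grid
    "pvc_tau1": 30,                    # τ1, set experimentally to speed up computation
    "pvc_tau2": 10,                    # τ2, set experimentally to speed up computation
}
```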