Deep Safe Incomplete Multi-view Clustering: Theorem and Algorithm
Authors: Huayi Tang, Yong Liu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The abstract states that 'comprehensive experiments demonstrate that the proposed method achieves superior performance and efficient safe incomplete multi-view clustering', and Section 4 ('Experiments') includes subsections '4.1 Experimental Setup' and '4.2 Experimental Results' with tables and figures reporting performance metrics and comparisons. |
| Researcher Affiliation | Academia | 1Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China 2Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China. Correspondence to: Yong Liu <liuyonggsai@ruc.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Deep Safe Incomplete Multi-view Clustering |
| Open Source Code | No | The paper states, 'We report the results of baseline methods obtained by running the open-source code with default settings', but it provides no statement or link releasing the code of the proposed DSIMVC method itself. |
| Open Datasets | Yes | BDGP (Cai et al., 2012) is a drosophila embryos image dataset... MNIST-USPS (Peng et al., 2019) contains 5,000 samples... Columbia Consumer Video (CCV) (Jiang et al., 2011) is composed of 6,773 samples... Multi-Fashion is a two-view dataset constructed from Fashion-MNIST (Xiao et al., 2017)... |
| Dataset Splits | No | The paper does not describe explicit training/validation/test splits. It distinguishes 'complete data' and 'incomplete data' used for training the clustering model and evaluating performance, but gives no split percentages or counts that would aid reproducibility beyond clustering on the full datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud resource specifications used for running the experiments. |
| Software Dependencies | No | The implementation is based on the PyTorch platform (Paszke et al., 2019), and the Faiss library (Johnson et al., 2019) is used to search for nearest neighbors in the learned feature space; however, no version numbers for these dependencies are given. |
| Experiment Setup | Yes | The learning rates ηw and ηϕ are set to 0.0003 and 0.0004, respectively. The batch size is 256 for all datasets. The trade-off parameter γ and the number of neighbors k are empirically set to 0.5 and 3, respectively. |
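To make the nearest-neighbor step concrete: the paper reports using the Faiss library to find each sample's k = 3 nearest neighbors in the learned feature space. The sketch below is a minimal NumPy equivalent of that search (exact L2 distances, as `faiss.IndexFlatL2` computes); the feature matrix, its dimensions, and the batch size of 256 are illustrative assumptions, not values taken from the paper's released code.

```python
import numpy as np

# Hypothetical stand-in for the learned representations; the paper's
# implementation performs this search with the Faiss library instead.
rng = np.random.default_rng(0)
features = rng.standard_normal((256, 32)).astype(np.float32)  # batch 256, dim 32 (assumed)

K = 3  # number of neighbors, as reported in the experiment setup

# Pairwise squared Euclidean distances (what faiss.IndexFlatL2 computes exactly).
sq_dists = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(sq_dists, np.inf)  # exclude each sample from its own neighbor set

# Indices of the K nearest neighbors for every sample, shape (256, 3).
neighbors = np.argsort(sq_dists, axis=1)[:, :K]
```

In the actual implementation one would build a `faiss.IndexFlatL2(dim)` index, `add` the feature matrix, and call `index.search(features, K + 1)`, discarding the first column (the query point itself).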