CDIMC-net: Cognitive Deep Incomplete Multi-view Clustering Network

Authors: Jie Wen, Zheng Zhang, Yong Xu, Bob Zhang, Lunke Fei, Guo-Sen Xie

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on several incomplete datasets show that CDIMC-net outperforms the state-of-the-art incomplete multi-view clustering methods."
Researcher Affiliation | Academia | 1) Bio-Computing Research Center, Harbin Institute of Technology, Shenzhen, Shenzhen, China; 2) Shenzhen Key Laboratory of Visual Object Detection and Recognition, Shenzhen, China; 3) Pengcheng Laboratory, Shenzhen, China; 4) Department of Computer and Information Science, University of Macau, Taipa, Macau, China; 5) School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China; 6) Inception Institute of Artificial Intelligence, Abu Dhabi, UAE
Pseudocode | Yes | Algorithm 1: Fine-tuning and clustering of CDIMC-net.
Open Source Code | No | The paper does not provide any explicit statement about releasing code for the method, nor a link to a code repository.
Open Datasets | Yes | Three databases listed in Table 1 are adopted: 1) Handwritten [Asuncion and Newman, 2007]; 2) the Berkeley Drosophila Genome Project gene expression pattern database (BDGP) [Cai et al., 2012]; 3) MNIST [LeCun, 1998].
Dataset Splits | No | Incomplete-data construction: for data with more than two views, p% (p ∈ {10, 30, 50, 70}) of instances are randomly removed from every view, under the condition that every sample retains at least one view. For the MNIST database, p% (p ∈ {10, 30, 50, 70}) of instances are randomly selected as paired samples whose views are complete, and the remaining samples are treated as single-view samples, half keeping only the first view and the other half only the second view. While this describes how incomplete data instances are generated, it does not describe a separate validation split held out from the overall dataset for hyperparameter tuning, etc. (a sketch of this construction follows the table).
Hardware Specification | No | CDIMC-net is implemented on PyTorch and Ubuntu Linux 16.04. No specific hardware details (GPU/CPU models, memory) are mentioned.
Software Dependencies | No | CDIMC-net is implemented on PyTorch and Ubuntu Linux 16.04. While software is mentioned, specific version numbers (e.g., of PyTorch) are not provided.
Experiment Setup | Yes | For CDIMC-net, the encoder and decoder networks are stacked from four fully connected layers with sizes [0.8mv, 0.8mv, 1500, k] and [k, 1500, 0.8mv, 0.8mv], respectively. The activation function is ReLU, and the optimizer is SGD for the pre-training network and Adam for the fine-tuning network. Algorithm 1 additionally lists "parameter α; maximum iterations: T; maximum iterations for inner loop: Maxiter; batch size: bs; stopping threshold: ξ" (an architecture sketch appears after the table).
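
To make the "Dataset Splits" row concrete, the following NumPy sketch mocks up the described incomplete-data construction. It is a minimal illustration under our own reading of the quoted description; the function names, the per-view greedy removal order, and the seeds are assumptions, not details from the paper.

```python
# Hypothetical sketch of the incomplete-data construction quoted above.
# Names and the per-view removal order are assumptions, not taken from the paper.
import numpy as np

def build_missing_mask(n_samples, n_views, p, seed=None):
    """Multi-view case (>2 views): remove p% of instances from every view,
    keeping at least one available view per sample. True = view present."""
    rng = np.random.default_rng(seed)
    mask = np.ones((n_samples, n_views), dtype=bool)
    n_remove = int(round(n_samples * p / 100.0))
    for v in range(n_views):
        # Only samples that would still keep another view may be removed here.
        candidates = np.where(mask.sum(axis=1) > 1)[0]
        drop = rng.choice(candidates, size=min(n_remove, candidates.size), replace=False)
        mask[drop, v] = False
    return mask

def build_two_view_mask(n_samples, p, seed=None):
    """MNIST-style two-view case: p% paired samples keep both views; the rest
    are single-view, half with only the first view and half with only the second."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_samples, 2), dtype=bool)
    order = rng.permutation(n_samples)
    n_paired = int(round(n_samples * p / 100.0))
    paired, single = order[:n_paired], order[n_paired:]
    mask[paired] = True
    half = single.size // 2
    mask[single[:half], 0] = True
    mask[single[half:], 1] = True
    return mask

# Example: 30% missing per view on a 6-view dataset; 10% paired samples on a 2-view set.
print(build_missing_mask(2000, 6, p=30, seed=0).mean(axis=0))   # fraction of samples keeping each view
print(build_two_view_mask(10000, p=10, seed=0).sum(axis=0))     # samples available per view
```
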
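Similarly, the "Experiment Setup" row maps onto a small per-view autoencoder. The PyTorch sketch below follows the reported layer sizes ([0.8mv, 0.8mv, 1500, k] for the encoder and its mirror for the decoder), ReLU activations, SGD for pre-training and Adam for fine-tuning. The class name, the learning rates and momentum, the illustrative input sizes, and the decoder's final projection back to mv are assumptions; the CDIMC-net clustering objective and the rest of the fine-tuning procedure in Algorithm 1 are not reproduced.

```python
# Minimal PyTorch sketch of one per-view autoencoder following the reported
# sizes [0.8mv, 0.8mv, 1500, k] / [k, 1500, 0.8mv, 0.8mv]. The final decoder
# layer back to mv, the learning rates and the momentum value are assumptions;
# the CDIMC-net clustering losses from Algorithm 1 are omitted.
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    def __init__(self, mv: int, k: int):
        super().__init__()
        h = int(0.8 * mv)                       # 0.8 * view input dimension
        self.encoder = nn.Sequential(
            nn.Linear(mv, h), nn.ReLU(),
            nn.Linear(h, h), nn.ReLU(),
            nn.Linear(h, 1500), nn.ReLU(),
            nn.Linear(1500, k),                 # k-dimensional embedding (k = number of clusters)
        )
        self.decoder = nn.Sequential(
            nn.Linear(k, 1500), nn.ReLU(),
            nn.Linear(1500, h), nn.ReLU(),
            nn.Linear(h, h), nn.ReLU(),
            nn.Linear(h, mv),                   # reconstruct the original view
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Optimizers as reported: SGD for pre-training, Adam for fine-tuning.
model = ViewAutoencoder(mv=240, k=10)           # illustrative sizes, not values from the paper
pretrain_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
finetune_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One pre-training step on a dummy mini-batch (reconstruction loss only).
x = torch.randn(32, 240)
pretrain_opt.zero_grad()
z, x_hat = model(x)
nn.functional.mse_loss(x_hat, x).backward()
pretrain_opt.step()
```
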