Deep Variational Incomplete Multi-View Clustering: Exploring Shared Clustering Structures

Authors: Gehui Xu, Jie Wen, Chengliang Liu, Bing Hu, Yicheng Liu, Lunke Fei, Wei Wang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four datasets show that our method achieves competitive clustering performance compared with state-of-the-art methods. (Experiments > Experimental Settings > Datasets.) Four real-world datasets are used in our experiments...
Researcher Affiliation | Academia | 1. Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China; 2. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China
Pseudocode | Yes | Algorithm 1: Optimization of the proposed method
Open Source Code | Yes | The source codes of our method based on PyTorch and MindSpore are released at https://sites.google.com/view/jerry-wen-hit/publications.
Open Datasets | Yes | Four real-world datasets are used in our experiments, namely Caltech7-5V (Fei-Fei, Fergus, and Perona 2007; Li et al. 2015), Scene-15 (Fei-Fei and Perona 2005), Multi-Fashion (Xiao, Rasul, and Vollgraf 2017), and Noisy MNIST (Wang et al. 2015; Basu et al. 2017).
Dataset Splits | No | The paper describes how the incomplete data are constructed and mentions training for 200 epochs, but it does not specify explicit train/validation/test splits.
Hardware Specification | Yes | We implement the experiments on Linux with an NVIDIA 4090 GPU.
Software Dependencies | No | The paper mentions 'PyTorch', 'MindSpore', and the 'Adam optimizer' but does not specify their version numbers.
Experiment Setup | Yes | Specifically, for each view, we adopt a fully connected network with layer dimensions Dv-500-500-2000-10 (10-2000-500-500-Dv) as the encoder (decoder)... The learning rate for the Adam optimizer is set to 0.0005 for the encoder-decoder parameters and 0.05 for the latent MoG prior parameters, both with a decay rate of 0.9 every 10 epochs. The training batch size is set to 512 for Noisy MNIST and to 256 for the other three datasets. The regularization parameter is set to 5 for Caltech7-5V, 10 for Noisy MNIST and Multi-Fashion, and 20 for Scene-15.
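For concreteness, the configuration quoted in the Experiment Setup row can be sketched in PyTorch as below. This is a minimal, hypothetical sketch, not the authors' released code: the per-view feature dimensions, the MoG prior parameterization, and all class and variable names are assumptions for illustration. Only the layer widths (Dv-500-500-2000-10 and its mirror), the two learning rates, and the 0.9 decay every 10 epochs come from the quoted setup.

```python
# Hypothetical sketch of the per-view autoencoder and optimizer setup described
# in the Experiment Setup row. Names and dimensions are illustrative only.
import torch
import torch.nn as nn


class ViewAutoencoder(nn.Module):
    """Fully connected encoder (Dv-500-500-2000-10) with a mirrored decoder."""

    def __init__(self, input_dim: int, latent_dim: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, 2000), nn.ReLU(),
            nn.Linear(2000, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 2000), nn.ReLU(),
            nn.Linear(2000, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


# One autoencoder per view; D_v is each view's feature dimensionality
# (placeholder values here, not taken from the paper).
view_dims = [1984, 512, 928]
autoencoders = nn.ModuleList(ViewAutoencoder(d) for d in view_dims)

# MoG prior parameters (means, log-variances, mixing logits) are assumed to be
# plain learnable tensors; the paper's actual parameterization may differ.
num_clusters, latent_dim = 7, 10
mog_params = nn.ParameterList([
    nn.Parameter(torch.randn(num_clusters, latent_dim)),   # component means
    nn.Parameter(torch.zeros(num_clusters, latent_dim)),   # component log-variances
    nn.Parameter(torch.zeros(num_clusters)),                # mixing logits
])

# Two learning rates as quoted: 5e-4 for encoder/decoder weights, 5e-2 for the
# MoG prior parameters, each decayed by a factor of 0.9 every 10 epochs.
optimizer = torch.optim.Adam([
    {"params": autoencoders.parameters(), "lr": 5e-4},
    {"params": mog_params.parameters(), "lr": 5e-2},
])
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)
```

Using two parameter groups in a single Adam optimizer is one straightforward way to realize the quoted dual learning rates; calling `scheduler.step()` once per epoch would reproduce the stated decay schedule.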