Adversarial Incomplete Multi-view Clustering

Authors: Cai Xu, Ziyu Guan, Wei Zhao, Hongchang Wu, Yunfei Niu, Beilei Ling

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on three datasets show that AIMC performs well and outperforms baseline methods. "We evaluate the clustering performance of AIMC on three datasets. Important statistics are summarized in Table 1 and a brief introduction of the datasets is presented below."
Researcher Affiliation | Academia | Cai Xu, Ziyu Guan, Wei Zhao, Hongchang Wu, Yunfei Niu and Beilei Ling, State Key Lab of ISN, School of Computer Science and Technology, Xidian University. {cxu_3@stu., zyguan@, ywzhao@mail., hcwu@stu., yfniu@stu., blling@stu.}xidian.edu.cn
Pseudocode | No | The paper describes the implementation steps in Section 3.4 but does not include a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | "We evaluate the clustering performance of AIMC on three datasets. Important statistics are summarized in Table 1 and a brief introduction of the datasets is presented below." Reuters [Amini et al., 2009] consists of 111740 documents written in 5 languages of 6 categories, represented as TF-IDF vectors. BDGP [Cai et al., 2012] contains 2500 instances about drosophila embryos of 5 categories. Youtube [Omid et al., 2013] contains 92457 instances from 31 categories, each described by 13 feature types.
Dataset Splits | No | The paper describes how incomplete data is generated by randomly removing views from instances, but it does not specify explicit training, validation, and test dataset splits with percentages or sample counts. It states: "As in [Hu and Chen, 2018], we randomly select εN instances as incomplete data and randomly remove some views from each of them."
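The incomplete-data protocol quoted above can be sketched as follows. This is a minimal illustration only: the interpretation of the missing fraction and the rule of keeping at least one view observed per incomplete instance are assumptions inferred from the quoted description, not the paper's exact procedure (which follows [Hu and Chen, 2018]).

```python
import random

def make_incomplete(n_instances, n_views, missing_rate, seed=0):
    """Randomly mark views as missing for a fraction of instances.

    Returns a presence mask: mask[i][v] is True if view v of
    instance i is observed. Illustrative sketch of the protocol
    described in the paper, not its exact implementation.
    """
    rng = random.Random(seed)
    mask = [[True] * n_views for _ in range(n_instances)]
    # Select a fraction `missing_rate` of instances to be incomplete.
    incomplete = rng.sample(range(n_instances),
                            int(missing_rate * n_instances))
    for i in incomplete:
        # Remove a random non-empty, proper subset of views, so the
        # instance is incomplete but not entirely absent (assumption).
        n_remove = rng.randint(1, n_views - 1)
        for v in rng.sample(range(n_views), n_remove):
            mask[i][v] = False
    return mask

mask = make_incomplete(n_instances=100, n_views=3, missing_rate=0.2)
```

With a missing rate of 0.2, exactly 20% of instances end up with at least one removed view, matching the setting the paper reports using in most experiments.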
Hardware Specification | No | The paper states "Our model is implemented by PyTorch and run on Ubuntu Linux 16.04." but does not provide specific hardware details such as CPU/GPU models or memory.
Software Dependencies | No | The paper mentions "Our model is implemented by PyTorch and run on Ubuntu Linux 16.04." but does not provide specific version numbers for PyTorch or other software dependencies.
Experiment Setup | Yes | We use the adaptive moment (Adam) optimizer to train our model and set the learning rate to 0.0001. Our model is implemented by PyTorch and run on Ubuntu Linux 16.04. We set the missing rate to 0.2, and report the accuracy by varying α and β in the set {10^-4, 10^-3, 10^-2, 10^-1, 1, 10}. Based on the results, we set α = 0.001, β = 0.01 in other experiments. The optimization procedure of AIMC typically converges in around 5 epochs.
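The reported setup (Adam, learning rate 0.0001, α = 0.001, β = 0.01, roughly 5 epochs to converge) can be sketched in PyTorch as below. The tiny linear model and squared-activation loss are stand-in placeholders, not AIMC's actual architecture or objective; only the quoted hyperparameter values come from the paper.

```python
import torch

# Hyperparameters quoted from the paper; alpha and beta were chosen
# by grid search over {10^-4, 10^-3, 10^-2, 10^-1, 1, 10}.
alpha, beta = 1e-3, 1e-2

# Placeholder model: AIMC's real architecture is not specified here.
model = torch.nn.Linear(10, 5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(8, 10)
for epoch in range(5):  # the paper reports convergence in ~5 epochs
    optimizer.zero_grad()
    loss = model(x).pow(2).mean()  # placeholder loss, not AIMC's objective
    loss.backward()
    optimizer.step()
```

In a faithful reproduction, α and β would weight the corresponding terms of AIMC's composite loss before the `backward()` call.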