Adaptive Feature Imputation with Latent Graph for Deep Incomplete Multi-View Clustering
Authors: Jingyu Pu, Chenhang Cui, Xinyue Chen, Yazhou Ren, Xiaorong Pu, Zhifeng Hao, Philip S. Yu, Lifang He
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple real-world datasets demonstrate the effectiveness of our method over existing approaches. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; (2) Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, China; (3) College of Science, Shantou University, Shantou, China; (4) Department of Computer Science, University of Illinois Chicago, Chicago, IL, USA; (5) Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA |
| Pseudocode | Yes | Algorithm 1 summarizes the optimization procedure of AGDIMC. |
| Open Source Code | No | The paper mentions using source codes for 'comparing methods' but does not provide any statement or link for the open-source code of their proposed method (AGDIMC). |
| Open Datasets | Yes | Three widely used and publicly available multi-view datasets are used in our study: BDGP (Cai et al. 2012); Handwritten Numerals (HW), represented by six kinds of features extracted from its binary images; and Reuters, comprising 1200 articles in 6 categories... |
| Dataset Splits | No | The paper mentions evaluating performance on different 'missing rates' for data incompleteness but does not specify the train/validation/test dataset splits with exact percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using Adam for optimization and ReLU as an activation function but does not provide specific version numbers for any programming languages, libraries, or software dependencies (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | The trade-off coefficient α is set to 0.5 and the number of neighbors k applied in the k-NN graph algorithm is set to 10. The dimensionality of the embeddings Zv is reduced to 10. All the autoencoders are pre-trained for 2000 epochs. Following most GNN-based methods, the batch size is set to the number of instances (full batch). We adopt Adam (Kingma and Ba 2014) to optimize the deep models with a learning rate of 0.001. |
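For readers attempting reproduction, the hyperparameters quoted in the experiment-setup row can be assembled into a minimal PyTorch sketch. This is an assumption layered on the reported numbers, not the authors' implementation: the paper releases no code, so the names (`ViewAutoencoder`, `knn_graph`, `pretrain`), the hidden-layer widths, and the choice of MSE reconstruction loss are all hypothetical. Only α = 0.5, k = 10, the 10-dimensional embeddings Zv, the 2000 pretraining epochs, full-batch training, ReLU activations, and Adam with learning rate 0.001 come from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper's experiment setup.
ALPHA = 0.5           # trade-off coefficient α
K_NEIGHBORS = 10      # k in the k-NN graph
EMBED_DIM = 10        # dimensionality of the embeddings Zv
PRETRAIN_EPOCHS = 2000
LR = 1e-3             # Adam learning rate


class ViewAutoencoder(nn.Module):
    """Per-view autoencoder; the 500-unit hidden layer is an assumption,
    not a value taken from the paper."""

    def __init__(self, in_dim: int, embed_dim: int = EMBED_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 500), nn.ReLU(),
            nn.Linear(500, embed_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 500), nn.ReLU(),
            nn.Linear(500, in_dim),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return z, self.decoder(z)


def knn_graph(z: torch.Tensor, k: int = K_NEIGHBORS) -> torch.Tensor:
    """Dense k-NN adjacency over the embeddings (Euclidean distance):
    one plausible reading of 'the k-NN graph algorithm' in the paper."""
    dist = torch.cdist(z, z)
    dist.fill_diagonal_(float("inf"))          # exclude self-loops
    idx = dist.topk(k, largest=False).indices  # k nearest neighbors per row
    adj = torch.zeros(z.size(0), z.size(0))
    adj.scatter_(1, idx, 1.0)                  # mark neighbor edges
    return adj


def pretrain(model: ViewAutoencoder, x: torch.Tensor) -> None:
    """Full-batch reconstruction pretraining: the batch size equals the
    instance count, matching the paper's GNN-style setup."""
    opt = torch.optim.Adam(model.parameters(), lr=LR)
    loss_fn = nn.MSELoss()
    for _ in range(PRETRAIN_EPOCHS):
        opt.zero_grad()
        _, x_hat = model(x)
        loss_fn(x_hat, x).backward()
        opt.step()
```

Building the graph on each view's own embedding (one `knn_graph` call per `Zv`) is only one possible interpretation; the paper does not specify whether graphs are constructed per view or on a fused representation, so this detail would need to be checked against Algorithm 1.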