Simultaneous Representation Learning and Clustering for Incomplete Multi-view Data
Authors: Wenzhang Zhuge, Chenping Hou, Xinwang Liu, Hong Tao, Dongyun Yi
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct experiments to verify the proposed SRLC. First, we compare SRLC with five state-of-the-art methods on the partial multi-view clustering task. We then study the impact of the hyper-parameters and finally present results on convergence behavior. |
| Researcher Affiliation | Academia | Wenzhang Zhuge¹, Chenping Hou¹, Xinwang Liu², Hong Tao¹ and Dongyun Yi¹. ¹College of Liberal Arts and Sciences, National University of Defense Technology, Changsha, China; ²School of Computer, National University of Defense Technology, Changsha, China |
| Pseudocode | Yes | Algorithm 1 Algorithm to solve the problem (12) and Algorithm 2 Optimization of SRLC |
| Open Source Code | No | No statement or link providing access to the open-source code for the methodology described in the paper was found. |
| Open Datasets | Yes | The experiments are conducted on six real-world datasets: Microsoft Research Cambridge Volume 1 (MSRC), Caltech7, Handwritten digits (Digits), ORL, Yale, and WebKB. (Footnotes in the paper provide URLs for these datasets.) |
| Dataset Splits | No | No explicit training/validation/test dataset splits with specific percentages or counts were provided. The paper mentions creating incomplete data by randomly removing examples and repeating experiments, but not how the data was split for model training and evaluation in terms of distinct sets. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers were provided in the paper. |
| Experiment Setup | Yes | The proposed SRLC has two hyper-parameters {λ, γ}. λ is tuned over {10^1, 10^1.5, 10^2, 10^2.5, 10^3}, while γ is tuned over {1.1, 1.3, 1.5, 1.7, 1.9}. The sensitivity experiments are conducted on the MSRC and Caltech7 datasets with PER = 30%, with the hyper-parameters {λ, γ} set to {10^2, 1.1}, respectively. |
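The reported tuning procedure amounts to an exhaustive grid over the two hyper-parameters. A minimal sketch of enumerating that grid, assuming plain grid search (the paper lists only the candidate values, not the search strategy):

```python
import itertools

# Candidate values reported for SRLC's two hyper-parameters:
# lambda is swept over half-decade powers of ten, gamma over a linear range.
lambda_grid = [10 ** e for e in (1, 1.5, 2, 2.5, 3)]
gamma_grid = [1.1, 1.3, 1.5, 1.7, 1.9]

# Cartesian product of the two grids: 5 x 5 = 25 candidate settings.
grid = list(itertools.product(lambda_grid, gamma_grid))
print(len(grid))  # 25

# The paper's sensitivity study fixes (lambda, gamma) = (10^2, 1.1).
chosen = (10 ** 2, 1.1)
assert chosen in grid
```

Each of the 25 settings would then be evaluated by running SRLC and comparing clustering quality; the paper reports {10^2, 1.1} as the setting used on MSRC and Caltech7 at PER = 30%.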