Anchors Bring Ease: An Embarrassingly Simple Approach to Partial Multi-View Clustering

Authors: Jun Guo, Jiahui Ye

AAAI 2019

Reproducibility Variable: Result (LLM response)

Research Type: Experimental
  "Finally, we extensively evaluate the proposed method on five benchmark datasets. Experimental results demonstrate the superiority of APMC over state-of-the-art approaches."

Researcher Affiliation: Academia
  "Jun Guo, Jiahui Ye. Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen 518055, China"

Pseudocode: Yes
  "Algorithm 1: APMC"

Open Source Code: No
  The paper does not provide any explicit statement about releasing its source code or a link to a code repository for the described methodology.

Open Datasets: Yes
  "USPS-MNIST Dataset merges two famous handwritten datasets: USPS (Hull 1994) and MNIST (Le Cun et al. 1998)... Oxford Flowers Dataset (Flowers) (Nilsback and Zisserman 2006)... Multiple Features Handwritten Dataset (Digit) (Jain, Duin, and Mao 2000)... 3Sources Dataset (Greene and Cunningham 2009)"

Dataset Splits: No
  The paper describes how partial data are generated and notes that experiments are repeated to report average performance and standard deviation, but it does not specify explicit training, validation, or test splits (e.g., percentages or counts) or a cross-validation scheme for model evaluation.

Hardware Specification: No
  The paper does not provide any details about the hardware used to run the experiments (e.g., CPU or GPU models, or memory specifications).

Software Dependencies: No
  The paper mentions that "All results are produced by released codes" for the comparison methods, but it does not specify the software dependencies or version numbers required to run the proposed APMC method or its experiments.

Experiment Setup: Yes
  "Our proposed APMC method has only one parameter m to be fine-tuned. We set PDR from 0% to 90% as aforementioned, and explore the clustering performance of APMC by ranging m within {2, 4, ..., 14}. ... we rescale the input data into the range of [0, 1], then conduct normalization before we run all these clustering methods."
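The quoted setup says the input data are rescaled into [0, 1] and then normalized before clustering. A minimal sketch of that preprocessing is below; the paper does not state which normalization is used, so per-sample L2 normalization is an assumption here, and the function name `preprocess` is illustrative rather than from the paper.

```python
import numpy as np

def preprocess(X):
    """Rescale each feature to [0, 1], then L2-normalize each sample.

    Sketch of the preprocessing in the paper's experiment setup; the
    choice of per-sample L2 normalization is an assumption, since the
    paper only says "conduct normalization".
    """
    X = np.asarray(X, dtype=float)
    mins = X.min(axis=0)
    rng = X.max(axis=0) - mins
    rng[rng == 0] = 1.0  # constant features: avoid division by zero
    X01 = (X - mins) / rng  # min-max rescaling into [0, 1]
    norms = np.linalg.norm(X01, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # all-zero rows stay zero
    return X01 / norms
```

The only hyperparameter of APMC itself, m, would then be grid-searched over {2, 4, ..., 14} on the preprocessed data, as the quoted setup describes.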