Flexible Multi-View Representation Learning for Subspace Clustering

Authors: Ruihuang Li, Changqing Zhang, Qinghua Hu, Pengfei Zhu, Zheng Wang

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical studies on real-world datasets show that our method achieves superior clustering performance over other state-of-the-art methods.
Researcher Affiliation | Academia | Ruihuang Li (1), Changqing Zhang (1,2), Qinghua Hu (1), Pengfei Zhu (1) and Zheng Wang (1); (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications, Nanjing, China
Pseudocode | Yes | Algorithm 1: Optimization of our method (a generic, hedged sketch of the subspace-clustering back-end appears after the table)
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for their proposed method (FMR) is openly available.
Open Datasets | Yes | We conduct experiments on 7 datasets from different applications: images, text, and community networks. Yale1 consists of 165 grayscale images of 15 individuals, from which 3 types of features are extracted. MSRC-v1 [Xu et al., 2016] consists of 210 images of 7 object classes and includes 6 types of features. Notting-Hill [Wu et al., 2013] is a video face dataset consisting of 550 images of 5 main cast members, described from 3 different views. Reuters [Amini et al., 2009] is a multilingual dataset of 2,000 newswire articles in 6 classes written in 5 languages (views). BBCSport2 is composed of news articles in 5 topical areas from the BBC website and is associated with 2 views. Football3 contains 248 English Premier League football players and clubs active on Twitter, described from 9 different views and associated with 20 clubs. ANIMAL [Lampert et al., 2014] contains 30,475 images of 50 animal classes and includes 2 types of features; 10,158 samples are selected at a fixed interval to generate a subset (see the subsampling sketch after the table).
Dataset Splits | No | The paper describes the datasets used and the comparison of methods, but does not explicitly state the train/validation/test dataset splits needed to reproduce the experiment, nor does it refer to predefined splits with citations for these specific datasets.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, or other computer specifications used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages, libraries, or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | In our experiments, we set the dimensionality of the latent representation to 200 and tune the hyperparameters λ1 and λ2 from {10^-5, 10^-4, ..., 10^-1, 10^0} and {10^-10, 10^-9, ..., 10^-3, 10^-2}, respectively (a hedged grid-construction sketch follows the table).
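
The paper's Algorithm 1 is not reproduced here. For orientation only, below is a minimal sketch of the generic subspace-clustering back-end that methods in this family typically share: compute a self-expressive coefficient matrix from a learned representation, then run spectral clustering on the induced affinity. The ridge-regularised self-expression surrogate, the regularisation value, and the toy data are assumptions, not the authors' objective or solver.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def self_expressive_coefficients(H, lam=1e-2):
    """Ridge-regularised self-expression H ~= H C; a common surrogate,
    not the objective actually optimised in the paper."""
    n = H.shape[1]                                # H: d x n latent representation
    G = H.T @ H                                   # n x n Gram matrix
    C = np.linalg.solve(G + lam * np.eye(n), G)   # closed form (G + lam I)^-1 G
    np.fill_diagonal(C, 0.0)                      # heuristic: drop trivial self-links
    return C

def cluster_from_coefficients(C, n_clusters):
    """Symmetric non-negative affinity from |C|, then spectral clustering."""
    A = 0.5 * (np.abs(C) + np.abs(C).T)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              random_state=0).fit_predict(A)

# Toy usage: a 200-dim latent representation of 165 samples, 15 clusters
# (matches the Yale setting only in size; the data here is random).
H = np.random.randn(200, 165)
labels = cluster_from_coefficients(self_expressive_coefficients(H), n_clusters=15)
```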
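The ANIMAL subset is described only as 10,158 samples selected at a fixed interval. Below is a minimal sketch of one way to perform such fixed-interval subsampling, assuming an interval of 3 and a starting offset of 0 (neither is stated in the paper); the toy view dimensions are placeholders.

```python
import numpy as np

# Assumed subsampling rule: every 3rd sample starting at index 0
# (30,475 / 3 is roughly 10,158); the paper does not state interval or offset.
n_total, step, n_keep = 30475, 3, 10158
indices = np.arange(0, n_total, step)[:n_keep]

def subsample_views(views, idx):
    """Apply one shared index set to every view's (n_samples x d_v) feature matrix."""
    return [X[idx] for X in views]

# Toy usage: two small random matrices stand in for ANIMAL's two feature types.
views = [np.random.randn(n_total, 64), np.random.randn(n_total, 32)]
views_sub = subsample_views(views, indices)   # each is now 10,158 x d_v
```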
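The hyperparameter grid from the Experiment Setup row can be written down directly; how the best (λ1, λ2) pair is selected is not specified, so the loop below is only a sketch, and run_fmr is a hypothetical stand-in for training the model with a given setting.

```python
import itertools
import numpy as np

latent_dim = 200
lambda1_grid = 10.0 ** np.arange(-5, 1)    # 1e-5, 1e-4, ..., 1e-1, 1e0
lambda2_grid = 10.0 ** np.arange(-10, -1)  # 1e-10, 1e-9, ..., 1e-3, 1e-2

settings = list(itertools.product(lambda1_grid, lambda2_grid))  # 6 x 9 = 54 pairs
for lam1, lam2 in settings:
    # run_fmr is hypothetical: it would train the model with this (lam1, lam2)
    # at latent_dim = 200 and return a clustering score such as NMI or accuracy.
    # score = run_fmr(views, latent_dim=latent_dim, lam1=lam1, lam2=lam2)
    pass
```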