LRSC: Learning Representations for Subspace Clustering

Authors: Changsheng Li, Chen Yang, Bo Liu, Ye Yuan, Guoren Wang (pp. 8340-8348)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are performed on four publicly available datasets, and experimental results clearly demonstrate the efficacy of our method, compared to state-of-the-art methods.
Researcher Affiliation | Collaboration | Changsheng Li1*, Chen Yang2, Bo Liu3, Ye Yuan1, Guoren Wang1; 1School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; 2School of Computer Science and Engineering, University of Electronic Science and Technology of China; 3JD Digits
Pseudocode | Yes | Algorithm 1: Leveraging External Data. Algorithm 2: Subspace Clustering on Target Tasks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository for the described methodology.
Open Datasets | Yes | We evaluate the performance of our proposed approaches on four publicly available datasets: Fashion-MNIST dataset... notMNIST... BBC dataset... 20 Newsgroups dataset... We use the miniImageNet (Vinyals et al. 2016) dataset as the meta data for two image datasets, and use the 20 Newsgroups dataset as the meta data for the BBC dataset.
Dataset Splits | No | The paper mentions selecting a certain number of samples from each dataset (e.g., "We randomly select 1,000 images from each class") and refers to a "validation loss" in Algorithm 1, which relates to the meta-learning training process. However, it does not report the specific training/validation/test splits (e.g., percentages or counts) for the main subspace clustering task on which the performance metrics are reported, which is necessary to reproduce the data partitioning (see the sketch after this table for one way the per-class selection step could be approximated).
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments; it only describes the model architecture.
Software Dependencies | No | The paper mentions using the rectified linear unit (ReLU) as an activation function but does not list specific software libraries, frameworks, or version numbers (e.g., PyTorch 1.9, TensorFlow 2.x) that would be needed to reproduce the experiments.
Experiment Setup | No | The paper states: "For hyper-parameter setting, we report the detailed setting in the supplementary material." This indicates that the hyper-parameter details are not available in the main text of the paper, which this reproducibility criterion requires.
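
As noted in the Dataset Splits row, the paper states that 1,000 images are randomly selected from each class but does not report how the resulting subsets are partitioned. The sketch below is a minimal, hypothetical illustration of how such a per-class selection could be reproduced on Fashion-MNIST; the random seed, the use of torchvision, and the choice of the training split are assumptions, not details taken from the paper.

import numpy as np
from torchvision import datasets

# Assumed seed: the paper does not report one, so results will not match exactly.
rng = np.random.default_rng(0)

# Load Fashion-MNIST (training split assumed; the paper does not say which split is used).
fmnist = datasets.FashionMNIST(root="./data", train=True, download=True)
labels = np.array(fmnist.targets)

selected = []
for cls in range(10):                           # Fashion-MNIST has 10 classes
    cls_idx = np.flatnonzero(labels == cls)     # indices belonging to this class
    # Draw 1,000 images per class without replacement, as described in the paper.
    selected.extend(rng.choice(cls_idx, size=1000, replace=False))

subset_indices = np.array(selected)             # 10,000 sample indices in total

Any further split of these 10,000 samples into training/validation/test portions would be an additional assumption, which is exactly the missing detail flagged in the table.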