Large-scale Subspace Clustering by Fast Regression Coding
Authors: Jun Li, Handong Zhao, Zhiqiang Tao, Yun Fu
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results verified that our method can be successfully applied into the LSSC problem. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, 02115, USA. 2College of Computer and Information Science, Northeastern University, Boston, MA, 02115, USA. |
| Pseudocode | Yes | Algorithm 1 FRC via the gradient descent algorithm (a hedged training-loop sketch appears below the table) |
| Open Source Code | No | The paper states, in footnote 8, 'We use the codes from https://github.com/kyunghyuncho/deepmat', which refers to a third-party implementation (the DAE baseline). It provides no link to, or statement about, the availability of the authors' own source code for FRC or RCC. |
| Open Datasets | Yes | We evaluate our approach on four databases: Extended-Yale B, AR, MNIST, and JCNORB in Table 2. ... http://vision.ucsd.edu/~leekc/ExtYaleDatabase/ExtYaleB.html, http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html, http://yann.lecun.com/exdb/mnist/, http://www.cs.nyu.edu/~ylclab/data/norb-v1.0/ |
| Dataset Splits | No | For large-scale datasets, 2000 and 2400 samples are respectively selected as the training data in MNIST, and JCNORB. The paper describes how training samples are selected and discusses test data, but it does not specify a validation split or give the counts/percentages needed to reproduce all splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions general algorithms and techniques (e.g., 'ridge regression', 'gradient descent algorithm', 'neural networks') and refers to 'matlab implementation' in a citation, but does not provide specific version numbers for any software dependencies or libraries used in their implementation. |
| Experiment Setup | Yes | The learning rate ε, as a typical parameter in neural networks, is set to 0.0001 in all experiments. ... We set α = 25 and 250 respectively for MNIST and JCNORB to obtain the best results. ... In all experiments, the number of hidden units h is set to 1000, the parameter γ is set to 0.0001, tanh is voted as the activation function, and the number of training epochs Tm is less than 10. |
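The Pseudocode and Experiment Setup rows above quote Algorithm 1 (FRC trained by gradient descent) together with the reported hyperparameters (learning rate ε = 0.0001, h = 1000 hidden units, γ = 0.0001, tanh activation, at most 10 training epochs). The sketch below only illustrates how those quoted settings could be wired into a plain gradient-descent loop for a one-hidden-layer tanh regression model; the objective, the data handling, and all variable names are our assumptions, not the authors' released implementation of FRC.

```python
import numpy as np

# Hyperparameters quoted in the paper's experiment setup (values from the table above).
LEARNING_RATE = 1e-4   # epsilon
HIDDEN_UNITS = 1000    # h
GAMMA = 1e-4           # regularization parameter gamma (assumed to act as weight decay here)
MAX_EPOCHS = 10        # T_m is reported to be less than 10

def train_tanh_encoder(X, rng=np.random.default_rng(0)):
    """Hedged sketch: fit x -> W2 * tanh(W1 x + b1) + b2 by full-batch gradient
    descent on squared reconstruction error plus an L2 penalty. This is an
    assumed stand-in for Algorithm 1 (FRC), not the authors' code."""
    n, d = X.shape
    W1 = rng.normal(scale=0.01, size=(d, HIDDEN_UNITS))
    b1 = np.zeros(HIDDEN_UNITS)
    W2 = rng.normal(scale=0.01, size=(HIDDEN_UNITS, d))
    b2 = np.zeros(d)

    for epoch in range(MAX_EPOCHS):
        H = np.tanh(X @ W1 + b1)          # hidden codes, shape (n, h)
        X_hat = H @ W2 + b2               # reconstruction
        R = X_hat - X                     # residual

        # Gradients of 0.5/n * ||X_hat - X||^2 + 0.5 * GAMMA * (||W1||^2 + ||W2||^2)
        gW2 = H.T @ R / n + GAMMA * W2
        gb2 = R.mean(axis=0)
        dH = (R @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / n + GAMMA * W1
        gb1 = dH.mean(axis=0)

        for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            p -= LEARNING_RATE * g        # in-place gradient-descent update

        loss = 0.5 * np.sum(R ** 2) / n
        print(f"epoch {epoch + 1}: reconstruction loss {loss:.6f}")

    # The hidden activations serve as the learned codes.
    return np.tanh(X @ W1 + b1)
```

For example, calling `train_tanh_encoder` on a subsampled MNIST matrix of shape (2000, 784) returns 1000-dimensional codes that could then be passed to a generic clustering step such as k-means; this is only a stand-in for the paper's regression-coding-based clustering (RCC), whose exact coding objective is not reproduced here.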