Local Centroids Structured Non-Negative Matrix Factorization
Authors: Hongchang Gao, Feiping Nie, Heng Huang
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both toy datasets and real-world datasets have verified the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | 1Department of Computer Science and Engineering, University of Texas at Arlington, Texas, USA 2School of Computer Science, OPTIMAL, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China |
| Pseudocode | Yes | Algorithm 1 Algorithm to solve Eq. (15) and Algorithm 2 Algorithm to solve Eq. (7) |
| Open Source Code | No | The paper does not provide any explicit statement or link for the availability of its source code. |
| Open Datasets | Yes | ORL (Samaria and Harter 1994) is a face recognition benchmark dataset. UMIST (Graham and Allinson 1998) is also a face recognition benchmark dataset. PIE (Sim, Baker, and Bsat 2002) is another face recognition benchmark dataset. COIL20 (Nene et al. 1996) is an object recognition benchmark dataset. |
| Dataset Splits | No | The paper mentions using benchmark datasets but does not specify explicit training, validation, or test dataset splits (e.g., percentages or counts) or the methodology for such splits for reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies (e.g., programming language versions, library names with versions) used for the experiments. |
| Experiment Setup | Yes | In our experiment, we use K-means to initialize F and G, and k is set as 10. Thus, this toy dataset is clustered into 10 groups by K-means... Additionally, the parameter s in Eq. (7) is set as 2... We run all the methods for 10 times... Here, we set the number of centroids k in our method around 80%-90% of the number of data points in each cluster, and each data point is restricted to be represented by 3-5 nearby centroids. |
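The Experiment Setup row quotes the paper's initialization procedure: F (centroids) and G (assignments) are initialized with K-means, with k = 10. A minimal sketch of that initialization step is shown below, assuming F holds the cluster centroids and G the hard cluster-membership indicators; the function name `kmeans_init` and the plain Lloyd's-iteration K-means are illustrative choices, not the authors' code.

```python
import numpy as np

def kmeans_init(X, k, n_iter=50, seed=0):
    """Hypothetical sketch: initialize NMF-style factors via K-means,
    per the paper's stated setup ("we use K-means to initialize F and G,
    and k is set as 10"). Returns F (k x d centroids) and G (n x k
    hard assignment indicator matrix)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Start from k randomly chosen data points as centroids.
    centroids = X[rng.choice(n, size=k, replace=False)].astype(float)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (squared Euclidean).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    G = np.zeros((n, k))
    G[np.arange(n), labels] = 1.0  # one-hot cluster membership
    F = centroids
    return F, G

# Toy usage: 100 non-negative points in 5 dimensions, k = 10 as in the paper.
X = np.abs(np.random.default_rng(1).normal(size=(100, 5)))
F, G = kmeans_init(X, k=10)
```

Note that the paper further restricts each data point to be represented by 3-5 nearby centroids; that structured-sparsity constraint is part of the optimization itself (Algorithms 1-2) and is not captured by this initialization sketch.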