Robust Manifold Matrix Factorization for Joint Clustering and Feature Extraction
Authors: Lefei Zhang, Qian Zhang, Bo Du, Dacheng Tao, Jane You
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results from both the clustering and feature extraction perspectives demonstrate the superior performance of the proposed method. In this section, we evaluate the performance of the proposed RMMF method on the benchmark datasets (Tables 1 and 2). We divide this section into two parts to report the experimental results of clustering and feature extraction, respectively (Tables 3 and 4). |
| Researcher Affiliation | Collaboration | Lefei Zhang, School of Computer, Wuhan University, Wuhan, China, zhanglefei@whu.edu.cn; Qian Zhang, Alibaba Group, Beijing, China, qianzhang.zq@alibaba-inc.com; Bo Du, School of Computer, Wuhan University, Wuhan, China, remoteking@whu.edu.cn; Dacheng Tao, Centre for Artificial Intelligence, University of Technology Sydney, Australia, dacheng.tao@uts.edu.au; Jane You, Department of Computing, The Hong Kong Polytechnic University, Hong Kong, csyjia@comp.polyu.edu.hk |
| Pseudocode | Yes | The objective function in eq. (9) above is not convex in the four variables jointly, but it is convex if we update the four variables alternately. Thus, we use the Augmented Lagrangian Method (ALM) to optimize the objective function. By introducing four auxiliary variables E1 = X - PY, E2 = Y - VU^T, Z1 = Y, and Z2 = U, the objective function can be rewritten into the following equivalent problem: ... which can be solved by the following ALM problem: ... Since the objective function above carries eight variables and additional multipliers, we adopt an alternating optimization method to reduce it to a few manageable subproblems with closed-form solutions, each of which minimizes the objective function with respect to one variable while fixing the others. The detailed information is given in the Appendix. (The appendix then details steps such as 'Update E1', 'Update E2', etc., with mathematical formulas.) A sketch of this alternating ALM loop appears after the table. |
| Open Source Code | No | No explicit statement or link providing access to the source code for the described methodology. |
| Open Datasets | Yes | In this section, we evaluate the performance of the proposed RMMF method on the benchmark datasets (Tables 1 and 2). |
| Dataset Splits | No | No explicit mention of a 'validation' dataset split or how validation was specifically performed. While parameters are tuned ('we tune the regularization parameters for all methods by a grid-search strategy'), the paper does not specify a distinct validation set used for this tuning. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) are provided for the experimental setup. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or library versions) are mentioned. |
| Experiment Setup | Yes | To fairly compare different methods, we tune the regularization parameters for all methods by a grid-search strategy over the same range 10^[-5,-4,...,5]. In addition, EUFS, SMCE, and RMMF are joint dimension reduction and clustering algorithms that require the subspace dimensionality d as an input; in the experiments, we tune this parameter using candidate values no more than l/2 for the various datasets and report the best performance. For LPP and NPE, we search the parameter k in the range [2,4,...,20], and for LPP the parameter t in the range 10^[-3,-2,...,3]. A lazy classifier, i.e., the k-nearest-neighbor with k=1, is used for classification. (A sketch of this tuning protocol follows the ALM sketch below.) |
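For concreteness, below is a minimal Python sketch of the alternating ALM update pattern quoted in the Pseudocode row. It is an illustration under assumptions, not the paper's method: it keeps only the two auxiliary splits E1 = X - PY and E2 = Y - VU^T (dropping the Z1, Z2 auxiliaries and any manifold regularization), and the per-variable solvers are simplified least-squares and proximal steps rather than the closed forms derived in the paper's appendix. The function name `alm_sketch`, the simplified objective min ||E1||_{2,1} + alpha*||E2||_F^2, and all default parameters are ours.

```python
import numpy as np

def l21_shrink(M, tau):
    """Column-wise proximal operator of tau * ||.||_{2,1}:
    shrinks each column's l2 norm toward zero."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale

def alm_sketch(X, d, c, alpha=1.0, mu=1.0, rho=1.5, mu_max=1e6, n_iter=100):
    """Alternating ALM loop over (E1, E2, P, Y, V, U) with multipliers
    (L1, L2), mirroring the update pattern the paper describes. The
    sub-steps here are simplified placeholders, not the paper's exact
    closed forms (those are in its appendix)."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    P = rng.standard_normal((m, d))
    Y = rng.standard_normal((d, n))
    V = rng.standard_normal((d, c))
    U = rng.standard_normal((n, c))
    L1 = np.zeros((m, n))  # multiplier for the constraint E1 = X - P @ Y
    L2 = np.zeros((d, n))  # multiplier for the constraint E2 = Y - V @ U.T
    for _ in range(n_iter):
        # Update E1: prox of (1/mu)*||.||_{2,1} (robust l2,1 fitting term).
        E1 = l21_shrink(X - P @ Y + L1 / mu, 1.0 / mu)
        # Update E2: closed form of alpha*||E2||_F^2 + (mu/2)||Y - VU^T - E2 + L2/mu||_F^2.
        E2 = mu * (Y - V @ U.T + L2 / mu) / (2.0 * alpha + mu)
        # Update P: least squares on P @ Y ~ X - E1 + L1/mu.
        A = X - E1 + L1 / mu
        P = np.linalg.lstsq(Y.T, A.T, rcond=None)[0].T
        # Update Y: normal equations from the two quadratic penalty terms.
        B = P.T @ A + (V @ U.T + E2 - L2 / mu)
        Y = np.linalg.solve(P.T @ P + np.eye(d), B)
        # Update V, U: alternating least squares on V @ U.T ~ Y - E2 + L2/mu.
        C = Y - E2 + L2 / mu
        V = np.linalg.lstsq(U, C.T, rcond=None)[0].T
        U = np.linalg.lstsq(V, C, rcond=None)[0].T
        # Multiplier and penalty updates, standard in ALM.
        L1 += mu * (X - P @ Y - E1)
        L2 += mu * (Y - V @ U.T - E2)
        mu = min(rho * mu, mu_max)
    return P, Y, V, U
```

The Gauss-Seidel ordering (each subproblem uses the latest values of the other blocks) and the growing penalty mu are the standard ALM ingredients that the 'Update E1', 'Update E2', etc. steps in the paper's appendix rely on.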
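Likewise, the Experiment Setup row can be read as the following protocol sketch. Everything concrete here is an assumption: `run_rmmf` is a hypothetical stand-in (no RMMF code is released, per the Open Source Code row), and the 50/50 split is invented, since, as the Dataset Splits row notes, the paper does not specify how the reported "best performance" was validated.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def run_rmmf(X, d, reg):
    """Hypothetical stand-in for the (unreleased) RMMF solver: a random
    linear projection to d dimensions. `reg` marks where the tuned
    regularization parameter would enter the real objective."""
    rng = np.random.default_rng(0)
    P = rng.standard_normal((X.shape[0], d))
    return P.T @ X  # low-dimensional features Y, shape (d, n)

def tune_and_evaluate(X, labels):
    # Regularization grid 10^[-5,-4,...,5], as stated in the paper.
    reg_grid = [10.0 ** p for p in range(-5, 6)]
    # Subspace dimensionality d swept over candidate values <= l/2.
    l = X.shape[0]
    d_grid = range(2, l // 2 + 1)
    best_params, best_acc = None, -np.inf
    for reg, d in product(reg_grid, d_grid):
        Y = run_rmmf(X, d, reg)
        # Assumed 50/50 split; the paper does not specify one.
        Ytr, Yte, ytr, yte = train_test_split(
            Y.T, labels, test_size=0.5, random_state=0, stratify=labels)
        # The paper's "lazy" classifier: 1-nearest-neighbor.
        acc = KNeighborsClassifier(n_neighbors=1).fit(Ytr, ytr).score(Yte, yte)
        if acc > best_acc:  # the paper reports the best result over the grid
            best_params, best_acc = (reg, d), acc
    return best_params, best_acc
```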