Learning Feature Sparse Principal Subspace

Authors: Lai Tian, Feiping Nie, Rong Wang, Xuelong Li

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show the promising performance and efficiency of the new algorithms compared with the state-of-the-arts on both synthetic and real-world datasets.
Researcher Affiliation | Academia | Lai Tian, School of Computer Science & Center for OPTIMAL, Northwestern Polytechnical University, Xi'an 710072, China (tianlai.cs@gmail.com); Feiping Nie, School of Computer Science & Center for OPTIMAL, Northwestern Polytechnical University, Xi'an 710072, China (feipingnie@gmail.com); Rong Wang, School of Cybersecurity & Center for OPTIMAL, Northwestern Polytechnical University, Xi'an 710072, China (wangrong07@tsinghua.org.cn); Xuelong Li, School of Computer Science & Center for OPTIMAL, Northwestern Polytechnical University, Xi'an 710072, China (li@nwpu.edu.cn)
Pseudocode | Yes | Algorithm 1 (GO) for the case rank(A) ≤ m
Open Source Code | No | No explicit statement or link providing concrete access to source code for the methodology described in this paper was found.
Open Datasets | Yes | We consider real-world datasets, including Lymphoma (biology) [48], NUS-WIDE (web images) [10], and Numerical Numbers (handwritten numbers) [3].
Dataset Splits | No | The paper does not provide the specific train/validation/test splits (e.g., percentages or sample counts) needed for reproduction; it mentions only repeated runs and fixed parameters for the synthetic data.
Hardware Specification | Yes | All experiments in this paper were run on MATLAB 2018a with a 2.3 GHz Quad-Core Intel Core i5 CPU and 16 GB memory (MacBook Pro).
Software Dependencies | Yes | All experiments in this paper were run on MATLAB 2018a with a 2.3 GHz Quad-Core Intel Core i5 CPU and 16 GB memory (MacBook Pro).
Experiment Setup | Yes | For the synthetic data, we fix m = 3, k = 7, and d = 20. We cannot afford a large-scale setting, since the brute-force search space grows exponentially. We consider three initialization methods: Random Subspace; Convex Relaxation, proposed in [41] and used in [43]; and Low-Rank Approx. with GO(A_m, m, k, d). In our experiments, we always use A_ε with ε = 0.1 to keep safe (Remark 5.10). (A brute-force sketch for this setting follows the table.)
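To make the brute-force remark in the Experiment Setup row concrete: assuming the row-sparse trace-maximization form of FSPCA, max_W tr(W^T A W) subject to W^T W = I_m with at most k nonzero rows of W, the best objective for a fixed row support S equals the sum of the m largest eigenvalues of the principal submatrix A[S, S], so enumerating all k-subsets of the d features recovers the exact global solution. The Python sketch below is not the authors' code; the function name, Gaussian data model, and sample count are illustrative assumptions.

from itertools import combinations
import numpy as np

def fspca_brute_force(A, m, k):
    """Globally solve FSPCA on a d x d PSD matrix A by exhaustive support search."""
    d = A.shape[0]
    best_val, best_support = -np.inf, None
    for support in combinations(range(d), k):  # all C(d, k) candidate feature sets
        idx = np.array(support)
        # For a fixed support, the optimal orthonormal W attains the sum of the
        # m largest eigenvalues of the k x k principal submatrix A[S, S].
        eigvals = np.linalg.eigvalsh(A[np.ix_(idx, idx)])
        val = eigvals[-m:].sum()
        if val > best_val:
            best_val, best_support = val, idx
    return best_val, best_support

# Usage at the paper's synthetic sizes (the data model here is assumed):
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))  # n = 100 samples, d = 20 features
A = X.T @ X                         # covariance-like PSD input
val, support = fspca_brute_force(A, m=3, k=7)
print(val, sorted(support.tolist()))

With d = 20 and k = 7 the loop visits C(20, 7) = 77,520 supports; already at d = 100 it would exceed 1.6 * 10^10, which is why the paper confines the brute-force comparison to this small synthetic setting.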