Rate-Optimal Subspace Estimation on Random Graphs
Authors: Zhixin Zhou, Fan Zhou, Ping Li, Cun-Hui Zhang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on the algorithms appear in Section 4. Each experiment is repeated 100 times, with the data regenerated by a randomization procedure at every iteration. |
| Researcher Affiliation | Collaboration | Department of Management Sciences, City University of Hong Kong; Cognitive Computing Lab, Baidu Research; Department of Statistics, Rutgers University |
| Pseudocode | Yes | Algorithm 1: Hard Singular Value Thresholding; Algorithm 2: Soft Singular Value Thresholding; Algorithm 3: Singular space estimation |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The numerical experiments in Section 4 describe a procedure to 'Randomly generate matrices' and 'Generate the adjacency matrix of the random bipartite graph with connectivity matrix M', indicating the use of synthetically generated data rather than a publicly available dataset. |
| Dataset Splits | No | The paper describes a simulation setup where data is randomly generated for each experiment iteration (e.g., 'Randomly generate matrices M1...', 'Generate the adjacency matrix...'). It does not specify train, validation, or test splits of a fixed dataset. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used in the experiments. |
| Experiment Setup | Yes | We consider the following parameters in Θ1(n1, n2, r, p): n1 = n2 = 1000, r = 3, and p ∈ {0.01, 0.03, 0.05}. In the experiments, the regularization constant c is varied from 0.2 to 1; the default constant in Algorithm 1 is 2. |
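The experiment setup above (randomly generate a rank-r connectivity matrix M, sample the adjacency matrix of a random bipartite graph from it, then apply hard singular value thresholding) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact construction of M, the rescaling step, and the threshold form `c * sqrt(max(n1, n2) * p)` inside `hard_svt` are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters from Section 4: n1 = n2 = 1000, r = 3; p ranges over
# {0.01, 0.03, 0.05} in the paper. We use one value here.
n1, n2, r = 1000, 1000, 3
p = 0.01

# Randomly generate a rank-r connectivity matrix M.
# (Assumption: the paper does not spell out this construction here;
# we take a random low-rank product rescaled so entries lie in [0, p].)
U0 = rng.random((n1, r))
V0 = rng.random((n2, r))
M = U0 @ V0.T
M *= p / M.max()

# Generate the adjacency matrix of the random bipartite graph with
# connectivity matrix M: A[i, j] ~ Bernoulli(M[i, j]), independently.
A = (rng.random((n1, n2)) < M).astype(float)

def hard_svt(A, c, p):
    """Hard singular value thresholding (in the spirit of Algorithm 1).

    Keeps only singular triplets whose singular value exceeds a
    threshold scaled by the regularization constant c.  The threshold
    form below is an assumed placeholder, not the paper's exact rule.
    """
    tau = c * np.sqrt(max(A.shape) * p)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tau
    return U[:, keep], s[keep], Vt[keep, :]

# The experiments vary c from 0.2 to 1; the default in Algorithm 1 is 2.
Uh, sh, Vh = hard_svt(A, c=2.0, p=p)
```

In the paper's setup this whole block would sit inside a loop of 100 independent repetitions, with the estimated singular subspace compared against the true one for each value of p and c.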