Rates of Convergence of Spectral Methods for Graphon Estimation
Authors: Jiaming Xu
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We improve on the previously known error rates of polynomial-time estimators by analyzing a spectral method, namely the universal singular value thresholding (USVT) algorithm. When f belongs to either Hölder or Sobolev space with smoothness index α, we show the error rates of USVT are at most (nρ)^{−2α/(2α+d)}. These error rates approach the minimax optimal error rate log(nρ)/(nρ), proved in prior work for d = 1, as α increases, i.e., as f becomes smoother. Furthermore, when f is analytic, and hence infinitely differentiable, we show the error rate of USVT is at most log^d(nρ)/(nρ). When f is a step function, which corresponds to the stochastic block model with k blocks for some k, the error rate of USVT is at most k/(nρ), which is larger than the minimax optimal error rate by at most a multiplicative factor k/log k. This coincides with the computational gap observed in community detection. A key ingredient of our analysis is to derive the eigenvalue decaying rate of the edge probability matrix using piecewise polynomial … |
| Researcher Affiliation | Academia | 1The Fuqua School of Business, Duke University, Durham, NC, USA. |
| Pseudocode | Yes | Algorithm 1 Universal Singular Value Thresholding (USVT) (Chatterjee, 2015) |
| Open Source Code | No | The paper refers to a 'full paper (Xu, 2017)' for excluded details but does not provide a link to or statement about the availability of its own source code. |
| Open Datasets | No | We provide numerical results on synthetic datasets, which corroborate our theoretical results. Additional numerical results on stochastic block models can be found in the full paper (Xu, 2017). We assume the sparsity level ρ is known and set the threshold τ = 2.01√(nρ) throughout the experiments. |
| Dataset Splits | No | The paper mentions using 'synthetic datasets' but does not provide specific details on how the data was split into training, validation, or test sets, nor does it refer to standard, pre-defined splits. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are mentioned in the paper. |
| Experiment Setup | Yes | We assume the sparsity level ρ is known and set the threshold τ = 2.01√(nρ) throughout the experiments. In the case where ρ is unknown, one can apply a cross-validation procedure to adaptively choose the sparsity level ρ, as shown in (Gao et al., 2016). We first apply USVT with input (A, τ, ρ), then output the estimator M̂, and finally calculate the mean squared error MSE(M̂). |
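The USVT procedure referenced in the Pseudocode and Experiment Setup rows above can be sketched in a few lines of NumPy. This is a minimal illustration of the standard algorithm (Chatterjee, 2015) as described in the paper, not the authors' code: the function name `usvt`, the toy two-block stochastic block model, and the clipping of entries to [0, ρ] are assumptions made for the example.

```python
import numpy as np

def usvt(A, tau, rho):
    """Estimate the edge probability matrix M from adjacency matrix A.

    A   : symmetric n x n 0/1 adjacency matrix
    tau : singular value threshold, e.g. 2.01 * sqrt(n * rho)
    rho : sparsity level; entries of the estimate are clipped to [0, rho]
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s >= tau                           # retain only large singular values
    M_hat = (U[:, keep] * s[keep]) @ Vt[keep, :]
    return np.clip(M_hat, 0.0, rho)           # project entries back to [0, rho]

# Toy usage: a 2-block stochastic block model (hypothetical parameters)
rng = np.random.default_rng(0)
n, rho = 200, 0.5
z = rng.integers(0, 2, size=n)                # community labels
P = np.where(z[:, None] == z[None, :], 0.8, 0.2) * rho
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetrize, zero diagonal
M_hat = usvt(A, 2.01 * np.sqrt(n * rho), rho)
mse = np.mean((M_hat - P) ** 2)
```

The threshold 2.01√(nρ) sits just above the spectral norm of the centered noise A − M, so the kept singular directions capture the low-rank signal while the noise bulk is discarded.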