Minimax Estimation of Bandable Precision Matrices

Authors: Addison Hu, Sahand Negahban

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our theoretical results are complemented by experiments demonstrating the sharpness of our bounds. To supplement our analysis, we conduct numerical experiments to explore the performance of our estimator in the finite sample setting. In Section 4, our estimator is subjected to numerical experiments: we implemented the blockwise inversion technique in NumPy and ran simulations on synthetic datasets. Our experiments confirm that even in the finite sample case, the blockwise inversion technique achieves the minimax rate of convergence. We observe in Figure 1a that the spectral norm error increases linearly as log p increases, confirming the log p / n term in the rate of convergence. As with Figure 1a, Figure 1b confirms the minimax rate of convergence given in Theorem 3.1.
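For reference, the optimal bandwidth k = n^{1/(2α+1)} quoted under Experiment Setup below pairs with a spectral-norm rate of the standard bandable form. The display below is a hedged reconstruction up to constants; the exact statement of Theorem 3.1 is not quoted in this report:

\[
\inf_{\hat{\Omega}} \sup_{\Omega \in \mathcal{F}_\alpha} \mathbb{E}\, \bigl\| \hat{\Omega} - \Omega \bigr\|_2^2 \;\asymp\; n^{-\frac{2\alpha}{2\alpha+1}} + \frac{\log p}{n},
\]

which is the source of the log p / n term that Figure 1a probes.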
Researcher Affiliation | Academia | Addison J. Hu, Department of Statistics and Data Science, Yale University, New Haven, CT 06520, addison.hu@yale.edu; Sahand N. Negahban, Department of Statistics and Data Science, Yale University, New Haven, CT 06520, sahand.negahban@yale.edu
Pseudocode | Yes | Algorithm 1: Blockwise Inversion Technique
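Algorithm 1 itself is not reproduced in this report. As a minimal sketch of what a blockwise inversion scheme can look like, the NumPy snippet below inverts overlapping principal sub-blocks of the sample covariance along the diagonal and averages the local inverses where the blocks overlap; the block size (3k), the stride (k), and the helper name blockwise_precision_estimate are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def blockwise_precision_estimate(S, k):
    """Sketch of a blockwise inversion scheme (hypothetical stand-in for
    the paper's Algorithm 1): invert overlapping (up to 3k x 3k)
    principal sub-blocks of the sample covariance S along the diagonal
    and average the local inverses where the blocks overlap."""
    p = S.shape[0]
    k = max(int(k), 1)
    Omega = np.zeros_like(S, dtype=float)
    counts = np.zeros_like(S, dtype=float)
    for start in range(0, p, k):
        idx = np.arange(start, min(start + 3 * k, p))
        block = np.ix_(idx, idx)
        Omega[block] += np.linalg.inv(S[block])  # local inverse of one sub-block
        counts[block] += 1.0
    counts[counts == 0.0] = 1.0  # entries off the band stay exactly zero
    return Omega / counts
```

By construction the estimate is banded (zero beyond roughly 3k off-diagonals), which matches the bandable target class.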
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | In the experiments, we draw observations from a multivariate normal distribution with precision parameter Ω ∈ F_α, as defined in (3). Though the precision matrices considered in our experiments are Toeplitz, our estimator does not take advantage of this knowledge. We choose ρ = 0.6 to ensure that the matrices generated are non-negative definite. This indicates synthetic data generation rather than the use of a publicly available dataset.
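A sketch of the synthetic data generation this response describes, assuming a common polynomial-decay Toeplitz construction for F_α (the report does not quote equation (3), so the exact decay profile and the helper names toeplitz_precision and sample_gaussian are illustrative assumptions; ρ = 0.6 follows the report):

```python
import numpy as np

def toeplitz_precision(p, alpha, rho=0.6):
    """Toeplitz precision matrix with polynomially decaying off-diagonals:
    omega_ii = 1, omega_ij = rho * |i - j|^(-(alpha + 1)) for i != j.
    The decay profile is an assumed stand-in for the class F_alpha."""
    dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p))).astype(float)
    with np.errstate(divide="ignore"):  # dist == 0 on the diagonal
        Omega = rho * dist ** (-(alpha + 1.0))
    np.fill_diagonal(Omega, 1.0)
    return Omega

def sample_gaussian(n, Omega, rng=None):
    """Draw n observations from N(0, Omega^{-1})."""
    rng = np.random.default_rng() if rng is None else rng
    Sigma = np.linalg.inv(Omega)  # covariance is the inverse precision
    return rng.multivariate_normal(np.zeros(Omega.shape[0]), Sigma, size=n)
```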
Dataset Splits | No | The paper describes running simulations on synthetic datasets and varying parameters n and p, but it does not specify any training/validation/test dataset splits or cross-validation procedures.
Hardware Specification | No | The paper states that simulations were run, but it does not provide any specific details about the hardware (e.g., CPU, GPU, memory) used for these experiments.
Software Dependencies | No | We implemented the blockwise inversion technique in NumPy. While NumPy is mentioned, no specific version number is provided for it or any other software dependency.
Experiment Setup | Yes | In applying the tapering estimator as defined in (7), we choose the bandwidth to be k = n^{1/(2α+1)}, which gives the optimal rate of convergence, as established in Theorem 3.1. In our experiments, we varied α, n, and p. For our first set of experiments, we allowed α to take on values in {0.2, 0.3, 0.4, 0.5}, n to take values in {250, 500, 750, 1000}, and p to take values in {100, 200, 300, 400}. Each setting was run for five trials, and the averages are plotted with error bars to show variability between experiments. We provide an additional set of trials for the α = 0.2, p = 400 case, with n ∈ {11000, 3162, 1670}.
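Putting the pieces together, a hypothetical driver for one grid point of the sweep described above, reusing the sketched helpers from the earlier snippets (the name run_trials and the seed handling are illustrative, not from the paper):

```python
import numpy as np

def run_trials(alpha, n, p, n_trials=5, rho=0.6, seed=0):
    """Average spectral-norm error over repeated synthetic trials, with
    the reported optimal bandwidth k = n^(1/(2*alpha + 1)).  Relies on
    toeplitz_precision, sample_gaussian, and blockwise_precision_estimate
    from the sketches above."""
    rng = np.random.default_rng(seed)
    Omega = toeplitz_precision(p, alpha, rho)
    k = max(int(round(n ** (1.0 / (2.0 * alpha + 1.0)))), 1)
    errors = []
    for _ in range(n_trials):
        X = sample_gaussian(n, Omega, rng)
        S = X.T @ X / n  # sample covariance of mean-zero data
        Omega_hat = blockwise_precision_estimate(S, k)
        errors.append(np.linalg.norm(Omega_hat - Omega, ord=2))  # spectral norm
    return np.mean(errors), np.std(errors)

# First grid point of the reported sweep:
# mean_err, spread = run_trials(alpha=0.2, n=250, p=100)
```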