Minimax Optimal Rate for Parameter Estimation in Multivariate Deviated Models
Authors: Dat Do, Huy Nguyen, Khai Nguyen, Nhat Ho
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 5, we carry out a simulation study to empirically verify our theoretical results before concluding the paper in Section 6. Rigorous proofs and additional results are deferred to the supplementary material. |
| Researcher Affiliation | Academia | Dat Do* (Department of Statistics, University of Michigan, Ann Arbor, MI 48109, dodat@umich.edu); Huy Nguyen* (Department of Statistics and Data Sciences, The University of Texas at Austin, Austin, TX 78712, huynm@utexas.edu); Khai Nguyen (Department of Statistics and Data Sciences, The University of Texas at Austin, Austin, TX 78712, khainb@utexas.edu); Nhat Ho (Department of Statistics and Data Sciences, The University of Texas at Austin, Austin, TX 78712, minhnhat@utexas.edu) |
| Pseudocode | No | The paper focuses on theoretical derivations and analysis of convergence rates, and it does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about making its source code available or provide links to a code repository. |
| Open Datasets | No | Section 5 describes generating i.i.d. samples from specified density functions (Cauchy and normal distributions) for the simulation. It does not use or provide access to any pre-existing public datasets. |
| Dataset Splits | No | The paper generates synthetic data for its simulations. It does not specify any training, validation, or test dataset splits in terms of percentages or counts for reproducibility. It varies the sample size 'n' and repeats the procedure to measure errors. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the simulations (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using the 'EM algorithm [13]' for MLE calculations but does not specify any software names with version numbers for implementation (e.g., Python, PyTorch, specific libraries). |
| Experiment Setup | Yes | We consider two following cases: (i) λ* = 0.5, µ* = 2.5, (σ*)² = 0.25; (ii) λ* = 0.5/n^{1/4}, µ* = 2.5, (σ*)² = 0.25. ... For each sample size n, we calculate the MLE (λ̂_n, µ̂_n, σ̂²_n) via the EM algorithm [13] and measure the errors \|λ̂_n − λ*\|, \|µ̂_n − µ*\|, and \|σ̂²_n − (σ*)²\|. We repeat this procedure 64 times and plot the mean (blue dot) and quartile error bars (yellow bar) of the logarithm of estimation errors against the log of n. (Hedged code sketches of this procedure are given below the table.) |
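
The Software Dependencies row notes that the MLE is computed with the EM algorithm [13] but no implementation is released. The sketch below is one plausible reading, assuming h0 is the standard Cauchy density and the deviated component f is Gaussian (consistent with the Open Datasets row); the updates are the standard two-component mixture EM with the h0 component held fixed, not code from the authors.

```python
# Minimal EM sketch for the deviated model (1 - lambda) * h0 + lambda * f.
# Assumptions (not spelled out in the paper's quoted text): h0 = standard
# Cauchy density, f = Gaussian density; the h0 component is known and fixed.
import numpy as np
from scipy.stats import cauchy, norm

def sample_deviated(n, lam, mu, sigma2, rng):
    """Draw n i.i.d. samples from (1 - lam) * Cauchy(0, 1) + lam * N(mu, sigma2)."""
    from_f = rng.random(n) < lam                    # which points come from f
    x = cauchy.rvs(size=n, random_state=rng)        # samples from h0
    x[from_f] = rng.normal(mu, np.sqrt(sigma2), size=from_f.sum())
    return x

def em_mle(x, n_iter=200, lam=0.5, mu=0.0, sigma2=1.0):
    """EM updates for (lambda, mu, sigma^2) with the h0 component held fixed."""
    h0 = cauchy.pdf(x)                              # known density, computed once
    for _ in range(n_iter):
        f = norm.pdf(x, loc=mu, scale=np.sqrt(sigma2))
        # E-step: responsibility of the deviated (Gaussian) component
        gamma = lam * f / ((1.0 - lam) * h0 + lam * f)
        # M-step: weighted maximum-likelihood updates
        lam = gamma.mean()
        mu = np.sum(gamma * x) / np.sum(gamma)
        sigma2 = np.sum(gamma * (x - mu) ** 2) / np.sum(gamma)
    return lam, mu, sigma2
```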
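Continuing the sketch above, the driver below mirrors the Experiment Setup description: for each sample size n it repeats the estimation 64 times, records the absolute errors, and plots the mean (blue dot) and quartile bars (yellow) of the log-errors against log n. The sample sizes, seed, and EM iteration count are illustrative assumptions, not values reported in the paper.

```python
# Illustrative driver for the simulation described in the Experiment Setup row
# (case (i): lambda* = 0.5; switch to 0.5 / n ** 0.25 for case (ii)).
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
mu_star, sigma2_star = 2.5, 0.25
sample_sizes = np.logspace(2, 4, num=10, dtype=int)   # assumed grid of n
n_repeats = 64

log_errs = {"lambda": [], "mu": [], "sigma^2": []}
for n in sample_sizes:
    lam_star = 0.5                                     # case (i)
    errs = np.empty((n_repeats, 3))
    for r in range(n_repeats):
        x = sample_deviated(n, lam_star, mu_star, sigma2_star, rng)
        lam_hat, mu_hat, s2_hat = em_mle(x)
        errs[r] = [abs(lam_hat - lam_star),
                   abs(mu_hat - mu_star),
                   abs(s2_hat - sigma2_star)]
    for j, key in enumerate(log_errs):
        log_errs[key].append(np.log(errs[:, j]))

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (key, vals) in zip(axes, log_errs.items()):
    means = [v.mean() for v in vals]
    q1 = [np.quantile(v, 0.25) for v in vals]
    q3 = [np.quantile(v, 0.75) for v in vals]
    ax.vlines(np.log(sample_sizes), q1, q3, color="gold")        # quartile bars
    ax.plot(np.log(sample_sizes), means, "o", color="tab:blue")  # mean log-error
    ax.set_xlabel("log n")
    ax.set_ylabel(f"log |error in {key}|")
plt.tight_layout()
plt.show()
```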