Estimation of Markov Chain via Rank-Constrained Likelihood
Authors: Xudong Li, Mengdi Wang, Anru Zhang
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the proposed estimator achieves better empirical performance than other popular approaches. Finally, the performance of the proposed estimator and algorithm is verified through simulation studies. We compare four estimation procedures, i.e., the maximum likelihood estimator (mle), the truncated SVD estimator (svd) (Zhang & Wang, 2017), the nuclear norm penalized estimator (nu), and the rank-constrained maximum likelihood estimator (rank). (A toy sketch of the mle and svd baselines is given after the table.) |
| Researcher Affiliation | Academia | (1) Department of Operations Research and Financial Engineering, Princeton University, Sherrerd Hall, Princeton, NJ 08544; (2) Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706. Correspondence to: Xudong Li <xudongl@princeton.edu>, Mengdi Wang <mengdiw@princeton.edu>, Anru Zhang <anruzhang@stat.wisc.edu>. |
| Pseudocode | Yes | Algorithm 1: A PDC algorithm for solving problem (7); Algorithm 2: An sGS-ADMM for solving (D); Algorithm 3: A majorized indefinite-proximal DC algorithm for solving problem (9). |
| Open Source Code | No | The paper does not provide any specific links to source code repositories or explicitly state that the code for their methodology is publicly available. |
| Open Datasets | No | The paper describes generating synthetic data ('We first generate the rank-r transition matrix as P = UΣVT...Then we simulate a Markov chain trajectory...') for its experiments rather than using a publicly available or open dataset. No access information (link, DOI, citation) is provided for a dataset. |
| Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits. It simulates a single Markov chain trajectory and uses that for estimation, without partitioning into distinct sets for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide any specific details about ancillary software, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We first generate the rank-r transition matrix as P = UΣV^T, where U, V ∈ ℝ^{p×r} have orthonormal columns and the diagonal matrix Σ = diag(σ_i) ∈ ℝ^{r×r} has σ_i > 0 for i = 1, ..., r. Then we simulate a Markov chain trajectory of length n = C²rp·log(p). Here, r = 10, p = 500, and C is a varying constant from 10 to 10². Given c > 0, α ≥ 0, and the stopping tolerance η, choose initial point X⁰ ∈ ℝ^{p×p}. Input: initial point (Ξ⁰, y⁰, S⁰, Z⁰, X⁰), penalty parameter σ > 0, maximum iteration number K, and the steplength γ ∈ (0, (1 + √5)/2). In the implementation of the nuclear norm penalized estimator nu, the penalty parameter is chosen via 5-fold cross-validation. (An illustrative sketch of this data-generation procedure follows the table.) |
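
The quoted experiment setup describes how the synthetic rank-r transition matrix and the Markov chain trajectory are produced. The following is a minimal sketch of that procedure, not the authors' code: the random seed, the choice of singular values, the shift-and-row-normalize step that turns UΣV^T into a stochastic matrix, and the inverse-CDF sampler are all assumptions made for illustration, since the quoted excerpt does not spell them out.

```python
import numpy as np

rng = np.random.default_rng(0)       # seed is arbitrary (assumption)

p, r = 500, 10                       # state-space size and target rank from the quoted setup
C = 10                               # the paper sweeps this constant from 10 to 10^2
n = int(C**2 * r * p * np.log(p))    # trajectory length n = C^2 * r * p * log(p) as quoted

# Orthonormal factors U, V in R^{p x r} and a positive diagonal Sigma.
U, _ = np.linalg.qr(rng.standard_normal((p, r)))
V, _ = np.linalg.qr(rng.standard_normal((p, r)))
sigma = np.sort(rng.uniform(1.0, 2.0, size=r))[::-1]
low_rank = U @ np.diag(sigma) @ V.T

# Assumed post-processing (not specified in the quoted excerpt): shift to
# nonnegativity and row-normalize so P is a valid stochastic matrix; the
# constant shift adds at most one to the rank.
low_rank -= low_rank.min()
P = low_rank / low_rank.sum(axis=1, keepdims=True)

# Simulate one trajectory X_0, ..., X_n by inverse-CDF sampling per step.
# For the larger values of C this loop would be vectorized in practice.
cum = np.cumsum(P, axis=1)
u = rng.random(n)
states = np.empty(n + 1, dtype=np.int64)
states[0] = rng.integers(p)
for t in range(n):
    states[t + 1] = min(np.searchsorted(cum[states[t]], u[t]), p - 1)
```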
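For context on the comparison quoted in the Research Type row, here is a toy sketch (an assumption, not the authors' code) of two of the four baselines: the empirical maximum likelihood estimator (mle), i.e. the row-normalized transition-count matrix, and a rank-r truncated SVD estimator in the spirit of Zhang & Wang (2017). The exact truncated SVD construction in that reference may include additional normalization steps; the function names, the uniform fallback for unvisited rows, and the placeholder trajectory below are illustrative only.

```python
import numpy as np

def mle_estimator(states: np.ndarray, p: int) -> np.ndarray:
    """Row-normalized transition counts; rows with no visits fall back to uniform."""
    counts = np.zeros((p, p))
    np.add.at(counts, (states[:-1], states[1:]), 1.0)
    row_sums = counts.sum(axis=1, keepdims=True)
    uniform = np.full((p, p), 1.0 / p)
    return np.where(row_sums > 0, counts / np.maximum(row_sums, 1.0), uniform)

def truncated_svd_estimator(P_hat: np.ndarray, r: int) -> np.ndarray:
    """Rank-r SVD truncation of the empirical estimator (one common variant)."""
    U, s, Vt = np.linalg.svd(P_hat, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

# Tiny self-contained example with a placeholder state sequence; in the paper's
# setting `states` would be the simulated Markov chain trajectory.
rng = np.random.default_rng(1)
p, r = 20, 3
states = rng.integers(p, size=10_000)
P_mle = mle_estimator(states, p)
P_svd = truncated_svd_estimator(P_mle, r)
```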