On Estimation in Latent Variable Models
Authors: Guanhua Fang, Ping Li
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Various numerical results corroborate our theory." "Numerical results are presented in Section 6 and validate our theory." (Section 6: Numerical Experiments) |
| Researcher Affiliation | Industry | Guanhua Fang, Ping Li; Cognitive Computing Lab, Baidu Research, 10900 NE 8th St, Bellevue, WA 98004, USA; {guanhuafang, liping11}@baidu.com |
| Pseudocode | Yes | Algorithm 1 Latent Stochastic Gradient Algorithm, Algorithm 2 Latent Network Stochastic Gradient Algorithm. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper mentions using "NESARC Data" and "PISA Data" but does not provide explicit links, DOIs, repositories, or formal citations (with authors and year) to indicate their public availability for download. |
| Dataset Splits | No | The paper generates synthetic data and uses two real datasets (NESARC, PISA) but does not provide specific details on how the data was split into training, validation, or test sets, or if cross-validation was employed. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or specific solver versions). |
| Experiment Setup | Yes | We set m = n1 = n^(2α/3) and γ = n^α (α = 1.2). We use a similar strategy to choose m, n1, and γ as in the DINA model setting, with α = 0.9. (λ = log n / (n^(1/2) γ), a = 3.) |
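
The hyperparameter schedule quoted in the Experiment Setup row can be sketched in code. This is only an illustration under the assumption that the extraction-garbled expressions read m = n1 = n^(2α/3), γ = n^α, and λ = log n / (n^(1/2) γ); the function name and dictionary layout are hypothetical, not from the paper.

```python
import math

def hyperparameters(n, alpha=1.2, a=3):
    """Sketch of the reported hyperparameter schedule.

    Assumes the paper's settings are:
      m = n1 = n^(2*alpha/3),  gamma = n^alpha,
      lambda = log(n) / (n^(1/2) * gamma),  a = 3.
    """
    m = n1 = n ** (2 * alpha / 3)          # sample-size-dependent batch parameters
    gamma = n ** alpha                     # step-size / scaling parameter
    lam = math.log(n) / (math.sqrt(n) * gamma)  # regularization parameter
    return {"m": m, "n1": n1, "gamma": gamma, "lambda": lam, "a": a}

# Example: parameters for n = 1000 with the paper's alpha = 1.2
params = hyperparameters(1000)
```

For the DINA model setting, the same schedule would be reused with `alpha=0.9`, per the quoted text.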