Alternating Minimization for Regression Problems with Vector-valued Outputs
Authors: Prateek Jain, Ambuj Tewari
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments show that our approach is efficient and robust. |
| Researcher Affiliation | Academia | J. Lu is with the Department of Mathematics, Duke University, Durham, NC 27708, USA (e-mail: jianfeng.lu@duke.edu). J. Ma is with the Department of Mathematics, University of California, Davis, CA 95616, USA (e-mail: jfma@math.ucdavis.edu). Y. Liu is with the Department of Biostatistics, Harvard University, Boston, MA 02115, USA (e-mail: yliu@hsph.harvard.edu). Y. Wang is with the Department of Statistics, George Mason University, Fairfax, VA 22030, USA (e-mail: ywang25@gmu.edu). |
| Pseudocode | Yes | Algorithm 1: AM for Ridge Regression |
| Open Source Code | No | The paper does not contain any explicit statement about making its source code available, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper states, 'The synthetic datasets are generated as follows. The true regression coefficients β̂ are randomly generated from the standard normal distribution N(0, 1) and then normalized to satisfy ‖β̂‖_F = 1.' This indicates the use of synthetically generated data, not a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper describes how synthetic data is generated but does not specify any training, validation, or test dataset splits, percentages, or cross-validation setup. |
| Hardware Specification | No | The paper refers to 'average CPU time' in its numerical experiments section but does not provide any specific details about the hardware used, such as CPU models, GPU types, or memory specifications. |
| Software Dependencies | No | The paper does not list any specific software dependencies with their version numbers. |
| Experiment Setup | No | The paper specifies model parameters such as regularization parameters (e.g., 'For the Structured Lasso regression, the regularization parameters are λ1 = 0.005 and λ2 = 0.005. For Ridge regression, the regularization parameter is λ = 0.005.') and details about synthetic data generation, but it does not provide specific training hyperparameters like learning rate, batch size, number of epochs, or optimizer settings. |
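Since the paper's Algorithm 1 (AM for Ridge Regression) is given only as pseudocode and no source code is released, the following is a minimal sketch of what alternating minimization for ridge regression with vector-valued outputs could look like. It assumes a low-rank coefficient matrix B = U Vᵀ and alternates closed-form ridge solves for U and V; the function name `am_ridge`, the low-rank parameterization, and the re-orthonormalization step are assumptions for illustration, not the paper's exact procedure. The synthetic data in the test mirrors the paper's reported protocol (Gaussian coefficients normalized to unit Frobenius norm, λ = 0.005).

```python
import numpy as np

def am_ridge(X, Y, rank, lam=0.005, n_iters=50, seed=0):
    """Hypothetical sketch of alternating minimization (AM) for ridge
    regression with vector-valued outputs, assuming B = U @ V.T is low rank.

    Alternates two ridge subproblems:
      U-step: min_U ||Y - X U V^T||_F^2 + lam ||U||_F^2  (V orthonormal)
      V-step: min_V ||Y - X U V^T||_F^2 + lam ||V||_F^2
    """
    n, d = X.shape
    m = Y.shape[1]
    rng = np.random.default_rng(seed)
    # Orthonormal random initialization of the output-side factor V.
    V, _ = np.linalg.qr(rng.standard_normal((m, rank)))
    G = X.T @ X + lam * np.eye(d)  # Gram matrix reused in every U-step
    for _ in range(n_iters):
        # U-step: with V^T V = I the ridge solution is closed-form.
        U = np.linalg.solve(G, X.T @ Y @ V)
        # V-step: ridge solve for V given Z = X U.
        Z = X @ U
        V = Y.T @ Z @ np.linalg.inv(Z.T @ Z + lam * np.eye(rank))
        # Re-orthonormalize V so the next U-step stays closed-form.
        V, _ = np.linalg.qr(V)
    return U @ V.T  # estimated coefficient matrix B of shape (d, m)
```

As a usage sketch, one can generate data the way the paper describes (coefficients drawn from N(0, 1) and normalized to ‖B*‖_F = 1) and check that the AM iterates recover B* in the noiseless, well-conditioned regime.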