Learning No-Regret Sparse Generalized Linear Models with Varying Observation(s)
Authors: Diyang Li, Charles Ling, Zhiqiang Xu, Huan Xiong, Bin Gu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Encouraging results are exhibited on real-world benchmarks. |
| Researcher Affiliation | Academia | Diyang Li1, Charles X. Ling2, Zhiqiang Xu3, Huan Xiong3 & Bin Gu3, 1Cornell University 2Western University 3Mohamed bin Zayed University of Artificial Intelligence |
| Pseudocode | Yes | Algorithm 1 SAGO Algorithm |
| Open Source Code | Yes | To ensure the replicability, Python codes corresponding to the pivotal components of the proposed algorithms are incorporated within the supplementary materials. |
| Open Datasets | Yes | Dataset We employ real-world datasets from OpenML (Vanschoren et al., 2014) and UCI repository (Asuncion & Newman, 2007) for our simulations. |
| Dataset Splits | Yes | We randomly partition the datasets into training, validation, and testing sets, with 70%, 15%, and 15% of the total samples, respectively. |
| Hardware Specification | Yes | All experiments presented in this study were conducted on a workstation running the Ubuntu 18.04 operating system, equipped with an Intel 2.30GHz CPU and 400.0GB of RAM. |
| Software Dependencies | No | The paper mentions 'Python 3.7' as the implementation language and libraries like 'NumPy and SciPy', 'Scikit-learn', and 'Hyperopt' without specifying version numbers for these libraries. |
| Experiment Setup | Yes | The parameterizers are set to µ(t) = 4t², ν(s) = 1/s², respectively. ... The convergence tolerance ε for batch training is 1e-7 and the tolerance ϵ for the hyperparameter in the outer-level problem is 1e-4. |
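The 70/15/15 train/validation/test partition reported above can be reproduced with scikit-learn (which the paper lists among its libraries) by splitting twice: first carving off 70% for training, then halving the remaining 30%. This is an illustrative sketch on placeholder data, not the authors' code; their actual scripts are in the paper's supplementary materials.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for an OpenML/UCI dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)

# Step 1: hold out 30% of the samples, keeping 70% for training.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=0
)
# Step 2: split the held-out 30% evenly into validation and test (15% each).
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=0
)

print(len(X_train), len(X_val), len(X_test))  # → 700 150 150
```

Fixing `random_state` makes the random partition repeatable, which matters when comparing runs against the reported tolerances.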