Sketched Iterative Algorithms for Structured Generalized Linear Models
Authors: Qilong Gu, Arindam Banerjee
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we show the experimental results of our algorithms on a synthetic dataset, and how the choice of m affects computational efficiency and statistical guarantees. |
| Researcher Affiliation | Academia | Qilong Gu and Arindam Banerjee, Department of Computer Science & Engineering, University of Minnesota, Twin Cities; {guxxx396, banerjee}@cs.umn.edu |
| Pseudocode | Yes | Algorithm 1: Sketched Projected Gradient Descent (S-PGD). A hedged code sketch of this routine appears after the table. |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability, nor does it include links to a code repository. |
| Open Datasets | No | We draw the design matrix X ∈ ℝ^{n×p} randomly from a Gaussian distribution. We choose the parameter θ to be an s-sparse vector; the non-zero entries of θ are drawn from a standard Gaussian distribution. The response y is given by y = Xθ + σw, where σ > 0 is a constant and w is drawn from a standard Gaussian N(0, 1). (A data-generation sketch following this recipe appears after the table.) |
| Dataset Splits | No | The paper describes generating synthetic datasets for experiments but does not provide specific details on train/validation/test splits, percentages, or sample counts for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU/CPU models, memory, or cloud computing instances. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., programming languages, libraries, frameworks with versions) used for the experiments. |
| Experiment Setup | No | The paper mentions the synthetic-data parameters, namely sample size (n), dimension (p), sketching dimension (m), and sparsity (s), together with an iteration count (e.g., 900), but it does not report concrete training hyperparameters such as the step size (learning rate), batch size, or optimizer settings for the experimental runs. |
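
To make the synthetic-data recipe quoted in the Open Datasets row concrete, here is a minimal NumPy sketch of the generation process. The function name `make_synthetic` and any specific values of n, p, s, and σ are illustrative assumptions; only the generative model itself (Gaussian X, s-sparse θ, y = Xθ + σw) comes from the paper.

```python
import numpy as np

def make_synthetic(n, p, s, sigma, rng=None):
    """Synthetic data as described in the paper's excerpt:
    X ~ Gaussian, theta s-sparse with standard-Gaussian non-zeros,
    y = X @ theta + sigma * w with w ~ N(0, 1)."""
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((n, p))           # design matrix X in R^{n x p}
    theta = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    theta[support] = rng.standard_normal(s)   # s-sparse parameter vector
    w = rng.standard_normal(n)                # standard Gaussian noise
    y = X @ theta + sigma * w                 # response
    return X, y, theta
```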
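The Pseudocode row names Algorithm 1 (S-PGD), but the excerpt does not reproduce its body. The sketch below shows one plausible reading for the sparse least-squares setting: each iteration draws a fresh Gaussian sketching matrix S ∈ ℝ^{m×n}, takes a gradient step on the sketched loss, and projects back onto s-sparse vectors by hard thresholding. The Gaussian sketch, the fixed step size, and the hard-thresholding projection are all assumptions rather than details confirmed by the paper; the iteration count of 900 echoes the Experiment Setup row.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def s_pgd(X, y, s, m, step=0.1, iters=900, rng=None):
    """Sketched projected gradient descent (one plausible reading of S-PGD).
    Each iteration draws a fresh Gaussian sketch S in R^{m x n}, takes a
    gradient step on the sketched least-squares loss
    (1 / (2n)) * ||S @ (X @ theta - y)||^2, then hard-thresholds theta
    back onto the set of s-sparse vectors."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        S = rng.standard_normal((m, n)) / np.sqrt(m)  # E[S.T @ S] = I_n
        SX = S @ X                                    # sketched design, m x p
        grad = SX.T @ (SX @ theta - S @ y) / n        # gradient of sketched loss
        theta = hard_threshold(theta - step * grad, s)
    return theta
```

A hypothetical run combining the two sketches, with illustrative sizes:

```python
X, y, theta_true = make_synthetic(n=2000, p=500, s=20, sigma=0.1, rng=0)
theta_hat = s_pgd(X, y, s=20, m=200, iters=900, rng=0)
print(np.linalg.norm(theta_hat - theta_true))  # estimation error
```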