Parametric Simplex Method for Sparse Learning
Authors: Haotian Pang, Han Liu, Robert J. Vanderbei, Tuo Zhao
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present some numerical experiments and give some insights about how the parametric simplex method solves different linear programming problems. ... Thorough numerical experiments are provided to demonstrate the outstanding performance of the PSM method. |
| Researcher Affiliation | Collaboration | Princeton University, Tencent AI Lab, Northwestern University, Georgia Tech |
| Pseudocode | Yes | Algorithm 1: The parametric simplex method |
| Open Source Code | No | The paper refers to 'R package flare (Li et al., 2015)' as a third-party tool used for comparison, but there is no statement or link indicating that the authors' own implementation code for the described methodology is open-source or provided. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments: 'The entries of X are generated from an array of independent Gaussian random variables...' and 'We generate Σ⁰_X = U^⊤U...'. There is no mention of or link to a publicly available or open dataset. |
| Dataset Splits | No | The paper describes synthetic data generation but does not provide specific training/validation/test dataset splits or mention cross-validation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'R package flare' but does not provide specific version numbers for it or any other software dependencies used in their own implementation. |
| Experiment Setup | Yes | We stop the parametric simplex method when λ ≤ σ√(log d / n). ... The design matrix X has n = 100 rows and d = 250 columns. ... We randomly select s = 8 entries from the response vector θ⁰, and set them as θ⁰_i = s_i(1 + a_i), where s_i = 1 or −1, with probability 1/2, and a_i ~ N(0, 1). ... We form y = Xθ⁰ + ε, where ε_i ~ N(0, σ), with σ = 1. ... We fix the sample size n to be 200 and vary the data dimension d from 100 to 5000. ... The corresponding sample covariance matrices S_X and S_Y are also computed based on the data. ... When d = 25, 50 and 100, the sparsity level of D_1 is set to be 0.02, and when d = 150 and 200, the sparsity level of D_1 is set to be 0.002. |
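
For readers who want to regenerate the Dantzig selector data quoted in the Experiment Setup row, the following is a minimal NumPy sketch. It is not the authors' code: the random seed, the absence of column normalization of X, and reading N(0, σ) as a standard deviation (immaterial here since σ = 1) are assumptions.

```python
# Hypothetical sketch of the synthetic Dantzig selector data described above
# (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)   # seed chosen arbitrarily

n, d, s = 100, 250, 8            # sample size, dimension, sparsity from the paper
sigma = 1.0                      # noise level sigma = 1

# Design matrix X with independent Gaussian entries.
X = rng.standard_normal((n, d))

# Sparse truth theta0: s randomly chosen entries set to s_i * (1 + a_i),
# with s_i = +1 or -1 (probability 1/2 each) and a_i ~ N(0, 1).
theta0 = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
theta0[support] = rng.choice([-1.0, 1.0], size=s) * (1.0 + rng.standard_normal(s))

# Response y = X theta0 + eps, with eps_i ~ N(0, sigma).
y = X @ theta0 + sigma * rng.standard_normal(n)

# Regularization level at which the parametric simplex path is stopped,
# lambda = sigma * sqrt(log d / n), as quoted in the Experiment Setup row.
lam_stop = sigma * np.sqrt(np.log(d) / n)
print(f"stop the solution path once lambda <= {lam_stop:.4f}")
```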