Fast Sparse Group Lasso
Authors: Yasutoshi Ida, Yasuhiro Fujiwara, Hisashi Kashima
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our algorithm enhances the efficiency of the original algorithm without any loss of accuracy. |
| Researcher Affiliation | Collaboration | Yasutoshi Ida (NTT Software Innovation Center; Kyoto University), Yasuhiro Fujiwara (NTT Communication Science Laboratories), Hisashi Kashima (Kyoto University; RIKEN AIP) |
| Pseudocode | Yes | Algorithm 1 Fast Sparse Group Lasso (a reference sketch of the underlying sparse group lasso solver follows the table) |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code available or provide a link to a code repository. |
| Open Datasets | Yes | We evaluated the processing time and prediction error of our approach by conducting experiments on six datasets from the LIBSVM website (abalone, cpusmall, boston, bodyfat, eunite2001, and pyrim). (A loading and splitting sketch follows the table.) |
| Dataset Splits | Yes | We split the data into training and test data for each dataset. That is, 50% of a dataset was used as test data for evaluating the prediction error in terms of the squared loss for the response. |
| Hardware Specification | Yes | All the experiments were conducted on a Linux 2.20 GHz Intel Xeon server with 264 GB of main memory. |
| Software Dependencies | No | The paper mentions running experiments on Linux but does not provide version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, or scikit-learn). |
| Experiment Setup | Yes | We tuned λ for all approaches based on the sequential rule, following the methods in [18, 12-14]. The search space was a non-increasing sequence of Q parameters $(\lambda_q)_{q=0}^{Q-1}$ defined as $\lambda_q = \lambda_{\max} 10^{-\delta q/(Q-1)}$. We used δ = 4 and Q = 100 [18, 12-14]. For the other tuning parameter α, we used the settings α ∈ {0.2, 0.4, 0.6, 0.8}. We stopped the algorithm for each $\lambda_q$ when the relative tolerance $\lVert\beta - \beta_{\text{new}}\rVert_2 / \lVert\beta_{\text{new}}\rVert_2$ dropped below $10^{-5}$ for all approaches [9, 10]. (A sketch of this grid and stopping rule follows the table.) |
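The table quotes only the title of Algorithm 1; the paper's contribution is a fast variant that safely skips computations for zero groups, and that algorithm is not reproduced here. As a reference point, here is a minimal proximal-gradient sketch of the standard sparse group lasso problem the algorithm solves, using the relative-tolerance stopping rule quoted above. The function names, step-size choice, and group encoding are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sgl_prox(z, t1, t2):
    """Prox of t1*||.||_1 + t2*||.||_2 for one group: element-wise
    soft-thresholding by t1, then shrinkage of the whole group by t2."""
    s = np.sign(z) * np.maximum(np.abs(z) - t1, 0.0)
    norm = np.linalg.norm(s)
    return np.zeros_like(z) if norm <= t2 else (1.0 - t2 / norm) * s

def sparse_group_lasso(X, y, groups, lam, alpha, tol=1e-5, max_iter=10000):
    """Proximal gradient descent on
        (1/2n)||y - Xb||^2 + alpha*lam*||b||_1
            + (1 - alpha)*lam * sum_g sqrt(p_g) * ||b_g||_2,
    with `groups` a list of index arrays partitioning the features."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1/L for the smooth part
    beta = np.zeros(p)
    for _ in range(max_iter):
        z = beta - step * (X.T @ (X @ beta - y) / n)
        beta_new = np.empty_like(beta)
        for g in groups:
            beta_new[g] = sgl_prox(
                z[g],
                step * alpha * lam,
                step * (1.0 - alpha) * lam * np.sqrt(len(g)),
            )
        # Relative-tolerance stopping rule quoted in the Experiment Setup row.
        if np.linalg.norm(beta - beta_new) <= tol * np.linalg.norm(beta_new):
            return beta_new
        beta = beta_new
    return beta
```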
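The six regression datasets named above are distributed in LIBSVM's svmlight format, so loading and the 50/50 split can be sketched with scikit-learn. The local file name and the random seed are assumptions: the paper reports the split ratio but not a seed.

```python
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split

# "abalone" is assumed to be a local copy downloaded from the LIBSVM site.
X, y = load_svmlight_file("abalone")
X = X.toarray()  # these datasets are small enough to densify

# 50% of the dataset held out as test data, per the protocol quoted above;
# random_state=0 is an assumption, not a reported setting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)
```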
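The regularization path quoted in the Experiment Setup row is a geometric grid, sketched below under the stated settings (δ = 4, Q = 100). The value of $\lambda_{\max}$ (the smallest λ for which all coefficients are zero) must be supplied; computing it is part of the screening machinery the paper builds on, so it is left as an input here.

```python
import numpy as np

def lambda_path(lam_max, delta=4, Q=100):
    """Non-increasing grid lambda_q = lam_max * 10**(-delta*q/(Q-1)),
    q = 0, ..., Q-1, matching the settings quoted in the table."""
    q = np.arange(Q)
    return lam_max * 10.0 ** (-delta * q / (Q - 1))

alphas = [0.2, 0.4, 0.6, 0.8]  # the alpha grid quoted above

# Hypothetical warm-started sweep using the solver sketch above:
# for alpha in alphas:
#     for lam in lambda_path(lam_max):
#         beta = sparse_group_lasso(X_train, y_train, groups, lam, alpha)
```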