MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization
Authors: Ian En-Hsu Yen, Wei-Cheng Lee, Kai Zhong, Sung-En Chang, Pradeep K. Ravikumar, Shou-De Lin
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In our numerical experiments on mixtures of linear as well as nonlinear regressions, the proposed method yields high-quality solutions in a wider range of settings than existing approaches." Section 6 (Experiments): "In this section, we compare the proposed MixLasso method with other state-of-the-art approaches listed as follows." |
| Researcher Affiliation | Collaboration | Carnegie Mellon University; Snap Inc.; National Taiwan University; Amazon Inc. |
| Pseudocode | Yes | "Algorithm 1: A Greedy Algorithm for MixLasso (Eq. 6)" |
| Open Source Code | No | The paper does not contain any statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The paper generates 14 synthetic data sets according to the model y_i = Σ_{k=1}^{K} z_{ik} f_k(x_i) + ω_i, i ∈ [N], and uses a Stock data set containing the mixed stock prices of IBM, Facebook, Microsoft, and Nvidia spanning 300 weeks up to February 2018. No access information is provided for either. |
| Dataset Splits | No | The paper refers to 'training observation' in Section 4.3 and discusses 'sample complexity' (N), but it does not specify any explicit train/validation/test splits, percentages, or methodology for partitioning the data for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory, or cloud resources) used to conduct the experiments. |
| Software Dependencies | No | The paper mentions using existing implementations and solvers (e.g., 'We adopt implementation provided by the author of [7]', 'our implementation of the SDP solver [17]'), but it does not list any specific software dependencies or their version numbers. |
| Experiment Setup | Yes | EM-Random: a standard EM algorithm that alternates between minimizing over {z_i}_{i=1}^{N} and {f_k(x)}_{k=1}^{K} until convergence, with W randomly initialized as W ~ N(0, I) in the linear case and Z randomly initialized as Z ~ Multinoulli(1/K) in the nonlinear case. |
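The EM-Random baseline quoted above can be sketched for the linear case as a hard-EM alternation: assign each sample to the component whose regressor fits it best, then refit each component by least squares. This is a minimal illustrative sketch, not the authors' implementation; the function name, iteration count, and the hard (rather than soft) E-step are assumptions.

```python
import numpy as np

def em_random_linear(X, y, K, n_iters=50, seed=0):
    """Hard-EM sketch of the EM-Random baseline for mixed linear regression.

    Alternates between (1) assigning each sample i to the component k whose
    weight vector best explains y_i, and (2) refitting each component's
    weights by least squares on its assigned samples. W is randomly
    initialized from N(0, I), matching the EM-Random description.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    W = rng.standard_normal((K, d))      # random init W ~ N(0, I)
    z = np.zeros(N, dtype=int)           # component assignments z_i
    for _ in range(n_iters):
        # E-step (hard): squared residual of each sample under each component
        resid = (X @ W.T - y[:, None]) ** 2   # shape (N, K)
        z_new = resid.argmin(axis=1)
        if np.array_equal(z_new, z):
            break                        # assignments stable -> converged
        z = z_new
        # M-step: least-squares refit of each component on its samples
        for k in range(K):
            idx = z == k
            if idx.sum() >= d:           # skip under-determined components
                W[k], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return W, z
```

Like any EM-style procedure with random initialization, this can converge to poor local optima, which is why the report notes the paper compares it against the convex MixLasso formulation.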