Latent Feature Lasso
Authors: Ian En-Hsu Yen, Wei-Cheng Lee, Sung-En Chang, Arun Sai Suggala, Shou-De Lin, Pradeep Ravikumar
ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we compare our proposed method with other state-of-the-art approaches on both synthetic and real data sets. The dataset statistics are listed in Table 1. |
| Researcher Affiliation | Academia | ¹Carnegie Mellon University, U.S.A. ²National Taiwan University, Taiwan. |
| Pseudocode | Yes | Algorithm 1: A Greedy Algorithm for Latent Feature Lasso (see the illustrative sketch after this table). |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of their own methodology. It only mentions using implementations from other authors for comparison. |
| Open Datasets | Yes | For real data, we use a benchmark Tabletop data set constructed by (Griffiths & Ghahramani, 2005), where there is a ground-truth number of features K = 4 for the 4 objects on the table. We also take two standard multilabel (multiclass) classification data sets, Yeast and Mnist1k, from the LIBSVM repository (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), and one face data set, Yale Face, from the Yale Face database (http://vision.ucsd.edu/content/yale-face-database). See the data-loading sketch after this table. |
| Dataset Splits | No | The paper mentions using datasets but does not explicitly provide details about train/validation/test splits, percentages, or specific splitting methodologies. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers for their own implementation (e.g., specific libraries, frameworks, or solvers). |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs), optimizer settings, or other configuration specifics for the Latent Feature Lasso algorithm. |
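
The pseudocode row above refers to the paper's Algorithm 1, a greedy procedure for the latent feature model X ≈ ZW with binary assignments Z. Since this report does not reproduce the algorithm itself, the following is only a minimal, hypothetical sketch of a generic "add one feature, then refit" greedy loop for this kind of model; the proposal step (a random-direction heuristic) and all names such as `greedy_latent_features` and `refit_W` are assumptions, not the paper's actual subproblem or code.

```python
# Hypothetical sketch of a greedy loop for a latent feature model X ~ Z W with
# binary Z (N x K) and real W (K x D). This is NOT the paper's Algorithm 1;
# it only illustrates the generic add-one-feature-then-refit structure.
import numpy as np

def refit_W(X, Z, lam):
    """Ridge refit of W given Z: argmin_W 0.5||X - ZW||^2 + 0.5*lam*||W||^2."""
    K = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(K), Z.T @ X)

def objective(X, Z, W, lam):
    return 0.5 * np.sum((X - Z @ W) ** 2) + 0.5 * lam * np.sum(W ** 2)

def greedy_latent_features(X, K_max=10, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    N, D = X.shape
    Z = np.zeros((N, 0))
    W = np.zeros((0, D))
    best = 0.5 * np.sum(X ** 2)          # objective with no latent features
    for _ in range(K_max):
        R = X - Z @ W                     # current residual
        # Heuristic proposal for a new binary column: samples whose residual
        # correlates positively with a random direction receive the feature.
        # (The paper solves a principled subproblem here; this is a placeholder.)
        v = rng.standard_normal(D)
        z_new = (R @ v > 0).astype(float)[:, None]
        Z_try = np.hstack([Z, z_new])
        W_try = refit_W(X, Z_try, lam)
        obj = objective(X, Z_try, W_try, lam)
        if obj >= best:                   # stop once a new feature stops helping
            break
        Z, W, best = Z_try, W_try, obj
    return Z, W

# Usage: Z, W = greedy_latent_features(np.random.randn(100, 20), K_max=5)
```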
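The Yeast and Mnist1k data sets cited in the Open Datasets row are distributed in LIBSVM's sparse text format. As a small reproducibility aid, here is a minimal sketch of reading such files with scikit-learn's `load_svmlight_file`; the local filenames are hypothetical placeholders, and `multilabel=True` applies only to multilabel files such as Yeast.

```python
# Minimal sketch: reading LIBSVM-format files such as the Yeast or Mnist1k
# data sets mentioned in the Open Datasets row. Assumes scikit-learn is
# installed and the files were already downloaded from the LIBSVM repository;
# the filenames below are hypothetical placeholders.
from sklearn.datasets import load_svmlight_file

# Multiclass file (e.g. an MNIST subset): y comes back as a dense label vector.
X, y = load_svmlight_file("mnist1k.scale")

# Multilabel file (e.g. Yeast): pass multilabel=True to get y as label tuples.
X_ml, y_ml = load_svmlight_file("yeast_train.svm", multilabel=True)

print(X.shape, len(y_ml))
```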