Regularization in directable environments with application to Tetris
Authors: Jan Malte Lichtenberg, Özgür Şimşek
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Across a wide range of learning problems, including Tetris, STEW outperformed existing linear models, including ridge regression, the Lasso, and the non-negative Lasso, when feature directions were known. Our empirical analysis shows that these properties translate from the equal-weights model to STEW. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of Bath, Bath, United Kingdom. |
| Pseudocode | Yes | The pseudo-code is provided in the Supplementary Material. |
| Open Source Code | No | The paper mentions pseudocode in supplementary material, but does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We first consider the Rent data set (Tutz, 2011) where the problem is to estimate the response rent per m2 for 2053 apartments based on 10 features. In the Diabetes data set, in which a quantitative measure of disease progression of 442 diabetes patients needs to be predicted based on age, sex, body mass index, average blood pressure, and six blood serum measurements |
| Dataset Splits | No | The paper mentions tuning regularization strength using cross-validation, but does not specify explicit training/test/validation dataset splits (e.g., percentages or sample counts) in the main text. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running experiments were mentioned. |
| Software Dependencies | No | The paper does not specify software names with version numbers for libraries or tools used in the experiments (e.g., 'scikit-learn', 'PyTorch'). |
| Experiment Setup | Yes | We used a board size of 10 × 10, with rollout parameters M = 7, T = 10. Multinomial logistic regression in iteration k used the most recent n(k) training samples, where n(k) = min(50, k/2 + 2). The regularization strength λ was tuned using cross-validation. Eight features were used to describe a state-action pair: landing height, number of eroded piece cells, row transitions, column transitions, number of holes, number of board wells, hole depth, and number of rows with holes. |
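The table quotes the paper's claim that STEW outperformed ridge regression, the Lasso, and the non-negative Lasso when feature directions were known. As a point of reference, below is a minimal sketch of a shrinkage-toward-equal-weights estimator of the kind the paper describes. The pairwise-difference penalty and its ridge-like closed form are our reading of the method, not code released with the paper; the function name `stew_fit` and its signature are illustrative.

```python
import numpy as np

def stew_fit(X, y, lam, directions):
    """Illustrative shrinkage-toward-equal-weights (STEW) estimator.

    Assumes the penalty is lam * sum_{i<j} (w_i - w_j)^2, applied after each
    feature has been oriented by its known direction (+1 or -1). Because
    sum_{i<j} (w_i - w_j)^2 = w^T (p*I - 1 1^T) w, the estimator has a
    ridge-like closed form.
    """
    X = X * directions                      # orient features by their known directions
    p = X.shape[1]
    D = p * np.eye(p) - np.ones((p, p))     # penalty matrix for pairwise differences
    w_directed = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return w_directed * directions          # map weights back to the original feature signs
```

With lam = 0 this reduces to ordinary least squares; as lam grows, the weights are pulled toward a common value, recovering the equal-weights model in the limit.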
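The experiment-setup row also states the sample schedule n(k) = min(50, k/2 + 2) and that λ was tuned by cross-validation without giving splits. A small illustrative helper is sketched below; the fold count, λ grid, and squared-error criterion are assumptions, since the main text does not specify them.

```python
import numpy as np

def num_training_samples(k):
    """Training-set size in iteration k, as quoted above: n(k) = min(50, k/2 + 2)."""
    return int(min(50, k / 2 + 2))

def tune_lambda_cv(X, y, lambdas, fit, n_folds=5, seed=0):
    """Generic K-fold cross-validation over the regularization strength.

    `fit(X, y, lam)` should return a weight vector, e.g. a wrapper around the
    stew_fit sketch above. Fold count and grid are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errors = np.zeros(len(lambdas))
    for f, test_idx in enumerate(folds):
        train_idx = np.concatenate([folds[g] for g in range(n_folds) if g != f])
        for i, lam in enumerate(lambdas):
            w = fit(X[train_idx], y[train_idx], lam)
            errors[i] += np.mean((X[test_idx] @ w - y[test_idx]) ** 2)
    return lambdas[int(np.argmin(errors))]
```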