Learning with little mixing
Authors: Ingvar Ziemann, Stephen Tu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quoting the paper: "In Appendix A, we show experimentally, using the stable GLM model, that the trends predicted by our theory are indeed realized in practice." |
| Researcher Affiliation | Collaboration | Ingvar Ziemann, KTH Royal Institute of Technology, ziemann@kth.se; Stephen Tu, Robotics at Google, stephentu@google.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions JAX (a Python library) and provides its URL, but JAX is a third-party tool the authors used, not a release of their own source code for the methodology described in the paper. There is no explicit statement about releasing code, and no link to a repository is provided. |
| Open Datasets | No | The paper does not specify the use of any publicly available or open datasets by name, URL, or formal citation for its experiments. It discusses theoretical models (LDS, GLM) and refers to experiments in Appendix A, but no dataset information is provided in the main text. |
| Dataset Splits | No | The paper does not provide specific details regarding training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory specifications, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions JAX by name with a URL, but it does not provide specific version numbers for JAX or any other software libraries, environments, or solvers used in their experiments. Therefore, it does not provide a reproducible description of ancillary software. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or other system-level training settings in the main text. It focuses on the theoretical framework and general model types. |
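
Since the paper releases no code, a reader wanting to probe the Appendix A setup must reconstruct it. Below is a minimal, hypothetical sketch in JAX of the kind of experiment the paper describes: simulating a stable GLM trajectory x_{t+1} = phi(A x_t) + w_t and fitting A by least squares. The state dimension, trajectory length, tanh link, noise scale, and gradient-descent settings are all our assumptions, not values from the paper.

```python
# Hypothetical sketch (not the authors' code): simulate a stable GLM
# trajectory and recover the parameter matrix by empirical least squares.
import jax
import jax.numpy as jnp

d, T, steps, lr = 5, 2000, 500, 0.1   # assumed scales, not from the paper

key = jax.random.PRNGKey(0)
k_A, k_x0, k_w = jax.random.split(key, 3)

# Random ground-truth matrix rescaled to spectral norm 0.9, so that
# tanh(A x) is a contraction and the system is stable.
A_true = jax.random.normal(k_A, (d, d))
A_true = 0.9 * A_true / jnp.linalg.norm(A_true, ord=2)

def rollout(A, x0, noise):
    """Generate one dependent trajectory x_{t+1} = tanh(A x_t) + w_t."""
    def step(x, w):
        x_next = jnp.tanh(A @ x) + w
        return x_next, (x, x_next)
    _, (xs, ys) = jax.lax.scan(step, x0, noise)
    return xs, ys

noise = 0.1 * jax.random.normal(k_w, (T, d))   # i.i.d. Gaussian process noise
x0 = jax.random.normal(k_x0, (d,))
xs, ys = rollout(A_true, x0, noise)

def loss(A):
    """Empirical one-step squared prediction loss over the trajectory."""
    preds = jnp.tanh(xs @ A.T)
    return jnp.mean(jnp.sum((preds - ys) ** 2, axis=-1))

grad_fn = jax.jit(jax.grad(loss))
A_hat = jnp.zeros((d, d))
for _ in range(steps):
    A_hat = A_hat - lr * grad_fn(A_hat)

print("parameter error:", jnp.linalg.norm(A_hat - A_true))
```

Note that the estimator sees a single dependent trajectory rather than i.i.d. samples, which is the regime the paper's theory addresses; sweeping T and plotting the parameter error would be one way to check the predicted trends, though the paper's actual experimental protocol is not specified in the main text.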