Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Going Beyond Linear RL: Sample Efficient Neural Function Approximation
Authors: Baihe Huang, Kaixuan Huang, Sham Kakade, Jason D. Lee, Qi Lei, Runzhe Wang, Jiaqi Yang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "Our first result is a computationally and statistically efficient algorithm in the generative model setting under completeness for two-layer neural networks. Our second result considers this setting but under only realizability of the neural net function class." Checklist: "If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]" |
| Researcher Affiliation | Collaboration | Baihe Huang (1), Kaixuan Huang (2), Sham M. Kakade (3,4), Jason D. Lee (2), Qi Lei (2), Runzhe Wang (2), Jiaqi Yang (5); (1) Peking University, (2) Princeton University, (3) Harvard University, (4) Microsoft Research, (5) Tsinghua University |
| Pseudocode | Yes | The paper presents pseudocode, from "Algorithm 1 Learning realizable Q with deterministic transition" through "Algorithm 5 Dynamic programming under online RL settings". |
| Open Source Code | No | Checklist: "If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]" |
| Open Datasets | No | The paper is theoretical and does not report on experiments using datasets. All experiment-related checklist items are marked as N/A. |
| Dataset Splits | No | The paper is theoretical and does not report on experiments using datasets, thus no training/validation/test splits are provided. The checklist indicates '[N/A]' for training details including data splits. |
| Hardware Specification | No | Checklist: "If you ran experiments... (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]" |
| Software Dependencies | No | The paper is theoretical and does not describe experimental setup requiring specific software dependencies with version numbers. There are no mentions of software versions. |
| Experiment Setup | No | Checklist: "If you ran experiments... (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]" |