Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
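The validation step mentioned above can be illustrated with a minimal sketch: comparing LLM-assigned labels against a manually labeled reference set and reporting per-variable agreement. All names and data below are hypothetical, not taken from the actual pipeline in [1].

```python
# Hypothetical sketch of validating LLM labels against manual labels.
# Variable names and toy data are illustrative only.

def per_variable_accuracy(llm_labels, manual_labels):
    """Fraction of papers where the LLM label matches the manual label,
    computed separately for each reproducibility variable."""
    accuracy = {}
    for variable in manual_labels:
        pairs = zip(llm_labels[variable], manual_labels[variable])
        matches = sum(1 for llm, manual in pairs if llm == manual)
        accuracy[variable] = matches / len(manual_labels[variable])
    return accuracy

# Toy example: three papers, two reproducibility variables.
llm = {
    "Open Source Code": ["No", "Yes", "No"],
    "Pseudocode": ["No", "No", "Yes"],
}
manual = {
    "Open Source Code": ["No", "Yes", "Yes"],
    "Pseudocode": ["No", "No", "Yes"],
}
print(per_variable_accuracy(llm, manual))
# "Pseudocode" agrees on all three papers; "Open Source Code" on two of three.
```

A real evaluation would report richer metrics (e.g., per-class precision and recall, since "No" labels typically dominate), which is why scores here should be read as estimates.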

Generalization of Hamiltonian algorithms

Author: Andreas Maurer

NeurIPS 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The paper focuses on mathematical derivations: theorems (3.4, 3.5, 3.8, 4.1, 4.2, 4.3), propositions (3.1, 3.2), and lemmas (3.3, 3.6, A.1, A.2, A.4, A.5, A.6, B.1, B.7). It presents a theoretical framework for bounding generalization gaps of stochastic learning algorithms without conducting empirical studies or experiments. The NeurIPS checklist also confirms: "the paper does not include experiments."
Researcher Affiliation | Academia | Andreas Maurer, Computational Statistics and Machine Learning, Istituto Italiano di Tecnologia, 16163 Genoa, Italy. EMAIL
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. It is a theoretical paper focusing on mathematical proofs and derivations.
Open Source Code | No | The paper does not provide any information about open-source code for the methodology described. As indicated by the NeurIPS checklist, the paper does not include experiments, and thus no implementation code is provided.
Open Datasets | No | The paper is theoretical and does not involve empirical training or the use of specific datasets for experimentation. It discusses "a space X of data" and "a distribution µ on a space X of data" in a theoretical context, but does not refer to specific, publicly available datasets for model training.
Dataset Splits | No | The paper is theoretical and does not describe experimental validation procedures involving dataset splits (training, validation, or test sets).
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not involve an experimental setup that would require detailing specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not include any experimental setup details, such as hyperparameters or system-level training settings.