Gibbs Sampling of Continuous Potentials on a Quantum Computer

Authors: Arsalan Motamedi, Pooya Ronagh

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Additionally, as concrete numerical demonstrations, Figs. 2 and 3 showcase the error of our interpolation approach applied on the functions considered in Examples A.2 and A.3." (An illustrative interpolation-error sketch follows the table.)
Researcher Affiliation | Collaboration | (1) Institute for Quantum Computing, University of Waterloo, Waterloo, ON, Canada; (2) Department of Physics & Astronomy, University of Waterloo, Waterloo, ON, Canada; (3) Perimeter Institute for Theoretical Physics, Waterloo, ON, Canada; (4) Irreversible, Vancouver, BC, Canada.
Pseudocode | Yes | "Algorithm 1: Pseudocode of our Gibbs sampling algorithm." (A classical baseline sketch of Gibbs sampling from a continuous potential follows the table.)
Open Source Code | No | The paper makes no statement about releasing source code and provides no link to a code repository for the described methodology.
Open Datasets | No | The paper does not reference any publicly available dataset with concrete access information for training or evaluation. It discusses theoretical properties of functions and uses illustrative example functions (e.g., Examples A.2 and A.3) rather than datasets.
Dataset Splits | No | The paper specifies no training, validation, or test splits; it focuses on theoretical algorithm development and complexity analysis rather than empirical evaluation on datasets.
Hardware Specification | No | The paper does not describe any hardware used to run experiments; it is a theoretical paper focused on quantum algorithms and complexity analysis.
Software Dependencies | No | The paper lists no software dependencies or version numbers.
Experiment Setup | No | The paper discusses theoretical parameters (e.g., the semi-analyticity parameters C and a, and the inputs N, M, T to Algorithm 1) within its mathematical framework and complexity analysis, but it gives no experimental setup details such as hyperparameters, optimizers, or system-level training settings typical of empirical machine-learning papers.
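For context on the "Research Type" row, which cites the paper's interpolation-error demonstrations (Figs. 2 and 3), the following is a minimal sketch of how such an error curve can be measured classically using Chebyshev interpolation. The test function f is a hypothetical smooth example; the paper's Examples A.2 and A.3 and its actual interpolation scheme may differ.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_nodes(n):
    """Chebyshev nodes of the first kind on [-1, 1]."""
    k = np.arange(n)
    return np.cos((2 * k + 1) * np.pi / (2 * n))

def max_interp_error(f, n, grid):
    """Sup-norm error of the degree-(n-1) Chebyshev interpolant of f."""
    nodes = chebyshev_nodes(n)
    # n sample points and degree n-1 give an exact interpolant,
    # fitted in the well-conditioned Chebyshev basis.
    coeffs = C.chebfit(nodes, f(nodes), n - 1)
    return np.max(np.abs(C.chebval(grid, coeffs) - f(grid)))

# Hypothetical smooth test function, for illustration only.
f = lambda x: np.exp(-x ** 2) * np.cos(3.0 * x)
grid = np.linspace(-1.0, 1.0, 2001)
for n in (4, 8, 16, 32):
    print(f"n = {n:2d}: sup-norm error ~ {max_interp_error(f, n, grid):.2e}")
```

For an analytic function like this one, the printed errors should decay roughly geometrically in n, which is the qualitative behavior an interpolation-error plot of this kind would show.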
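Regarding the "Pseudocode" row: the paper's Algorithm 1 is a quantum Gibbs sampling procedure and is not reproduced here. As a point of reference only, the sketch below draws samples from a Gibbs density pi(x) proportional to exp(-f(x)) using a classical random-walk Metropolis kernel; the double-well potential f, the step size, and the sample count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def metropolis_gibbs_sampler(f, n_samples, step=0.5, x0=0.0, seed=0):
    """Classical random-walk Metropolis sampler for pi(x) ~ exp(-f(x)).
    Illustrative baseline only; the paper's Algorithm 1 is a quantum
    procedure with a different mechanism and cost model."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal()
        # Accept with probability min(1, exp(f(x) - f(proposal))).
        if np.log(rng.uniform()) < f(x) - f(proposal):
            x = proposal
        samples[i] = x
    return samples

# Hypothetical double-well potential, for illustration only.
f = lambda x: (x ** 2 - 1.0) ** 2
samples = metropolis_gibbs_sampler(f, n_samples=50_000)
print("sample mean ~", samples.mean(), " sample variance ~", samples.var())
```

A random-walk kernel is used here purely for brevity; any ergodic Markov kernel with exp(-f) as its stationary density would serve the same illustrative purpose.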