Fidelity-based Deep Adiabatic Scheduling

Authors: Eli Ovits, Lior Wolf

ICLR 2021

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting LLM response:
Research Type: Experimental. "We benchmark our approach on random QUBO problems, Grover search, 3-SAT, and MAX-CUT problems and show that our approach outperforms, by a sizable margin, the linear schedules as well as alternative approaches that were very recently proposed."
Researcher Affiliation: Academia. Eli Ovits & Lior Wolf, Tel Aviv University.
Pseudocode: No. The paper describes the steps for solving the Schrödinger equation in Appendix C, but does not present a formal pseudocode block or a labeled algorithm. (A rough sketch of such a numerical integration appears after the table.)
Open Source Code: No. The paper does not provide any statement or link indicating that its source code is publicly available.
Open Datasets: No. "In order to train the QUBO problem model, we produced a training dataset of 10,000 random QUBO instances for each problem size: n = 6, 8, 10. The QUBO problems were generated by sampling independently, from the normal distribution, each coefficient of the problem matrix Q." (A sketch of this generation procedure appears after the table.)
Dataset Splits: No. Figure 12 in Appendix G plots training and validation loss, implying that a validation set was used, but the paper does not specify the split percentages or how the validation set was created.
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No. The paper mentions the Adam optimizer, the SELU activation function, and batch normalization, but does not list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup: Yes. "The training was performed using the Adam optimizer (Kingma & Ba, 2014), with batches of size 200. Batch normalization (Ioffe & Szegedy, 2015) was applied during training. A uniform dropout value of 0.1 is employed for all layers during the model training." (A sketch assembling these training choices appears after the table.)
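
On the Pseudocode entry: the procedure in Appendix C amounts to numerically integrating the time-dependent Schrödinger equation for the interpolated Hamiltonian H(t) = (1 - s(t/T)) H0 + s(t/T) H1. Below is a minimal NumPy/SciPy sketch of one standard way to do this (piecewise-constant propagators); the function names and the discretization are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import expm

def evolve(H0, H1, schedule, T, steps=1000):
    """Integrate the Schroedinger equation (hbar = 1) for
    H(t) = (1 - s(t/T)) * H0 + s(t/T) * H1, applying one
    piecewise-constant propagator per time step.
    `schedule` maps [0, 1] to [0, 1].
    NOTE: illustrative sketch, not the paper's exact discretization."""
    dt = T / steps
    # Start in the ground state of H0, as in adiabatic evolution.
    _, vecs = np.linalg.eigh(H0)
    psi = vecs[:, 0].astype(complex)
    for k in range(steps):
        s = schedule((k + 0.5) / steps)  # midpoint of the k-th step
        H = (1.0 - s) * H0 + s * H1
        psi = expm(-1j * dt * H) @ psi
    return psi

def final_fidelity(psi, H1):
    """Squared overlap with the ground state of the problem Hamiltonian."""
    _, vecs = np.linalg.eigh(H1)
    return float(np.abs(vecs[:, 0].conj() @ psi) ** 2)
```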
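On the Open Datasets entry: as a concrete reading of the quoted description, here is a minimal NumPy sketch that samples each coefficient of Q i.i.d. from a standard normal distribution. The function name, the seed, and the choice not to symmetrize Q are assumptions, since the excerpt does not specify them.

```python
import numpy as np

def make_qubo_dataset(n, num_instances=10_000, seed=0):
    """Sample `num_instances` random n x n QUBO matrices with
    i.i.d. standard-normal coefficients, per the paper's description.
    NOTE: the seed and the (a)symmetry of Q are assumptions."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_instances, n, n))

# One training set per problem size used in the paper: n = 6, 8, 10.
datasets = {n: make_qubo_dataset(n) for n in (6, 8, 10)}
```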
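On the Experiment Setup entry: the stated training choices (Adam, batch size 200, batch normalization, dropout 0.1, together with the SELU activation mentioned under Software Dependencies) can be assembled into a PyTorch sketch like the one below. The network depth and width, the input encoding (a flattened Q matrix), the loss, and the dummy data are all illustrative assumptions; the paper's architecture is not given in this excerpt.

```python
import torch
from torch import nn

# Hypothetical scheduler network combining the paper's stated choices:
# SELU activations, batch normalization, and dropout of 0.1 per layer.
class ScheduleNet(nn.Module):
    def __init__(self, in_dim, hidden=128, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden),
            nn.SELU(), nn.Dropout(0.1),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden),
            nn.SELU(), nn.Dropout(0.1),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Dummy stand-ins for the QUBO training data (n = 6, flattened 6x6 Q).
X = torch.randn(10_000, 36)
y = torch.randn(10_000, 1)  # placeholder targets; the paper's loss differs
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X, y), batch_size=200, shuffle=True)

model = ScheduleNet(in_dim=36)
optimizer = torch.optim.Adam(model.parameters())  # Kingma & Ba, 2014

model.train()
for xb, yb in loader:
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(xb), yb)  # placeholder objective
    loss.backward()
    optimizer.step()
```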