Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels

Authors: Xuchen You, Shouvanik Chakrabarti, Boyang Chen, Xiaodi Wu

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on sublinear QNN convergence: "To support Theorem 3.2, we simulate the training of QNNs using M_0 with eigenvalues {±1}." (a PyTorch simulation sketch follows the table)
Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Maryland, College Park, United States; (2) Global Technology Applied Research, J. P. Morgan Chase & Co.; (3) Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Methods are described in prose and mathematical formulations.
Open Source Code | No | The paper does not provide concrete access to source code (e.g., a specific repository link or an explicit code-release statement) for the methodology described.
Open Datasets | No | "An m-sample dataset is generated by randomly sampling m orthogonal pure states {v_i}_{i=1}^m ⊂ ℂ^d and randomly assigning half of the samples the label +1 and the other half the label -1 (i.e., {y_i}_{i=1}^m ∈ {±1}^m)." (a dataset-generation sketch follows the table)
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into train/validation/test sets.
Hardware Specification | Yes | "We run the experiments on Amazon EC2 C5 Instances. The simulation of the asymptotic dynamics is run on an Intel Core i7-7700HQ processor (2.80 GHz) with 16 GB of memory."
Software Dependencies | No | "We simulate the QNN experiments using PyTorch (Paszke et al., 2019)." The paper names PyTorch but does not provide a specific version number or any other software dependencies with version numbers.
Experiment Setup | Yes | "To simulate the dynamics of gradient flow, we choose the learning rate to be 0.001/p and the maximum number of epochs is set to be 10000." (see the training-loop sketch after the table)
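
The rows above describe the experimental pipeline only in prose; since no code is released (per the Open Source Code row), the sketches below illustrate one possible reading of that prose. They are not the authors' implementation, and every name, default, and modeling choice not quoted in the table is an assumption. First, a minimal way to generate the m-sample dataset of random mutually orthogonal pure states described in the Open Datasets row, using a QR decomposition of a random complex Gaussian matrix:

```python
# Hypothetical dataset generator (not the authors' code): m mutually orthogonal
# pure states in C^d, with half of the samples labeled +1 and half -1 at random.
import torch

def make_dataset(d: int, m: int, seed: int = 0):
    assert m <= d, "at most d mutually orthogonal states exist in C^d"
    gen = torch.Generator().manual_seed(seed)
    # Columns of Q form a random orthonormal set of vectors in C^d.
    A = torch.randn(d, m, generator=gen) + 1j * torch.randn(d, m, generator=gen)
    Q, _ = torch.linalg.qr(A)
    # Assign +1 to half of the samples and -1 to the other half, at random.
    labels = torch.cat([torch.ones(m // 2), -torch.ones(m - m // 2)])
    labels = labels[torch.randperm(m, generator=gen)]
    return Q, labels  # Q[:, i] is the pure state v_i, labels[i] its label y_i
```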
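Next, a sketch of the model referenced in the Research Type row: a d-dimensional QNN whose output is the expectation value of an observable M_0 with eigenvalues {±1} under a parameterized circuit. Only the form of M_0 comes from the table; the product-of-exponentials ansatz, the random Hermitian generators, and the dimensions are assumptions.

```python
# Hypothetical QNN simulation (assumed ansatz): the output is
# <v| U(theta)^† M_0 U(theta) |v>, where M_0 has eigenvalues {+1, -1}
# and U(theta) = prod_j exp(-i * theta_j * H_j).
import torch

d, p = 8, 20  # assumed Hilbert-space dimension and parameter count

# Observable with eigenvalues +/-1 (diagonal in the computational basis for simplicity).
M0 = torch.diag(torch.tensor([1.0] * (d // 2) + [-1.0] * (d // 2), dtype=torch.cfloat))

def random_hermitian(dim: int) -> torch.Tensor:
    """A fixed random Hermitian generator for one circuit layer."""
    A = torch.randn(dim, dim, dtype=torch.cfloat)
    return (A + A.conj().T) / 2

H = [random_hermitian(d) for _ in range(p)]  # generators H_1, ..., H_p

def qnn_output(theta: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Expectation value <v| U(theta)^† M_0 U(theta) |v> as a real scalar."""
    U = torch.eye(d, dtype=torch.cfloat)
    for j in range(p):
        U = torch.matrix_exp(-1j * theta[j] * H[j]) @ U
    psi = U @ v
    return (psi.conj() @ (M0 @ psi)).real
```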
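Finally, a training loop using only the hyperparameters quoted in the Experiment Setup row (learning rate 0.001/p, at most 10000 epochs) as a discretization of gradient flow. It reuses make_dataset and qnn_output from the two sketches above; the mean-squared loss, full-batch updates, dataset size, and parameter initialization are assumptions not stated in the table.

```python
# Hypothetical training loop: full-batch gradient descent with lr = 0.001/p,
# stopped after at most 10000 epochs, on the dataset and model sketched above.
import torch

states, labels = make_dataset(d, m=4)        # m = 4 is an assumed dataset size
theta = torch.randn(p, requires_grad=True)   # assumed random initialization
optimizer = torch.optim.SGD([theta], lr=0.001 / p)

for epoch in range(10000):
    optimizer.zero_grad()
    preds = torch.stack([qnn_output(theta, states[:, i]) for i in range(labels.numel())])
    loss = torch.mean((preds - labels) ** 2)  # assumed mean-squared training loss
    loss.backward()
    optimizer.step()
```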