Equivariant Quantum Graph Circuits
Authors: Peter Mernyei, Konstantinos Meichanetzidis, Ismail Ilkan Ceylan
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically verify the expressive power of EQGCs through a dedicated experiment on synthetic data, and additionally observe that the performance of EQGCs scales well with the depth of the model and does not suffer from barren plateau issues. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Oxford, Oxford, UK. (2) Charm Therapeutics, London, UK. (3) Cambridge Quantum Computing and Quantinuum, Oxford, UK. |
| Pseudocode | No | The paper describes algorithms and constructions in prose and mathematical notation but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not include an explicit statement about releasing code for the methodology or a direct link to a source-code repository. |
| Open Datasets | No | The paper mentions creating a 'synthetic dataset of 6 to 10-node graphs' but does not provide any information about its public availability, such as a link, DOI, or a formal citation. |
| Dataset Splits | No | The paper states, '8-cycle graphs were reserved for evaluation, while all others were used for training.' This describes a training and evaluation (test) split, but not a distinct validation split with specific percentages or counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cluster specifications) used for running the experiments. It only mentions 'simulating quantum computers classically'. |
| Software Dependencies | No | The paper mentions 'the Adam optimizer was used' but does not specify its version number or any other software dependencies with version information. |
| Experiment Setup | Yes | Each node state was initialized as |+⟩ = (|0⟩ + |1⟩)/√2, then an equal number k ∈ {1, ..., 14} of general node and edge layers were applied alternatingly. After measurement, the fraction of observed |1⟩s was used to predict the input's class through a learnable nonlinearity. Exact probabilities of possible outcomes were calculated, and the Adam optimizer was used to minimize the expected binary cross-entropy loss for 100 epochs, with an initial learning rate of 0.01 and an exponential learning-rate decay with coefficient 0.99 applied at each epoch. |
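The training schedule in the row above can be made concrete with a minimal sketch. This is not the authors' code: it assumes a plain per-epoch exponential decay (lr_k = 0.01 · 0.99^k) and a standard expected binary cross-entropy, since the paper does not specify the exact scheduler or loss implementation; the function names `lr_at_epoch` and `expected_bce` are hypothetical.

```python
import math

# Hyperparameters reported in the paper's experiment setup.
INITIAL_LR = 0.01   # initial Adam learning rate
DECAY = 0.99        # exponential decay coefficient, applied each epoch
EPOCHS = 100        # number of training epochs

def lr_at_epoch(epoch: int) -> float:
    """Learning rate after `epoch` decay steps (assumed schedule:
    lr * decay**epoch; the paper does not give the exact formula)."""
    return INITIAL_LR * DECAY ** epoch

def expected_bce(p: float, y: int) -> float:
    """Expected binary cross-entropy for predicted class probability p
    and binary label y. The small eps guards against log(0)."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

schedule = [lr_at_epoch(k) for k in range(EPOCHS)]
print(f"epoch 0 lr: {schedule[0]:.4f}, final epoch lr: {schedule[-1]:.4f}")
```

In a framework such as PyTorch, the same schedule would typically be expressed with `torch.optim.Adam` plus `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)`, stepped once per epoch.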