Symmetric Pruning in Quantum Neural Networks
Authors: Xinbiao Wang, Junyu Liu, Tongliang Liu, Yong Luo, Yuxuan Du, Dacheng Tao
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive numerical simulations are conducted to validate the analytical results of EQNTK and the effectiveness of SP. (Abstract) ... We carry out numerical simulations to explore the theoretical properties of EQNTK and validate the effectiveness of the SP scheme in GSP. (Section 4, Experiments) ... We utilize three metrics to assess the convergence rate of QNNs, i.e., (1) the loss value L(θ(T)) at the convergence stage; (2) the number of iteration steps T(ϵ) required to achieve the ϵ-convergence; (3) the minimum number of parameterized gates required to achieve ϵ-convergence, which can also be interpreted as the threshold to achieve the over-parameterization regime. (Section 4, Evaluation metrics) A sketch of the T(ϵ) metric appears after the table. |
| Researcher Affiliation | Collaboration | Xinbiao Wang¹,², Junyu Liu³,⁴,⁵,⁶, Tongliang Liu⁷, Yong Luo¹,⁸,⁹, Yuxuan Du², Dacheng Tao². ¹Institute of Artificial Intelligence, School of Computer Science, Wuhan University, China; ²JD Explore Academy; ³Pritzker School of Molecular Engineering, The University of Chicago; ⁴Chicago Quantum Exchange; ⁵Kadanoff Center for Theoretical Physics; ⁶qBraid Co.; ⁷Sydney AI Centre, The University of Sydney; ⁸National Engineering Research Center for Multimedia Software, School of Computer Science, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, China; ⁹Hubei Luojia Laboratory, Wuhan, China |
| Pseudocode | Yes | Algorithm 1: Symmetric pruning (SP) (Section 3) |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper describes the Transverse-field Ising model (TFIM) and Maximum Cut (Max Cut) problem, and for Max Cut mentions generating Erdős–Rényi graphs by "randomly connecting any pair nodes among n nodes with probability p = 0.6". While these are standard problem formulations, the paper does not provide a specific link, DOI, repository, or citation for accessing the *exact* datasets or instances used in their simulations. A generation sketch appears after the table. |
| Dataset Splits | No | The paper describes using specific problem Hamiltonians and an optimization process but does not specify traditional training, validation, and test dataset splits with percentages or counts, as is common in supervised learning tasks. The evaluation focuses on convergence rates and parameter reduction for Ground State Preparation, which is a different paradigm. |
| Hardware Specification | No | The paper discusses quantum hardware concepts (NISQ era, quantum machines) but does not specify the classical computing hardware (e.g., CPU, GPU models) used for running the extensive numerical simulations. |
| Software Dependencies | No | The paper mentions the "Adam optimizer" and refers to the "nauty" package for graph automorphism and the "RTNI package (Fukuda et al., 2019)" but does not specify version numbers for any of these software components. |
| Experiment Setup | Yes | The Adam optimizer is used, where the learning rate is 0.001 and the remaining hyper-parameters follow the default settings. The training of QNNs stops when the loss value is less than 10⁻⁸ or when the change in the loss function is less than 10⁻⁸ three times in a row. The maximum number of iterations is set as T = 10000. The ϵ value in Definition 1 is set as 10⁻⁵ for both TFIM and Max Cut. (Section 4, Initialization of QNNs) ... The learning rate η and the maximum number of iterations T are set as 10⁻⁴ and 1000, respectively. (Appendix G, Training dynamics analysis of symmetric ansatz) A sketch of the stopping rule appears after the table. |
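
On the T(ϵ) metric quoted in the Research Type row: one plausible reading is the first iteration at which the loss enters an ϵ-neighborhood of its converged value. The helper below is a hypothetical Python sketch of that reading; the precise statement lives in the paper's Definition 1, which this report does not reproduce, so treat both the function name and the criterion as assumptions.

```python
# Hypothetical helper for the T(ϵ) metric, assuming "ϵ-convergence" means the
# loss is within ϵ of its final (converged) value; ϵ = 1e-5 per the setup row.
def iterations_to_eps_convergence(history, eps=1e-5):
    """First iteration t with |L(θ(t)) - L(θ(T))| <= eps; history must be non-empty."""
    final = history[-1]
    for t, loss in enumerate(history):
        if abs(loss - final) <= eps:
            return t  # first iteration inside the ϵ-neighborhood
```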
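
Although no dataset artifact is released, the Max Cut instances are fully specified by the quoted rule. The sketch below regenerates comparable instances in plain Python; `erdos_renyi_edges` and `maxcut_values` are hypothetical names, and this is a reconstruction under the quoted p = 0.6 rule, not the authors' code.

```python
import itertools
import random

def erdos_renyi_edges(n, p=0.6, seed=None):
    """Edge list of a G(n, p) graph: each of the n*(n-1)/2 pairs kept with probability p."""
    rng = random.Random(seed)
    return [(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p]

def maxcut_values(n, edges):
    """Cut value of every bitstring x in {0,1}^n: number of edges crossing the cut."""
    return [sum(((x >> u) & 1) != ((x >> v) & 1) for u, v in edges)
            for x in range(2 ** n)]

# Example: a 6-node instance and its optimal cut value.
edges = erdos_renyi_edges(n=6, seed=0)
print(len(edges), max(maxcut_values(6, edges)))
```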
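
The stopping rule in the Experiment Setup row is concrete enough to restate in code. Below is a minimal sketch pairing it with a textbook Adam update (learning rate 0.001; β₁ = 0.9, β₂ = 0.999 assumed as the "default settings"); `loss_and_grad` is a hypothetical placeholder for the QNN loss and its gradient, so this illustrates the protocol rather than the authors' implementation.

```python
import numpy as np

def train(loss_and_grad, theta0, lr=0.001, tol=1e-8, patience=3, max_iter=10_000):
    """Run Adam until loss < tol, the loss change is < tol `patience` times
    in a row, or `max_iter` iterations elapse (the paper's quoted stopping rule)."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)          # Adam first-moment estimate
    v = np.zeros_like(theta)          # Adam second-moment estimate
    b1, b2, eps = 0.9, 0.999, 1e-8    # standard Adam defaults (assumed)
    prev_loss, small_changes = np.inf, 0
    history = []
    for t in range(1, max_iter + 1):
        loss, grad = loss_and_grad(theta)
        history.append(loss)
        if loss < tol:
            break
        small_changes = small_changes + 1 if abs(prev_loss - loss) < tol else 0
        if small_changes >= patience:
            break
        prev_loss = loss
        # Bias-corrected Adam parameter update.
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        theta -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return theta, history
```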