MG-Net: Learn to Customize QAOA with Circuit Depth Awareness

Authors: Yang Qian, Xinbiao Wang, Yuxuan Du, Yong Luo, Dacheng Tao

NeurIPS 2024

Reproducibility assessment: each entry below gives the variable, the extracted result, and the LLM response.
Research Type: Experimental
"Systematic simulations, encompassing Ising models and weighted Max-Cut instances with up to 64 qubits, substantiate our theoretical findings, highlighting MG-Net's superior performance in terms of both approximation ratio and efficiency." "Extensive experiments on the Transverse-field Ising model and Max-Cut up to 64 qubits verify our theoretical discoveries and demonstrate the advantage of MG-Net in achieving higher approximation ratios at various circuit depths compared to other quantum and traditional methods."
Researcher Affiliation: Academia
(1) School of Computer Science, Faculty of Engineering, University of Sydney, New South Wales 2008, Australia; (2) Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, China; (3) College of Computing and Data Science, Nanyang Technological University, Singapore 639798, Singapore.
Pseudocode: Yes
Algorithm 1: Greedy Algorithm for weighted Max-Cut; Algorithm 2: Goemans-Williamson Algorithm for Max-Cut; Algorithm 3: Construction of parameter group pool.
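For orientation, here is a minimal sketch of a standard greedy heuristic for weighted Max-Cut in Python. It is an assumption-laden illustration: the paper's Algorithm 1 may use a different node ordering or tie-breaking rule, and the networkx graph representation is our choice, not a detail from the paper.

```python
import networkx as nx

def greedy_weighted_maxcut(graph: nx.Graph):
    """Place each node on the side of the cut that maximizes the
    weight of edges crossing to nodes already placed.
    SKETCH ONLY: the paper's Algorithm 1 may differ in details."""
    side_a, side_b = set(), set()
    for node in graph.nodes():
        w_to_a = sum(graph[node][nbr].get("weight", 1.0)
                     for nbr in graph.neighbors(node) if nbr in side_a)
        w_to_b = sum(graph[node][nbr].get("weight", 1.0)
                     for nbr in graph.neighbors(node) if nbr in side_b)
        # Joining the side opposite the heavier neighbourhood cuts more weight.
        (side_b if w_to_a >= w_to_b else side_a).add(node)
    cut_value = sum(d.get("weight", 1.0)
                    for u, v, d in graph.edges(data=True)
                    if (u in side_a) != (v in side_a))
    return side_a, side_b, cut_value

g = nx.Graph()
g.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 1.5)])
print(greedy_weighted_maxcut(g))
```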
Open Source Code: Yes
The code is released at https://github.com/QQQYang/MG-Net.
Open Datasets: No
The paper describes the construction of its own dataset for Max-Cut and TFIM problems by sampling parameters and problem instances; it does not provide access information (link, DOI, or formal citation) for a publicly available or open dataset used for training. For example, Appendix D.1 states: "The training dataset D^Tr_ce in Sec. 4.3 contains S = 100 instances for both two tasks with size up to N = 64 qubits" and "To find the minimal cost that can be achieved by a QAOA circuit during the construction of the training dataset in stage 1, we run the same QAOA circuit 10 times and record their cost values."
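The quoted stage-1 procedure (taking the minimum over 10 runs of the same circuit) can be expressed as a one-line helper; `run_circuit` below is a hypothetical callable standing in for a full QAOA optimization, which the excerpt does not spell out.

```python
import random

def minimal_cost_over_restarts(run_circuit, num_restarts=10):
    """Record the minimal cost over repeated runs of the same QAOA
    circuit, mirroring the stage-1 dataset construction quoted above.
    `run_circuit` is a HYPOTHETICAL seed -> final-cost callable."""
    return min(run_circuit(seed) for seed in range(num_restarts))

# Toy stand-in for a real QAOA optimization, for illustration only.
toy_run = lambda seed: random.Random(seed).uniform(-1.0, 0.0)
print(minimal_cost_over_restarts(toy_run))
```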
Dataset Splits: No
The paper specifies a training dataset (D^Tr_ce) and a test dataset (D^Te) but does not explicitly mention a separate validation dataset or its split percentages/counts. Section 5.1 mentions: "The training dataset D^Tr_ce in Sec. 4.3 contains S = 100 instances for both two tasks with size up to N = 64 qubits, while the test dataset D^Te contains another 100 problem instances which are different from that of D^Tr_ce."
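To make the reported 100/100 split concrete, the sketch below samples disjoint training and test sets of random weighted Max-Cut instances. The graph family (Erdős–Rényi), edge density, and weight range are our assumptions; the paper does not state its instance distribution.

```python
import random
import networkx as nx

def sample_instances(num_instances, num_qubits, seed):
    """Sample random weighted graphs as Max-Cut instances.
    HYPOTHETICAL distribution: the paper does not specify it."""
    rng = random.Random(seed)
    instances = []
    for _ in range(num_instances):
        g = nx.erdos_renyi_graph(num_qubits, p=0.5, seed=rng.randrange(2**31))
        for u, v in g.edges():
            g[u][v]["weight"] = rng.uniform(0.0, 1.0)
        instances.append(g)
    return instances

# S = 100 training instances and 100 further test instances, drawn
# with different seeds so the two sets do not coincide.
train_set = sample_instances(100, num_qubits=16, seed=0)
test_set = sample_instances(100, num_qubits=16, seed=1)
```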
Hardware Specification: Yes
"All QAOA circuits are implemented by PennyLane [64] and run on a classical device with an Intel(R) Xeon(R) Gold 6267C CPU @ 2.60GHz and 128 GB memory. MG-Net is implemented by PyTorch [65] and is trained on a single NVIDIA GeForce RTX 2080 Ti with 12 GB graphics memory."
Software Dependencies: No
The paper mentions software such as PennyLane [64], PyTorch [65], and the Adam optimizer, but it does not specify exact version numbers for these packages or libraries, which would be necessary for precise reproducibility. It also mentions Python but without a version.
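Because no versions are pinned, anyone attempting a reproduction should record their own environment; a minimal snippet for doing so, using only standard version attributes and PennyLane's built-in `qml.about()` report:

```python
import sys
import torch
import pennylane as qml

# Record the environment, since the paper does not pin versions.
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
qml.about()  # prints the PennyLane version, platform, and installed plugins
```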
Experiment Setup: Yes
The hyperparameters for optimizing MG-Net and the QAOA circuit are listed in Tab. 3. QAOA: Adam optimizer, learning rate 0.15, 40 epochs. MG-Net: Adam optimizer, learning rate 1e-4, 250 epochs, lambda_e = 1.0, lambda_r = 1.0.
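To show how the reported QAOA settings (Adam, learning rate 0.15, 40 epochs) map onto code, here is a minimal PennyLane sketch for a small Max-Cut instance. The 4-node graph and depth p = 2 are illustrative choices of ours; only the optimizer, learning rate, and epoch count come from Tab. 3.

```python
import networkx as nx
import pennylane as qml
from pennylane import qaoa, numpy as np

graph = nx.cycle_graph(4)             # illustrative instance; the paper scales to 64 qubits
cost_h, mixer_h = qaoa.maxcut(graph)  # cost and mixer Hamiltonians

depth = 2                             # circuit depth p, an illustrative choice
dev = qml.device("default.qubit", wires=graph.number_of_nodes())

def qaoa_layer(gamma, beta):
    qaoa.cost_layer(gamma, cost_h)
    qaoa.mixer_layer(beta, mixer_h)

@qml.qnode(dev)
def cost(params):
    for w in graph.nodes():
        qml.Hadamard(wires=w)         # uniform superposition over bitstrings
    qml.layer(qaoa_layer, depth, params[0], params[1])
    return qml.expval(cost_h)

# Settings reported in Tab. 3: Adam optimizer, lr = 0.15, 40 epochs.
opt = qml.AdamOptimizer(stepsize=0.15)
params = np.array([[0.5] * depth, [0.5] * depth], requires_grad=True)
for _ in range(40):
    params = opt.step(cost, params)
print("final cost:", cost(params))
```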