Neural Auto-designer for Enhanced Quantum Kernels
Authors: Cong Lei, Yuxuan Du, Peng Mi, Jun Yu, Tongliang Liu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive numerical simulations on different datasets, we demonstrate the superiority of our proposal over prior methods, especially in its ability to eliminate the kernel concentration issue and to identify feature maps with prediction advantages. Our work not only unlocks the potential of quantum kernels for enhancing real-world tasks but also highlights the substantial role of deep learning in advancing quantum machine learning. |
| Researcher Affiliation | Collaboration | Cong Lei¹, Yuxuan Du², Peng Mi¹, Jun Yu³, Tongliang Liu¹; ¹Sydney AI Centre, School of Computer Science, The University of Sydney; ²JD Explore Academy; ³Department of Automation, University of Science and Technology of China. Corresponding authors: duyuxuan123@gmail.com, tongliang.liu@sydney.edu.au |
| Pseudocode | Yes | Algorithm 1: Pseudocode of QuKerNet. |
| Open Source Code | Yes | The source code for our implementation will be made publicly available in a GitHub repository: https://github.com/tmllab/2024_ICLR_QuKerNet. |
| Open Datasets | Yes | The first dataset utilized is a tailored version of the MNIST dataset (LeCun & Bottou, 1998), consisting of handwritten digit images. The second dataset employed is a tailored Credit Card (CC) dataset (Dal Pozzolo et al., 2017), commonly used for fraud detection and risk assessment in the financial industry. |
| Dataset Splits | Yes | Furthermore, PCA is used to reduce the dimensionality of the data to 8 dimensions. In this case, M = 500, N = 4, L0 ∈ {1, 2, 3}, |S| = 20000, k = 10, |Θ| = 10. All experiments were run three times with different random seeds (γ ∈ {1, 2, 3}/√Var[x_j^(i)] for RBFK; see the bandwidth sketch after the table). Experimental results are shown in Fig. 14(a). Fig. 14(a) shows that the performance gaps of the kernels identified by QuKerNet between the noisy and noiseless cases are no greater than 3.5% (i.e., 61.67% versus 63.33% for QuKerNet-1, 70% versus 73.33% for QuKerNet-2). Although HEAK shows no performance difference between the noisy and noiseless conditions, its performance is significantly lower than that of the kernels selected by QuKerNet (i.e., 40% versus 70% under noise). These results indicate that QuKerNet adapts well to realistic noise conditions and can be used effectively on practical devices. |
| Hardware Specification | Yes | All experiments are run on an AMD EPYC 7302 16-core processor (3.0 GHz) with 188 GB of memory (Ubuntu). |
| Software Dependencies | No | All simulations are conducted in Python, using the PennyLane (Bergholm et al., 2018), PyTorch (Paszke et al., 2019), and JAX (Bradbury et al., 2018) libraries. |
| Experiment Setup | Yes | Here we only introduce the general hyper-parameter settings and defer the specific hyper-parameter settings to the corresponding experiments. Except for the KTA experiment, N = 8, L0 ∈ {1, 2, 3, 4, 5}, M = 50000, k = 10, |Θ| = 20. The neural predictor is optimized by Adam with a learning rate of 0.01 for 30 epochs, using the smooth L1 loss as the criterion (a hedged training sketch follows the table). Each setting is repeated 5 times to collect the statistical results. |
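The Dataset Splits row quotes an RBFK baseline whose bandwidth is drawn from γ ∈ {1, 2, 3}/√Var[x_j^(i)]. Below is a minimal sketch of that bandwidth grid, assuming the per-feature variances are averaged into a single scale (the aggregation is not specified in the quoted text) and using scikit-learn's SVC, which the paper does not name; all data here are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_gamma_grid(X: np.ndarray) -> list[float]:
    # gamma in {1, 2, 3} / sqrt(Var[x_j]); averaging the per-feature
    # variances into one scale is an assumption for illustration.
    scale = np.sqrt(X.var(axis=0).mean())
    return [c / scale for c in (1.0, 2.0, 3.0)]

# Placeholder data: 100 samples with 8 features, e.g. 8 PCA components.
X = np.random.default_rng(0).normal(size=(100, 8))
y = np.random.default_rng(1).integers(0, 2, size=100)

for gamma in rbf_gamma_grid(X):
    clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)  # one RBFK baseline per gamma
```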
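The Experiment Setup row specifies Adam with a 0.01 learning rate, 30 epochs, and a smooth L1 criterion for the neural predictor. Here is a minimal PyTorch sketch under those stated settings; the MLP architecture and the feature/score tensors are hypothetical placeholders, not the paper's actual predictor or data pipeline.

```python
import torch
import torch.nn as nn

class KernelPredictor(nn.Module):
    """Toy MLP standing in for the neural predictor (architecture assumed)."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def train_predictor(features: torch.Tensor, scores: torch.Tensor) -> KernelPredictor:
    model = KernelPredictor(features.shape[1])
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # lr from the paper
    criterion = nn.SmoothL1Loss()                              # criterion from the paper
    for _ in range(30):                                        # 30 epochs from the paper
        optimizer.zero_grad()
        loss = criterion(model(features), scores)
        loss.backward()
        optimizer.step()
    return model

# Usage with placeholder tensors: encodings of candidate feature maps
# regressed against their (here random) kernel-quality scores.
feats = torch.randn(128, 16)
kta = torch.rand(128)
model = train_predictor(feats, kta)
```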