Optimizing Tensor Network Contraction Using Reinforcement Learning
Authors: Eli Meirom, Haggai Maron, Shie Mannor, Gal Chechik
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments indicate that TNCO-RL outperforms state-of-the-art baselines (e.g., graph-partitioning-based methods) in a variety of experimental settings on both synthetic tensor networks and tensor networks that originate from real quantum circuits (Arute et al., 2019). Section 7, Experiments: We conducted experiments on three types of networks: synthetic networks (up to 100 tensors), Sycamore circuit networks (100-400 tensors) and Max-cut (> 5000 tensors) networks. The latter two networks originate from real quantum circuits. (A minimal sketch of the contraction cost model these experiments optimize follows the table.) |
| Researcher Affiliation | Industry | Eli Meirom (1), Haggai Maron (1), Shie Mannor (1), Gal Chechik (1); (1) NVIDIA Research, Israel. |
| Pseudocode | No | The paper refers to an algorithm from a cited work ('Algorithm 1, pp. 12' of Battaglia et al., 2018) but does not contain its own pseudocode or algorithm block for the proposed method. |
| Open Source Code | Yes | Code: https://nv-research-tlv.netlify.app/publication/tensor_contraction/. Correspondence to: Eli Meirom <emeirom@nvidia.com>. |
| Open Datasets | Yes | Following Gray and Kourtis (2021), we generate graphs using the opt_einsum package (Daniel et al., 2018) with an average degree d = 3 and tensor extents that are sampled i.i.d. from a uniform distribution on {2, 3, 4, 5, 6} (see the generation sketch below the table). The Sycamore and Max-cut networks are at the frontline of the quantum supremacy regime and are simulated on supercomputers. The number of both Sycamore circuits and Max-cut graphs is very low (four and one, respectively), and therefore we used the single-network approach for these TNs. |
| Dataset Splits | No | The paper mentions training and testing on synthetic networks and applying the model to new networks, but it does not specify a validation split percentage or method. For synthetic networks, it states: 'In all experiments, we train on randomly generated tensor networks and test on a set consisting of 100 specific equations.' |
| Hardware Specification | Yes | We used NVIDIA DGX-V100 for all experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch Geometric (Fey and Lenssen, 2019)' and 'Stable Baselines (Raffin et al., 2019)' but does not provide specific version numbers for these software libraries. |
| Experiment Setup | Yes | A.1 (Hyperparameters): Table 5 lists the hyperparameter values that were used for our trainable modules. |
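
The quantity the paper's RL agent optimizes is the cost of a contraction ordering. As a point of reference, here is a minimal sketch of the standard pairwise-contraction cost model; the function names and the closed-network assumption (every index shared by exactly two tensors) are ours, not taken from the paper:

```python
# Hypothetical sketch of the standard cost model for contracting two
# tensors in a closed network (each index appears on exactly two tensors).
from math import prod

def pairwise_cost(a_inds: set, b_inds: set, extent: dict) -> int:
    """FLOP estimate: product of the extents of every index on A or B."""
    return prod(extent[i] for i in a_inds | b_inds)

def result_indices(a_inds: set, b_inds: set) -> set:
    """Indices of the merged tensor: shared (summed) indices disappear."""
    return a_inds ^ b_inds

# Example: contracting A_ij with B_jk, extents i=4, j=5, k=6.
extent = {"i": 4, "j": 5, "k": 6}
print(pairwise_cost({"i", "j"}, {"j", "k"}, extent))  # 120 = 4*5*6
print(result_indices({"i", "j"}, {"j", "k"}))         # {'i', 'k'}
```

The total cost of an ordering is the sum of these per-step costs; the choice of ordering determines which intermediate tensors appear and hence how quickly the costs compound.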
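The synthetic-network recipe quoted in the Open Datasets row is concrete enough to sketch. The paper uses the opt_einsum package's generator; the version below instead builds a comparable instance with networkx and scores a baseline ordering with opt_einsum's contract_path, so the graph construction is our assumption rather than the authors' exact code:

```python
# Hedged sketch: a random 3-regular tensor network with extents drawn
# i.i.d. from {2,...,6}, expressed as an einsum equation and scored with
# opt_einsum's greedy path. Not the authors' code; all names are ours.
import networkx as nx
import numpy as np
import opt_einsum as oe

rng = np.random.default_rng(0)
n_tensors, degree = 100, 3  # average degree d = 3 via a regular graph

g = nx.random_regular_graph(degree, n_tensors, seed=0)
edges = [frozenset(e) for e in g.edges]
sym = {e: oe.get_symbol(i) for i, e in enumerate(edges)}  # one index per edge
ext = {e: int(rng.integers(2, 7)) for e in edges}         # uniform on {2,...,6}

terms, shapes = [], []
for v in g.nodes:
    inc = [frozenset((v, u)) for u in g.neighbors(v)]
    terms.append("".join(sym[e] for e in inc))  # this tensor's index string
    shapes.append(tuple(ext[e] for e in inc))   # this tensor's shape

eq = ",".join(terms) + "->"  # closed network: contract down to a scalar
path, info = oe.contract_path(eq, *shapes, shapes=True, optimize="greedy")
print(info.opt_cost)  # FLOP count of the greedy ordering
```

Swapping `optimize="greedy"` for opt_einsum's other strategies yields the kind of classical baselines the paper compares its learned orderings against.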