Online Learning of Quantum States
Authors: Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, Ashwin Nayak
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We give three different ways to prove our results, using convex optimization, quantum postselection, and sequential fat-shattering dimension, which have different advantages in terms of parameters and portability. |
| Researcher Affiliation | Collaboration | Scott Aaronson (UT Austin, aaronson@cs.utexas.edu); Xinyi Chen (Google AI Princeton, xinyic@google.com); Elad Hazan (Princeton University and Google AI Princeton, ehazan@cs.princeton.edu); Satyen Kale (Google AI, New York, satyenkale@google.com); Ashwin Nayak (University of Waterloo, ashwin.nayak@uwaterloo.ca) |
| Pseudocode | Yes | Algorithm 1 RFTL for Quantum Tomography |
| Open Source Code | No | The paper does not mention or provide any links to open-source code for the described methodologies. |
| Open Datasets | No | The paper presents theoretical results and algorithms; it does not involve empirical training on publicly available datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments requiring dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper mentions that algorithms have 'run time exponential in the number of qubits in each iteration, but are entirely classical,' which refers to theoretical complexity, not specific hardware used for experiments. |
| Software Dependencies | No | The paper describes theoretical algorithms and proofs; it does not specify any software dependencies or versions. |
| Experiment Setup | No | The paper is theoretical, presenting algorithms and proofs rather than empirical experiments, and therefore does not include details on hyperparameters, training configurations, or other system-level settings for an experimental setup. |
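Since no open-source code accompanies the paper, the following is a minimal illustrative sketch of the kind of update behind Algorithm 1 (RFTL for Quantum Tomography). With the von Neumann entropy regularizer, the RFTL iterate has a matrix-exponentiated-gradient closed form: the next density matrix is proportional to the exponential of the negated, scaled sum of past loss gradients. The dimension, learning rate, loss, and measurement model below are hypothetical choices for a one-qubit demonstration, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # one qubit; the paper's regret bounds scale with log(d) = n qubits

def random_density(d):
    # Random mixed state from a Ginibre matrix (illustrative "unknown" state).
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def random_projector(d):
    # Rank-1 projector E defining a two-outcome measurement (E, I - E).
    v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def rftl_state(grad_sum, eta):
    # RFTL with von Neumann entropy regularizer:
    #   omega = exp(-eta * grad_sum) / Tr(exp(-eta * grad_sum))
    w, v = np.linalg.eigh(grad_sum)          # grad_sum is Hermitian
    e = np.exp(-eta * (w - w.min()))         # shift spectrum for stability
    omega = (v * e) @ v.conj().T
    return omega / np.trace(omega).real

rho = random_density(d)                      # unknown true state
grad_sum = np.zeros((d, d), dtype=complex)
eta = 0.5                                    # hypothetical learning rate
losses = []
for t in range(300):
    omega = rftl_state(grad_sum, eta)        # current prediction
    E = random_projector(d)                  # adversary's measurement
    b = np.trace(E @ rho).real               # true expectation Tr(E rho)
    p = np.trace(E @ omega).real             # predicted expectation
    losses.append((p - b) ** 2)              # L2 loss on the prediction
    grad_sum += 2 * (p - b) * E              # gradient of loss w.r.t. omega

omega = rftl_state(grad_sum, eta)            # final estimate
```

Each iteration costs one Hermitian eigendecomposition, which is the source of the "run time exponential in the number of qubits in each iteration, but entirely classical" remark quoted in the Hardware Specification row: d = 2^n for n qubits.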