Quantum Perceptron Models

Authors: Ashish Kapoor, Nathan Wiebe, Krysta Svore

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We develop two quantum algorithms for perceptron learning. The first algorithm exploits quantum information processing to determine a separating hyperplane using a number of steps sublinear in the number of data points N, namely O(√N). The second algorithm illustrates how the classical mistake bound of O(1/γ²) can be further improved to O(1/γ) through quantum means, where γ denotes the margin. Such improvements are achieved through the application of quantum amplitude amplification to the version space interpretation of the perceptron model. (An illustrative classical baseline for these bounds is sketched after this table.)
Researcher Affiliation | Industry | Nathan Wiebe, Microsoft Research, Redmond, WA 98052, nawiebe@microsoft.com; Ashish Kapoor, Microsoft Research, Redmond, WA 98052, akapoor@microsoft.com; Krysta M. Svore, Microsoft Research, Redmond, WA 98052, ksvore@microsoft.com
Pseudocode | No | The paper refers to 'Algorithm 2' but does not present its steps in a structured pseudocode block or algorithm listing.
Open Source Code | No | The paper does not provide any statement about making its code open source, nor any links to code repositories.
Open Datasets | No | The paper refers to 'training examples' and a 'training set' generically (e.g., 'N separable training examples {φ_1, ..., φ_N}'), but does not specify a concrete, named public dataset with access information (e.g., a link, DOI, or formal citation).
Dataset Splits | No | The paper does not specify any training/validation/test dataset splits or mention a validation set.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup that would require hardware specifications.
Software Dependencies | No | The paper does not mention any software dependencies or specific version numbers.
Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, such as hyperparameters or system-level training settings.
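
For context on the two bounds quoted in the Research Type row, the following is a minimal sketch of the classical online perceptron that the paper's quantum algorithms improve upon. It is an illustrative baseline only, not the paper's method: the function name, toy data, and epoch cap are assumptions of ours, while the paper replaces the classical scan over the N training points with amplitude-amplification-based search, as stated in the abstract.

    import numpy as np

    def perceptron_train(X, y, max_epochs=100):
        # Classical online perceptron on separable data.
        # For unit-norm examples separable with margin gamma, the total
        # number of updates is at most 1/gamma^2 (Novikoff's bound) --
        # the classical mistake bound the paper improves to O(1/gamma).
        w = np.zeros(X.shape[1])
        mistakes = 0
        for _ in range(max_epochs):
            clean_pass = True
            for xi, yi in zip(X, y):
                if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                    w = w + yi * xi           # standard perceptron update
                    mistakes += 1
                    clean_pass = False
            if clean_pass:                    # a full pass with no mistakes: converged
                break
        return w, mistakes

    # Toy usage on two separable clusters in 2-D (illustrative data, not from the paper)
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
    y = np.concatenate([np.ones(20), -np.ones(20)])
    w, mistakes = perceptron_train(X, y)

Each pass of the inner loop above costs O(N) classical queries to find a misclassified point; the paper's first quantum algorithm reduces this search to O(√N), and its second reduces the γ-dependence of the mistake complexity from O(1/γ²) to O(1/γ) via amplitude amplification over the version-space interpretation.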