Minimizing Quadratic Functions in Constant Time

Authors: Kohei Hayashi, Yuichi Yoshida

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical performance (accuracy and runtime) is positively confirmed by numerical experiments: "In this section, we demonstrate the effectiveness of our method by experiment."
Researcher Affiliation | Collaboration | Kohei Hayashi (National Institute of Advanced Industrial Science and Technology, hayashi.kohei@gmail.com); Yuichi Yoshida (National Institute of Informatics and Preferred Infrastructure, Inc., yyoshida@nii.ac.jp)
Pseudocode | Yes | Algorithm 1. Input: an integer n ∈ ℕ, query accesses to the matrix A ∈ ℝ^{n×n} and to the vectors d, b ∈ ℝ^n, and ε, δ > 0. 1: S ← a sequence of k = k(ε, δ) indices independently and uniformly sampled from {1, 2, …, n}. 2: return (n²/k²) · min_{v ∈ ℝ^k} p_{k, A|S, d|S, b|S}(v).
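The sampled algorithm above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' implementation: it assumes the restricted objective has the form p_k(v) = ⟨A|S v, v⟩ + k⟨diag(d|S) v, v⟩ + k⟨b|S, v⟩ and that this quadratic is convex, so its minimizer is a stationary point of the (assumed PSD) Hessian.

```python
import numpy as np

def approx_min_quadratic(A, d, b, k, rng=None):
    """Sketch of Algorithm 1: estimate min_v <Av, v> + n<diag(d)v, v> + n<b, v>.

    Samples k indices uniformly with replacement, minimizes the induced
    k-dimensional quadratic, and rescales the minimum by n^2/k^2.
    Assumes the restricted quadratic is convex (PSD Hessian).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    S = rng.integers(0, n, size=k)                 # k uniformly sampled indices
    A_S = A[np.ix_(S, S)]                          # restricted k x k submatrix
    d_S, b_S = d[S], b[S]
    # p_k(v) = v^T A_S v + k v^T diag(d_S) v + k b_S^T v
    # Stationary point: (A_S + A_S^T + 2k diag(d_S)) v = -k b_S
    H = A_S + A_S.T + 2 * k * np.diag(d_S)
    v = np.linalg.lstsq(H, -k * b_S, rcond=None)[0]
    p_min = v @ A_S @ v + k * (d_S * v * v).sum() + k * (b_S @ v)
    return (n / k) ** 2 * p_min
```

Because the estimator only ever touches the k sampled rows and columns, its cost depends on k but not on n, which is the point of the constant-time guarantee.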
Open Source Code | Yes | The program code is available at https://github.com/hayasick/CTOQ.
Open Datasets | No | We randomly generated the data sets as xᵢ ~ N(1, 0.5) for i ∈ [n] and x′ⱼ ~ N(1.5, 0.5) for j ∈ [n′], where N(µ, σ²) denotes the Gaussian distribution with mean µ and variance σ². (No specific public dataset, link, or citation for the dataset itself is provided.)
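The quoted data generation is straightforward to reproduce. A minimal sketch, assuming both sample sizes equal 200 (n′ is not fixed in the quoted text), and noting that 0.5 is a variance while NumPy's `normal` expects a standard deviation:

```python
import numpy as np

# Synthetic data as described: x_i ~ N(1, 0.5), x'_j ~ N(1.5, 0.5),
# where the second argument is the VARIANCE sigma^2.
rng = np.random.default_rng(0)
n, n_prime = 200, 200          # assumed sizes; n' is unspecified in the quote
x = rng.normal(loc=1.0, scale=np.sqrt(0.5), size=n)
x_prime = rng.normal(loc=1.5, scale=np.sqrt(0.5), size=n_prime)
```

Passing 0.5 directly as `scale` would silently generate data with variance 0.25, a common reproduction pitfall.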
Dataset Splits | Yes | σ² and λ were chosen by 5-fold cross-validation as suggested in [21].
Hardware Specification | Yes | All experiments were conducted on an Amazon EC2 c3.8xlarge instance.
Software Dependencies | No | We used GLPK (https://www.gnu.org/software/glpk/) as the QP solver. (No version number for GLPK is provided.)
Experiment Setup | Yes | In this experiment, we used the Gaussian kernel φ(x, y) = exp(−(x − y)²/(2σ²)) and set n = 200 and α = 0.5; σ² and λ were chosen by 5-fold cross-validation as suggested in [21]. We randomly generated inputs as Aᵢⱼ ~ U[−1, 1], dᵢ ~ U[0, 1], and bᵢ ~ U[−1, 1] for i, j ∈ [n], where U[a, b] denotes the uniform distribution with support [a, b].
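The quoted setup is easy to reproduce in NumPy. A minimal sketch, with σ² set to a placeholder value of 1.0 (the paper selects σ² and λ by cross-validation, so the constant below is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
sigma2 = 1.0  # placeholder; chosen by 5-fold cross-validation in the paper

def gaussian_kernel(x, y, sigma2):
    """Gaussian kernel phi(x, y) = exp(-(x - y)^2 / (2 sigma^2))."""
    return np.exp(-(x - y) ** 2 / (2 * sigma2))

# Random quadratic-program inputs as in the quoted setup
A = rng.uniform(-1.0, 1.0, size=(n, n))   # A_ij ~ U[-1, 1]
d = rng.uniform(0.0, 1.0, size=n)         # d_i  ~ U[0, 1]
b = rng.uniform(-1.0, 1.0, size=n)        # b_i  ~ U[-1, 1]
```

Note the minus sign inside the exponential: dropping it (as the raw extraction did) would make the kernel explode rather than decay with distance.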