Extrapolation Towards Imaginary 0-Nearest Neighbour and Its Improved Convergence Rate

Authors: Akifumi Okuno, Hidetoshi Shimodaira

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We theoretically prove that the MS-k-NN attains the improved rate, which coincides with the existing optimal rate under some conditions. Numerical experiments are conducted for performing MS-k-NN.
Researcher Affiliation | Collaboration | Akifumi Okuno (1,3) and Hidetoshi Shimodaira (2,3); 1: School of Statistical Thinking, The Institute of Statistical Mathematics; 2: Graduate School of Informatics, Kyoto University; 3: RIKEN Center for Advanced Intelligence Project
Pseudocode | No | The paper describes the minimization problem formally, $\hat{b} := \operatorname*{arg\,min}_{b \in \mathbb{R}^{C+1}} \sum_{v=1}^{V} \bigl( \hat{\eta}^{(k\mathrm{NN})}_{n,k_v}(X_*) - b_0 - \sum_{c=1}^{C} b_c r_v^{2c} \bigr)^2$, but this is a mathematical expression, not pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | We employ datasets from UCI Machine Learning Repository (Dua & Graff, 2017).
Dataset Splits | Yes | Feature vectors are first normalized, and then randomly divided into 70% for prediction (n_pred = 0.7n) and the remaining for test query.
Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments (e.g., CPU, GPU models, or cloud computing instances).
Software Dependencies | No | The paper does not mention specific software names with version numbers required to replicate the experiments.
Experiment Setup | Yes | Parameter tuning: For unweighted and weighted k-NN, we first fix $k := V \lfloor n_{\mathrm{pred}}^{4/(4+d)}/V \rfloor \approx n_{\mathrm{pred}}^{4/(4+d)}$. Using the same $k$, we simply choose $k_1 := k/V, k_2 = 2k/V, \ldots, k_V = k$ with $V = 5$ for MS-k-NN. Regression in MS-k-NN is ridge regularized with the coefficient $\lambda = 10^{-4}$.
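
To make the quoted "Dataset Splits" and "Experiment Setup" details concrete, the sketch below reconstructs the preprocessing step in plain NumPy: normalize the feature vectors, split 70%/30% into prediction and test-query sets, and fix the baseline neighbourhood size k near n_pred^{4/(4+d)}. This is a minimal sketch, not the authors' released code; the z-score normalization, the rounding of k to a multiple of V, and the function name split_and_tune are assumptions made for illustration.

```python
import numpy as np

def split_and_tune(X, y, seed=0, V=5):
    """Hypothetical preprocessing sketch following the quoted setup:
    normalize features, split 70%/30% into prediction and test-query
    sets (n_pred = 0.7 n), and fix k roughly equal to n_pred^{4/(4+d)},
    rounded to a multiple of V so the MS-k-NN scales divide evenly."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Normalization (assumption: z-scoring; the paper only says "normalized").
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    # Random 70/30 split into prediction points and test queries.
    perm = rng.permutation(n)
    n_pred = int(0.7 * n)
    pred, test = perm[:n_pred], perm[n_pred:]

    # Baseline k ~ n_pred^{4/(4+d)}, made divisible by V.
    k = max(V, V * round(n_pred ** (4.0 / (4.0 + d)) / V))

    return (X[pred], y[pred]), (X[test], y[test]), k
```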
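
The "Pseudocode" row quotes the MS-k-NN minimization problem itself. The sketch below shows one way it can be realized for binary classification: unweighted k_v-NN estimates at V scales are regressed on even powers of the k_v-th neighbour distance with a small ridge penalty (lambda = 1e-4, as in the quoted setup), and the fitted intercept b_0 is the extrapolated "0-NN" estimate. The choice C = 2 and the function and variable names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def ms_knn_predict(X_pred, y_pred, x_query, k, V=5, C=2, lam=1e-4):
    """Sketch of an MS-k-NN prediction for a single query point x_query.

    y_pred holds binary labels in {0, 1}; the returned value is the
    ridge-regularized extrapolation of the k_v-NN estimates to radius 0.
    """
    dist = np.linalg.norm(X_pred - x_query, axis=1)
    order = np.argsort(dist)

    etas, radii = [], []
    for v in range(1, V + 1):
        k_v = v * k // V                      # k_1 = k/V, ..., k_V = k
        nn = order[:k_v]
        etas.append(y_pred[nn].mean())        # unweighted k_v-NN estimate
        radii.append(dist[order[k_v - 1]])    # distance to the k_v-th neighbour
    etas, radii = np.asarray(etas), np.asarray(radii)

    # Design matrix [1, r_v^2, ..., r_v^{2C}] and ridge-regularized fit:
    # b = argmin_b sum_v (eta_v - Phi_v b)^2 + lam * ||b||^2.
    Phi = np.column_stack([radii ** (2 * c) for c in range(C + 1)])
    b = np.linalg.solve(Phi.T @ Phi + lam * np.eye(C + 1), Phi.T @ etas)
    return b[0]                               # extrapolated estimate at r = 0
```

Predicted labels for the test queries would then be obtained by thresholding ms_knn_predict(...) at 0.5, though the exact decision rule is not spelled out in the quoted excerpts.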