An adaptive nearest neighbor rule for classification

Authors: Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide theory and experiments that demonstrate that the algorithm performs comparably to, and sometimes better than, k-NN with an optimal choice of k. We performed a few experiments using real-world data sets from computer vision and genomics (see Section C)."
Researcher Affiliation | Academia | Akshay Balsubramani (abalsubr@stanford.edu), Sanjoy Dasgupta (dasgupta@eng.ucsd.edu), Yoav Freund (yfreund@eng.ucsd.edu), Shay Moran (shaym@princeton.edu)
Pseudocode | Yes | "Figure 2: The adaptive k-NN (AKNN) classifier."
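
The paper's pseudocode is its Figure 2. As a reading aid, here is a minimal Python sketch of the AKNN rule it describes: grow the neighborhood around the query one neighbor at a time and commit to the majority label at the first size k that passes a significant-majority test (the A·√k threshold below follows the parametrization quoted under Experiment Setup). The binary ±1 labels, the function name, and the sklearn-based neighbor search are our assumptions, not the authors' code.

    # Minimal sketch of the AKNN classifier (Figure 2 of the paper).
    # Assumptions: binary labels in {-1, +1}; threshold A * sqrt(k) on the
    # signed label count, per the paper's experimental parametrization;
    # names here are ours, not the authors'.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def aknn_predict(X_train, y_train, x_query, A=1.0):
        """Label x_query at the first neighborhood size k with a significant
        majority; return 0 (abstain) if no k up to n qualifies."""
        X_train, y_train = np.asarray(X_train), np.asarray(y_train)
        n = len(X_train)
        # Indices of all training points, sorted by distance to x_query.
        _, idx = NearestNeighbors(n_neighbors=n).fit(X_train).kneighbors([x_query])
        labels = y_train[idx[0]]
        for k in range(1, n + 1):
            bias = labels[:k].sum()           # signed label count among k nearest
            if abs(bias) > A * np.sqrt(k):    # significant-majority test
                return int(np.sign(bias))
        return 0                              # abstain: no scale was confident enough
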
Open Source Code | No | The paper does not provide a direct link or explicit statement that its own source code is openly available.
Open Datasets | Yes | "We performed a few experiments using real-world data sets from computer vision and genomics (see Section C)." [MNI96] MNIST dataset. http://yann.lecun.com/exdb/mnist/, 1996. [not11] notMNIST dataset. http://yaroslavb.com/upload/notMNIST/, 2011. Accessed: 2019-05-02. [Mou18] Mouse cell atlas dataset. ftp://ngs.sanger.ac.uk/production/teichmann/BBKNN/MouseAtlas.zip, 2018. Accessed: 2019-05-02.
Dataset Splits | No | The paper mentions using datasets for experiments but does not specify training, validation, or test splits with percentages or sample counts.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers used for the experiments.
Experiment Setup | Yes | "Parametrization: We replace Equation (6) with A√k, where A is a confidence parameter corresponding to the theory's δ (given n)."
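
As a toy illustration of how the confidence parameter A behaves in the sketch above (hypothetical data, not the paper's vision or genomics experiments): raising A makes the significant-majority test stricter, so AKNN commits only at larger neighborhood sizes.

    # Hypothetical toy data: two Gaussian blobs labeled -1 and +1.
    import numpy as np
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
    y = np.array([-1] * 100 + [1] * 100)
    for A in (0.5, 1.0, 2.0):
        # Larger A -> the test is harder to pass, so AKNN commits at larger k.
        print(A, aknn_predict(X, y, [-1.5, 0.0], A=A))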