Distributionally robust weighted k-nearest neighbors

Authors: Shixiang Zhu, Liyan Xie, Minghe Zhang, Rui Gao, Yao Xie

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the competitive performance of our algorithm compared to the state-of-the-art in the few-training-sample setting with various real-data experiments." ... "In this section, we evaluate our method and eight alternative approaches on four commonly-used image data sets: MNIST [28], CIFAR-10 [25], Omniglot [27], and present a set of comprehensive numerical examples."
Researcher Affiliation | Academia | Shixiang Zhu (Carnegie Mellon University, shixianz@andrew.cmu.edu); Liyan Xie (The Chinese University of Hong Kong, Shenzhen, xieliyan@cuhk.edu.cn); Minghe Zhang (Georgia Institute of Technology, minghe_zhang@gatech.edu); Rui Gao (University of Texas at Austin, rui.gao@mccombs.utexas.edu); Yao Xie (Georgia Institute of Technology, yao.xie@isye.gatech.edu)
Pseudocode | Yes | "The algorithm is summarized in Algorithm 1 (Appendix A)." Appendix A contains 'Algorithm 1 Dr.k-NN'.
Open Source Code | No | The paper's checklist answers '[Yes]' to including code, data, and instructions, but the paper provides neither a URL to a source-code repository nor an explicit statement in the main text that code is available in the supplementary material with specific access details.
Open Datasets | Yes | "We evaluate our method and eight alternative approaches on four commonly-used image data sets: MNIST [28], CIFAR-10 [25], Omniglot [27]... We also test our method on two medical diagnosis data sets: Lung Cancer [12], and COVID-19 CT [44]"
Dataset Splits | No | The paper states that it trains on M-class K-sample tasks and tests on 1,000 unseen samples. It mentions hyper-parameter tuning by cross-validation, but it does not specify exact training/validation/test splits by percentage or sample count, nor a concrete cross-validation scheme such as 5-fold cross-validation.
Hardware Specification | Yes | "All experiments are performed on Google Colaboratory (Pro version) with 12GB RAM and dual-core Intel processors, which speed up to 2.3 GHz (without GPU)."
Software Dependencies | No | The paper mentions using the Adam optimizer and a differentiable convex optimization layer from cited work [2], but it does not give version numbers for these or for other key software components such as the programming language or libraries (an illustrative sketch of such a layer is given after the table).
Experiment Setup | Yes | "The Adam optimizer [23] is adopted for all experiments conducted in this paper, where the learning rate is 10^-2. The mini-batch size is 32... We use the Euclidean distance c(ω, ω′) = ‖ω − ω′‖₂ throughout our experiment... we use the same network structure in matching network, prototypical network, and MetaOptNet as we described above... single CNN layer... where the kernel size is 3, the stride is 1 and the width of the output layer is d = 400." (A sketch of this setup is given after the table.)
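
The quoted experiment setup can be summarized in a short PyTorch sketch. This is a hedged illustration only: the paper specifies Adam with learning rate 10^-2, mini-batch size 32, a single CNN layer with kernel size 3 and stride 1, an output width d = 400, and a Euclidean distance; the input shape (28x28 grayscale), channel count, padding, pooling, and the names `SingleConvEmbedding` and `pairwise_euclidean` are assumptions added for illustration and are not stated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleConvEmbedding(nn.Module):
    """Single-CNN-layer feature extractor with output width d = 400, as quoted above.
    Input size (28x28 grayscale), 8 channels, padding, and pooling are assumptions."""
    def __init__(self, in_channels=1, out_dim=400):
        super().__init__()
        # kernel size 3, stride 1 (from the quoted setup); 8 channels is an assumption
        self.conv = nn.Conv2d(in_channels, 8, kernel_size=3, stride=1, padding=1)
        self.fc = nn.Linear(8 * 14 * 14, out_dim)  # width of the output layer d = 400

    def forward(self, x):
        h = F.relu(self.conv(x))
        h = F.max_pool2d(h, 2)          # assumed down-sampling step
        return self.fc(h.flatten(1))

model = SingleConvEmbedding()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # Adam, learning rate 10^-2

# Euclidean distance between embeddings, c(w, w') = ||w - w'||_2
def pairwise_euclidean(a, b):
    return torch.cdist(a, b, p=2)

# mini-batch size 32 (from the quoted setup)
x = torch.randn(32, 1, 28, 28)
z = model(x)
dists = pairwise_euclidean(z, z)  # 32 x 32 pairwise distance matrix
```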
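
Regarding the Software Dependencies row: the "differentiable convex optimization layer" cited as [2] is of the kind provided by the cvxpylayers library. The sketch below is a minimal, generic example of such a layer (a parametric least-squares problem), not the specific convex program solved in Dr.k-NN; the problem sizes `n` and `m` and the objective are illustrative assumptions.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Illustrative problem sizes (not taken from the paper).
n, m = 2, 3

# Define a parametric convex problem with CVXPY.
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)))

# Wrap it as a differentiable PyTorch layer: the optimal x*(A, b)
# can be back-propagated through with respect to A and b.
layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_t = torch.randn(m, n, requires_grad=True)
b_t = torch.randn(m, requires_grad=True)
solution, = layer(A_t, b_t)   # solve the convex problem in the forward pass
solution.sum().backward()     # gradients flow back to A_t and b_t
```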