Consistent Non-Parametric Methods for Maximizing Robustness

Authors: Robi Bhattacharjee, Kamalika Chaudhuri

NeurIPS 2021

Reproducibility assessment — each entry lists the variable, the result, and the LLM's supporting response:
Research Type: Experimental
  LLM Response: Section 5 (Validation) states: "To complement our theoretical large sample results for non-parametric classifiers, we now include several experiments to understand their behavior for finite samples." The Results and Discussion section adds: "The results are presented in Figure 4; the left panel is for the exponential kernel, while the right one is for the polynomial kernel."
Researcher Affiliation: Academia
  LLM Response: "Robi Bhattacharjee, University of California San Diego, rcbhatta@eng.ucsd.edu; Kamalika Chaudhuri, University of California San Diego, kamalika@eng.ucsd.edu"
Pseudocode: No
  LLM Response: The paper describes various non-parametric classifiers but does not present any pseudocode or algorithm blocks.
Open Source Code: Yes
  LLM Response: From the reproducibility checklist: "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] in the appendix"
Open Datasets: No
  LLM Response: "To be able to measure performance with increasing data size, we look at a simple synthetic dataset over overlayed circles (see Figure 5 for an illustration) with support designed so that the data is intrinsically multiscaled." and "Further details about our dataset are given in section D."
Dataset Splits: No
  LLM Response: "To compute the empirical astuteness of a kernel classifier W_K about a test point x, we perform a grid search over all points in V_x^κ to ensure that all points in the robustness region are labeled correctly." and "We then generate a training set of size n, and use it to construct our classifiers. We generate a test set of size N_test = 1000 to evaluate the astuteness." There is no mention of a validation set.
Hardware Specification: No
  LLM Response: From the reproducibility checklist: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Just a simple personal computer." This statement lacks specific hardware model numbers or detailed specifications.
Software Dependencies: No
  LLM Response: The paper mentions using exponential and polynomial kernels but does not list any specific software or library versions used for implementation.
Experiment Setup: Yes
  LLM Response: "To be able to measure performance with increasing data size, we look at a simple synthetic dataset... For simplicity, we use a global label noise parameter of 0.2, meaning that any sample drawn from this distribution is labeled differently than its support with probability 0.2." and "For each of these values, we plot the empirical astuteness as the training sample size n gets larger and larger." and "For each classifier, we measure the empirical astuteness by using three trials of 20 test points and taking the average."
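The grid-search evaluation of empirical astuteness quoted in the assessment (checking that every point in a test point's robustness region receives the correct label) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the classifier `predict`, the use of an ℓ∞ ball of radius `r` as the robustness region, and the grid resolution `steps` are all illustrative assumptions.

```python
import itertools
import numpy as np

def empirical_astuteness(predict, X_test, y_test, r, steps=5):
    """Fraction of test points whose entire grid-sampled l_inf ball of
    radius r is classified with the true label.

    Illustrative sketch: the paper grid-searches the robustness region
    V_x^kappa of each test point x, which need not be an l_inf ball.
    """
    robust = 0
    offsets = np.linspace(-r, r, steps)
    for x, y in zip(X_test, y_test):
        # Enumerate all grid points in the perturbation box [-r, r]^d.
        perturbations = itertools.product(offsets, repeat=len(x))
        if all(predict(np.asarray(x) + np.asarray(d)) == y
               for d in perturbations):
            robust += 1
    return robust / len(X_test)
```

With a toy half-plane classifier `predict = lambda x: int(x[0] > 0)`, a test point at `[1.0, 0.0]` is astute at radius 0.5 while one at `[0.2, 0.0]` is not, since part of its perturbation box crosses the decision boundary.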
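The global label-noise parameter of 0.2 described in the experiment setup (each sample is labeled differently from its support with probability 0.2) could be applied to a synthetic dataset as in the sketch below. Binary {0, 1} labels and the function name `apply_label_noise` are assumptions made for illustration.

```python
import numpy as np

def apply_label_noise(y_clean, noise=0.2, seed=None):
    """Flip each binary label independently with probability `noise`,
    mirroring the paper's global label-noise parameter of 0.2.
    Illustrative sketch; assumes labels in {0, 1}."""
    rng = np.random.default_rng(seed)
    flips = rng.random(len(y_clean)) < noise  # Bernoulli(noise) mask
    return np.where(flips, 1 - np.asarray(y_clean), y_clean)
```

Over a large sample, roughly a `noise` fraction of labels end up flipped, which is what makes a perfectly astute classifier cap out below 100% accuracy on noisy draws.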