Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing

Authors: Vishaal Krishnan, Abed AlRahman Al Makdah, Fabio Pasqualetti

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. In this section, we present the results from numerical experiments on the standard MNIST dataset of handwritten digits [45] for the training schemes in Sections 2 and 3. We first obtain n = 5000 graph vertices by applying the K-means algorithm to the images in the MNIST dataset. We then construct a graph G = (V, E) by connecting each vertex to its 5 nearest neighbors, and compute the solution v to (9). We associate each test sample with its closest vertex, evaluate the classification confidence of v at that vertex, and assign the class with the largest confidence. Fig. 3(a)-(c) show the dependence of testing accuracy, testing confidence, and testing loss on the Lipschitz bound α.
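The experimental pipeline quoted above (K-means vertices, a k-nearest-neighbor graph, nearest-vertex classification) can be sketched as follows. This is only a scaffold under stated assumptions: the function and variable names (`kmeans`, `knn_graph`, `classify`, `vertex_conf`) are hypothetical, and the per-vertex confidences that the paper obtains by solving its problem (9) are left as an input here rather than computed.

```python
import numpy as np

def kmeans(X, n_clusters, n_iter=20):
    """Plain Lloyd's algorithm (deterministic init on the first points;
    a stand-in for the paper's K-means step)."""
    X = np.asarray(X, dtype=float)
    centers = X[:n_clusters].copy()
    for _ in range(n_iter):
        # squared Euclidean distance of every point to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(n_clusters):
            members = X[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers, assign

def knn_graph(V, k=5):
    """Edge list of G = (V, E): each vertex linked to its k nearest neighbors."""
    d = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)  # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, j) for i in range(len(V)) for j in nbrs[i]]

def classify(X_test, centers, vertex_conf):
    """Assign each test sample the argmax class of its closest graph vertex.
    vertex_conf has shape (n_vertices, n_classes); in the paper it would
    come from the solution v of (9), which is not implemented here."""
    d = ((X_test[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(axis=1)
    return vertex_conf[nearest].argmax(axis=1)
```

In the paper's setup, `n_clusters` would be 5000 and `k` would be 5; the pairwise-distance matrices above are quadratic in memory, so a real run at that scale would use a k-d tree or batched distances instead.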
Researcher Affiliation: Academia. Vishaal Krishnan, Mechanical Engineering Department, University of California Riverside (vishaalk@ucr.edu); Abed AlRahman Al Makdah, Electrical & Computer Engineering Department, University of California Riverside (aalmakdah@engr.ucr.edu); Fabio Pasqualetti, Mechanical Engineering Department, University of California Riverside (fabiopas@engr.ucr.edu).
Pseudocode: No. The paper describes algorithms (e.g., primal-dual dynamics for the Lagrangian L_G(v, Λ) with time-step sequence {h(k)}_{k ∈ ℕ}) but does not present them in a structured pseudocode or algorithm block format.
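Since the paper gives no pseudocode, a generic discrete-time primal-dual iteration of the kind referenced above can be sketched as follows. This is not the paper's algorithm: the Lagrangian, its gradients, and the step size are placeholders supplied by the caller (the paper's L_G(v, Λ) and time-step sequence {h(k)} are not reproduced here).

```python
import numpy as np

def primal_dual_step(v, Lam, grad_v, grad_Lam, h):
    """One generic primal-dual step for a Lagrangian L(v, Λ):
    gradient descent in the primal variable v, gradient ascent in the
    multiplier Λ, with Λ projected onto the nonnegative orthant.
    grad_v / grad_Lam are caller-supplied gradient functions; h is the
    step size for this iteration."""
    v_next = v - h * grad_v(v, Lam)
    Lam_next = np.maximum(Lam + h * grad_Lam(v, Lam), 0.0)
    return v_next, Lam_next
```

As a usage sketch, for the toy problem min v² subject to v ≥ 1 (Lagrangian L = v² + Λ(1 − v)), repeated calls with a small constant h drive (v, Λ) toward the saddle point (1, 2).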
Open Source Code: Yes. The code for the numerical experiments in this paper is available on GitHub: https://github.com/abedmakdah/Lipschitz-Bounds-and-Provably-Robust-Training-by-Laplacian-Smoothing.git
Open Datasets: Yes. [45] Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten digits. URL: http://yann.lecun.com/exdb/mnist, 1998.
Dataset Splits: No. The paper uses the MNIST dataset but does not explicitly state training, validation, and test splits with percentages or counts. It refers to 'a testing set of 2000 i.i.d. samples' in one example and to the 'standard MNIST dataset' in another, implying that standard splits may be used but not specifying them.
Hardware Specification: No. The paper does not specify the hardware used for the experiments (e.g., CPU or GPU models, or cloud computing instances).
Software Dependencies: No. The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup: No. The paper describes varying parameters such as the Lipschitz constant α and the values of p and ε, but does not list specific hyperparameter values or a detailed experimental setup configuration.