Sketching based Representations for Robust Image Classification with Provable Guarantees

Authors: Nishanth Dikkala, Sankeerth Rao Karingula, Raghu Meka, Jelani Nelson, Rina Panigrahy, Xin Wang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the smoothness of sketch vectors in the latent parameters used to generate the image in this section. As shown in Lemma 5.5, the infinite sketch is smooth as a function of the PTF parameters. Here we further demonstrate that finite sketch vectors also exhibit similar smoothness as we vary the latent parameters. ... In Table 1, we calculate the average absolute discrete derivatives of the projection with respect to the parameter ... We also performed an initial set of experiments on using sketches for image classification. In this experiment, we sampled 10% of the train/test split of the fashion MNIST dataset, and obtained LSH sketches of the images. We then train a 2 layer neural network (with a hidden layer of size 128 and ReLU activation) to classify the sketches. We found 100% train accuracy and 79.3% test accuracy for this task."
Researcher Affiliation | Collaboration | Nishanth Dikkala (Google Research, nishanthd@google.com); Sankeerth Rao Karingula (Google Research, sankeerth1729@gmail.com); Raghu Meka (UC Los Angeles, raghuvardhan@gmail.com); Jelani Nelson (UC Berkeley, minilek@gmail.com); Rina Panigrahy (Google Research, rinap@google.com); Xin Wang (Google Research, wanxin@google.com)
Pseudocode | Yes | "Algorithm 1: Recursive Sketching Subroutine. Image or the region in a bucket may optionally be centred before sketching ... Algorithm 2: Recursive merging into LSH table of sketches"
Open Source Code | No | The paper does not contain any explicit statements or links indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | "In this experiment, we sampled 10% of the train/test split of the fashion MNIST dataset, and obtained LSH sketches of the images."
Dataset Splits | No | The paper mentions a "train/test split" but does not explicitly specify a validation split or its size/methodology.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The paper mentions training a "2 layer neural network" but does not specify any software dependencies with version numbers (e.g., deep learning frameworks such as PyTorch or TensorFlow, along with their versions).
Experiment Setup | Yes | "We then train a 2 layer neural network (with a hidden layer of size 128 and ReLU activation) to classify the sketches."
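The experimental pipeline quoted above (LSH-sketch the images, then classify the sketches with a 2-layer network) can be illustrated with a minimal NumPy sketch. This is not the paper's recursive sketching subroutine: the SimHash-style random-hyperplane sketch, the 64-bit sketch length, and the function names `lsh_sketch` and `two_layer_forward` are all illustrative assumptions; only the hidden size (128) and ReLU activation come from the paper's description.

```python
import numpy as np

def lsh_sketch(x, n_bits=64, seed=0):
    """SimHash-style LSH sketch: the sign pattern of random Gaussian
    projections of the flattened image. A simplified stand-in for the
    paper's recursive sketching subroutine; n_bits and the Gaussian
    hyperplanes are illustrative choices, not the paper's."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, x.size))
    return (planes @ x.ravel() >= 0).astype(np.float32)

def two_layer_forward(sketch, W1, b1, W2, b2):
    """Forward pass of a 2-layer classifier matching the quoted setup:
    one hidden layer (size 128 in the experiment) with ReLU, then a
    linear output layer producing class logits."""
    hidden = np.maximum(0.0, sketch @ W1 + b1)  # ReLU hidden layer
    return hidden @ W2 + b2                     # class logits

# Toy usage on a random 28x28 "image" (Fashion-MNIST resolution).
rng = np.random.default_rng(1)
image = rng.standard_normal((28, 28))
sketch = lsh_sketch(image)                      # 64-bit binary sketch

W1 = rng.standard_normal((64, 128)) * 0.1       # hidden layer of size 128
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10)) * 0.1       # 10 Fashion-MNIST classes
b2 = np.zeros(10)
logits = two_layer_forward(sketch, W1, b1, W2, b2)
```

In the actual experiment the weights would be trained (e.g. with cross-entropy loss) on sketches of the 10% Fashion-MNIST sample; here the untrained forward pass only shows the shape of the pipeline.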