Approximate Feature Collisions in Neural Nets

Authors: Ke Li, Tianhao Zhang, Jitendra Malik

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section we will apply our method to two standard neural net architectures trained on the MNIST and ImageNet datasets. The trained model achieves a test accuracy of 96.64%." (A hedged sketch of such a training setup follows the table.)
Researcher Affiliation | Academia | Ke Li, UC Berkeley (ke.li@eecs.berkeley.edu); Tianhao Zhang, Nanjing University (bryanzhang@smail.nju.edu.cn); Jitendra Malik, UC Berkeley (malik@eecs.berkeley.edu)
Pseudocode | No | The paper describes mathematical formulations and optimization problems but does not include structured pseudocode or algorithm blocks. (An illustrative optimization sketch follows the table.)
Open Source Code | No | The paper cites https://github.com/aymericdamien/TensorFlow-Examples in a footnote, but this is a general collection of TensorFlow examples, not an implementation of the method described in this paper.
Open Datasets | Yes | "First we train a simple fully-connected neural network with two hidden layers... on the MNIST dataset. We now perform the same experiment on ImageNet."
Dataset Splits | No | The paper reports training on the MNIST and ImageNet datasets and quotes a test accuracy, but it gives no train/validation/test split details (percentages, sample counts, or splitting methodology).
Hardware Specification | No | The paper does not describe the hardware used to run its experiments (e.g., specific GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The paper mentions TensorFlow in a footnote but provides no version numbers for it or for any other software library or dependency.
Experiment Setup | No | The paper names the architectures used (a fully-connected neural network with two hidden layers for MNIST, a pre-trained VGG-16 for ImageNet) but provides no hyperparameters (learning rate, batch size, epochs, optimizer settings) or other training configuration. (A VGG-16 loading sketch follows the table.)
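
The paper specifies only the MNIST architecture (a fully-connected network with two hidden layers) and the resulting 96.64% test accuracy, not the training recipe. A minimal TensorFlow/Keras sketch of such a setup follows; the hidden widths, optimizer, batch size, and epoch count are illustrative assumptions, not the authors' values.

```python
import tensorflow as tf

# Standard MNIST split: 60,000 training and 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Two-hidden-layer fully-connected net, as described in the paper.
# Hidden widths of 256 are an assumption; the paper's sizes may differ.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Optimizer, batch size, and epochs are illustrative, not from the paper.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=10)
model.evaluate(x_test, y_test)  # the paper reports 96.64% test accuracy
```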
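Since the paper states its method as mathematical formulations and optimization problems rather than pseudocode, the following sketch illustrates only the general idea of an approximate feature collision: gradient descent on an input so that its activations at a hidden layer approximately match those of a reference example. The objective, layer choice, step count, and optimizer settings here are assumptions, not the authors' formulation.

```python
import tensorflow as tf

def find_approximate_collision(model, layer_name, x_ref, x_init, steps=500):
    """Optimize an input so its activations at one hidden layer
    approximately match those of a reference input (illustrative only)."""
    # Sub-model exposing the chosen hidden layer's activations.
    feat = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    target = feat(x_ref)
    x = tf.Variable(x_init)
    opt = tf.keras.optimizers.Adam(learning_rate=0.01)  # assumed settings
    for _ in range(steps):
        with tf.GradientTape() as tape:
            # Squared feature distance; the paper's actual objective
            # and constraints may differ.
            loss = tf.reduce_sum(tf.square(feat(x) - target))
        grads = tape.gradient(loss, [x])
        opt.apply_gradients(zip(grads, [x]))
        # Keep the optimized input in a valid pixel range.
        x.assign(tf.clip_by_value(x, 0.0, 1.0))
    return x
```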
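For the ImageNet experiment the paper says only that a pre-trained VGG-16 net is used. Loading such a model with ImageNet weights is straightforward in Keras; the preprocessing shown uses the standard Keras VGG-16 utilities and is an assumption about the authors' pipeline, and the input image is a placeholder.

```python
import numpy as np
import tensorflow as tf

# Pre-trained VGG-16 with ImageNet weights, as referenced in the paper.
vgg = tf.keras.applications.VGG16(weights="imagenet")

# Standard Keras preprocessing for VGG-16 (assumed, not specified in the paper).
img = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0  # placeholder image
x = tf.keras.applications.vgg16.preprocess_input(img)
preds = vgg(x)
print(tf.keras.applications.vgg16.decode_predictions(preds.numpy(), top=5))
```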