White Noise Analysis of Neural Networks

Authors: Ali Borji, Sikun Lin

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments over four classic datasets (MNIST, Fashion-MNIST, CIFAR-10, and ImageNet) show that the computed bias maps resemble the target classes and, when used for classification, perform at more than twice the chance level.
Researcher Affiliation | Collaboration | Ali Borji & Sikun Lin, University of California, Santa Barbara, CA; aliborji@gmail.com, sikun@ucsb.edu. Work done during internship at Markable AI.
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/aliborji/WhiteNoiseAnalysis.git.
Open Datasets | Yes | Over four datasets, MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky et al., 2009), and ImageNet (Deng et al., 2009), we employ classification images to discover implicit biases of a network...
Dataset Splits | Yes | We conducted an experiment on the ImageNet validation set, including 50K images covering 1,000 categories, with 1 million samples drawn using Gabor PCA sampling...
Hardware Specification | No | The paper mentions running experiments on "a single GPU" but does not specify the model or manufacturer (e.g., NVIDIA A100, RTX 2080 Ti), leaving the hardware specification too vague to reproduce.
Software Dependencies | No | The paper does not mention specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | We trained a CNN with 2 conv layers, 2 pooling layers, and one fully connected layer (see supplement Fig. 10) on the MNIST dataset. ... We changed the activation functions in the convolution layers of the CIFAR-10 CNN model to tanh, as using ReLU activation resulted in some dead filters. ... here we use λ_l = 0.01, 0.1, 1 for fc, conv1, and conv2, respectively.
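The classification-images technique the paper applies can be sketched in a few lines: feed white noise to a trained network, average the noise inputs weighted by the score the network assigns to each class (a spike-triggered average), and use the resulting per-class "bias maps" as templates for classification. The snippet below is a minimal, self-contained illustration of that idea, not the paper's actual code: the linear scoring "network" and all dimensions are assumptions chosen so the example runs quickly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_noise = 3, 64, 20_000

# Toy class templates standing in for a trained network's preferences
# (an assumption for illustration; the paper uses real CNNs).
templates = rng.normal(size=(n_classes, dim))

# White-noise stimuli and the "network's" class scores for each one.
noise = rng.normal(size=(n_noise, dim))          # (n_noise, dim)
scores = noise @ templates.T                     # (n_noise, n_classes)

# Spike-triggered average: each class's bias map is the score-weighted
# mean of the noise inputs. With Gaussian noise this recovers the
# linear template the scorer is sensitive to.
bias_maps = (scores.T @ noise) / n_noise         # (n_classes, dim)

# Classify a new input by matching it against the bias maps,
# analogous to the template-matching classification in the paper.
def classify(x):
    return int(np.argmax(bias_maps @ x))

for c in range(n_classes):
    r = np.corrcoef(bias_maps[c], templates[c])[0, 1]
    print(f"class {c}: corr(bias map, template) = {r:.2f}")
```

With enough noise samples the recovered bias maps correlate strongly with the underlying templates, which is the sense in which the paper's bias maps "resemble the target classes" and support above-chance classification.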