Neural Anisotropy Directions

Authors: Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our hypothesis on common CNNs used for image classification with a 32×32 single-channel input. We use the two-dimensional discrete Fourier basis (2D-DFT), which offers a good representation of the features in standard vision datasets [26-28], to generate the selected vectors. The difference in performance on these experiments underlines the strong bias of these networks towards certain frequency directions (see Fig. 1).
Researcher Affiliation | Academia | Guillermo Ortiz-Jiménez, EPFL, Lausanne, Switzerland, guillermo.ortizjimenez@epfl.ch; Apostolos Modas, EPFL, Lausanne, Switzerland, apostolos.modas@epfl.ch; Seyed-Mohsen Moosavi-Dezfooli, ETH Zürich, Zurich, Switzerland, seyed.moosavi@inf.ethz.ch; Pascal Frossard, EPFL, Lausanne, Switzerland, pascal.frossard@epfl.ch
Pseudocode | No | The paper does not contain any structured pseudocode or explicitly labeled algorithm blocks.
Open Source Code | Yes | The code to reproduce our experiments can be found at https://github.com/LTS4/neural-anisotropy-directions.
Open Datasets | Yes | Furthermore, we show that, for the CIFAR-10 dataset, NADs characterize the features used by CNNs to discriminate between different classes.
Dataset Splits | No | The paper refers to 'training samples', the 'CIFAR-10 training set', and a 'test set', but the main text gives no split percentages or exact sample counts for the train/validation/test splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies or library versions needed to replicate the experiment.
Experiment Setup | No | The paper states: 'In general, all training and evaluation setups, hyperparameters, number of training samples, and network performances are listed in the Supp. material.', deferring the specific experimental setup details to the supplementary material rather than the main text.
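The experimental probe quoted in the Research Type row (training a CNN on synthetic datasets whose only discriminative feature lies along a chosen 2D-DFT direction of the 32×32 input) can be sketched as follows. This is a minimal illustration, not the authors' released code: the helper names `dft_basis_vector` and `directional_dataset` are hypothetical, and using the real (cosine) part of the Fourier basis with an additive Gaussian noise model is a simplifying assumption.

```python
import numpy as np

def dft_basis_vector(k, l, n=32):
    """Unit-norm real part of the 2D discrete Fourier basis vector at
    frequency (k, l) on an n x n grid. Hypothetical helper; the paper's
    construction is based on the complex 2D-DFT basis."""
    rows = np.arange(n).reshape(-1, 1)
    cols = np.arange(n).reshape(1, -1)
    v = np.cos(2 * np.pi * (k * rows + l * cols) / n)
    return v / np.linalg.norm(v)

def directional_dataset(direction, num_samples=1000, noise=0.1, seed=0):
    """Toy binary dataset whose only discriminative feature lies along
    `direction`: the label y in {-1, +1} shifts each noisy sample by
    y * direction, so the classes are linearly separable along it."""
    rng = np.random.default_rng(seed)
    y = rng.choice([-1.0, 1.0], size=num_samples)
    X = noise * rng.standard_normal((num_samples,) + direction.shape)
    X += y[:, None, None] * direction
    return X, y

# One synthetic dataset per probed frequency direction; training the same
# CNN on each and comparing test accuracies would expose any bias of the
# architecture towards particular frequency directions.
v_low = dft_basis_vector(0, 1)    # low-frequency direction
v_high = dft_basis_vector(0, 15)  # high-frequency direction
X_low, y_low = directional_dataset(v_low)
X_high, y_high = directional_dataset(v_high)
```

With this construction, projecting a sample onto the generating direction recovers its label almost surely, so a linear classifier succeeds along any direction; the point of the paper's experiment is that CNN accuracy nonetheless varies sharply with the chosen frequency.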