On the Spectral Bias of Neural Networks

Authors: Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, Aaron Courville

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We find empirical evidence of a spectral bias: i.e. lower frequencies are learned first. We also show that lower frequencies are more robust to random perturbations of the network parameters (Section 3). We now present experiments showing that networks tend to fit lower frequencies first during training.
Researcher Affiliation | Academia | Mila, Quebec, Canada; Image Analysis and Learning Lab, Ruprecht-Karls-Universität Heidelberg, Germany.
Pseudocode | No | The paper presents theoretical derivations and experimental setups but does not include any clearly labeled 'Algorithm' or 'Pseudocode' blocks.
Open Source Code | Yes | Code: https://github.com/nasimrahaman/SpectralBias
Open Datasets | Yes | To tackle this, we propose the following set of experiments to measure the effect of spectral bias indirectly on MNIST.
Dataset Splits | No | The paper mentions 'validation loss curves' and states that 'the validation set is obtained by evaluating [the loss] on a separate subset of the data', but it does not give split percentages, sample counts, or the methodology for creating this subset, so the split is not reproducible as stated (a hedged split sketch follows this table).
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments, such as CPU or GPU models or memory specifications.
Software Dependencies | No | The paper mentions PyTorch's BCEWithLogitsLoss but does not specify version numbers for any software dependencies, libraries, or frameworks used in the experiments (a brief usage example follows this table).
Experiment Setup | Yes | A 6-layer deep 256-unit wide ReLU network f is trained to regress λ... with N = 200 input samples... training progresses with full-batch gradient descent.
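
The setup quoted in the Experiment Setup row (6 layers, 256 ReLU units per layer, N = 200 samples, full-batch gradient descent) maps fairly directly onto a small PyTorch script. The sketch below is only an approximation: the target amplitudes, frequencies, phases, learning rate, and iteration count are illustrative assumptions rather than values taken from the paper, and the DFT printout is just one way to watch the lower-frequency components of the fit converge before the higher ones, i.e. the spectral bias described in the Research Type row.

```python
import math
import torch
import torch.nn as nn

# Target: lambda(x) = sum_i A_i * sin(2*pi*k_i*x + phi_i).
# Frequencies, amplitudes, and phases below are illustrative, not the paper's exact values.
ks = torch.tensor([5.0, 10.0, 15.0])      # frequencies k_i on the interval [0, 1]
amps = torch.tensor([1.0, 1.0, 1.0])      # amplitudes A_i
phis = torch.rand(3) * 2 * math.pi        # random phases phi_i

def target(x):
    # x: (N, 1) -> (N, 1)
    return (amps * torch.sin(2 * math.pi * ks * x + phis)).sum(dim=1, keepdim=True)

# 6-layer deep, 256-unit wide ReLU network, as described in the quoted setup.
layers, in_dim = [], 1
for _ in range(6):
    layers += [nn.Linear(in_dim, 256), nn.ReLU()]
    in_dim = 256
layers += [nn.Linear(256, 1)]
f = nn.Sequential(*layers)

# N = 200 input samples, full-batch gradient descent on the mean squared error.
x = torch.linspace(0, 1, 200).unsqueeze(1)
y = target(x)
opt = torch.optim.SGD(f.parameters(), lr=1e-3)  # learning rate is an assumption

for step in range(10000):
    opt.zero_grad()
    loss = ((f(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Spectral view of the current fit: magnitude of the DFT of f on the sample grid.
        spectrum = torch.fft.rfft(f(x).detach().squeeze())
        print(step, round(loss.item(), 4), spectrum.abs()[:20])
```

The DFT magnitudes printed over training are one simple proxy for the paper's frequency-domain analysis; inspecting which bins grow first gives a rough picture of the low-frequencies-first behaviour.

For the unspecified dataset split noted in the Dataset Splits row, one reasonable reading is a random held-out subset of MNIST. The 90/10 proportions and the fixed seed below are assumptions made for illustration; the paper gives no split sizes.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# MNIST is the open dataset referenced in the paper's indirect experiments.
mnist = datasets.MNIST(root="./data", train=True, download=True,
                       transform=transforms.ToTensor())

# The paper only says validation loss is computed on a separate subset of the data;
# the 90/10 proportions and seed here are assumptions, not the authors' choices.
n_val = len(mnist) // 10
n_train = len(mnist) - n_val
train_set, val_set = random_split(
    mnist, [n_train, n_val], generator=torch.Generator().manual_seed(0))
```

The loss named in the Software Dependencies row is PyTorch's BCEWithLogitsLoss, which fuses a sigmoid with binary cross-entropy for numerical stability. A minimal, self-contained usage example; the tensor shapes and batch size are placeholders, not the paper's configuration:

```python
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()                 # sigmoid + binary cross-entropy in one op

logits = torch.randn(8, 1, requires_grad=True)     # raw (pre-sigmoid) model outputs
targets = torch.randint(0, 2, (8, 1)).float()      # binary labels in {0, 1}

loss = criterion(logits, targets)
loss.backward()                                    # gradients flow back into the logits
print(loss.item())
```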
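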
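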