A Computable Definition of the Spectral Bias

Authors: Jonas Kiessling, Filip Thor

AAAI 2022, pp. 7168-7175

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We devise a set of numerical experiments that confirm that low frequencies are learned first, a behavior quantified by our definition." |
| Researcher Affiliation | Collaboration | "Jonas Kiessling, Filip Thor; KTH Royal Institute of Technology, Stockholm, Sweden; H-AI AB, Stockholm, Sweden; jonas.kiessling@h-ai.se, filip.thor@it.uu.se" |
| Pseudocode | No | "The paper describes methods mathematically and textually but does not include structured pseudocode or algorithm blocks." |
| Open Source Code | Yes | "The code that reproduces the experiments can be found in the accompanying code appendix." |
| Open Datasets | Yes | "The image used in this experiment comes from the DIV2K data set (Agustsson and Timofte 2017) used in the NTIRE 2017 challenge on the SISR problem (Timofte et al. 2017)." |
| Dataset Splits | Yes | "We draw 2^12 i.i.d. points from N(0, 1) to use as training data, and another 2^12 points used as validation data and for estimating the spectral bias with Method 2." (a data-sampling sketch follows the table) |
| Hardware Specification | Yes | "The experiments are performed on a Windows 10 Home desktop with an Intel i7-10700K CPU @ 3.8 GHz, 48 GB of memory, and an Nvidia GeForce RTX 2070 GPU." |
| Software Dependencies | Yes | "The numerical experiments are done in Python 3.8.6, and all neural networks used in this section are implemented in TensorFlow 2.5.0 ... Method 1 uses the FFT from the NumPy 1.19.5 library ... KernelDensity function from the scikit-learn library." (a spectral-estimation sketch follows the table) |
| Experiment Setup | Yes | "The NN has 5 layers with 64 nodes in each, trained with the Adam optimizer (Kingma and Ba 2015), a batch size of 32, and learning rate of 0.0005. The weights are initialized with He-initialization (He et al. 2015)." (a model-setup sketch follows the table) |
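
To make the Dataset Splits row concrete, here is a minimal sketch of the sampling scheme it describes: 2^12 Gaussian training points and another 2^12 validation points. The random seed and the target function are illustrative assumptions; the paper's actual target is not quoted in this table.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, chosen only for reproducibility
n = 2**12                       # 4096 points, matching the reported split sizes

x_train = rng.standard_normal(n)  # training inputs drawn i.i.d. from N(0, 1)
x_val = rng.standard_normal(n)    # validation inputs, also reused for the
                                  # Method 2 spectral-bias estimate

def target(x):
    # Hypothetical multi-frequency target, purely for illustration.
    return np.sin(2 * np.pi * x) + 0.5 * np.sin(8 * np.pi * x)

y_train, y_val = target(x_train), target(x_val)
```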
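The Experiment Setup row pins down the width, depth, initializer, and optimizer; the sketch below assembles them in TensorFlow/Keras. The ReLU activation, the MSE loss, the scalar linear output, and the reading of "5 layers" as 5 hidden layers are all assumptions, since the quoted text does not specify them.

```python
import tensorflow as tf

def build_model(input_dim=1):
    # 5 hidden layers of 64 nodes with He initialization, as reported.
    # Activation, loss, and output layer are assumptions for this sketch.
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(64, activation="relu",
                                    kernel_initializer="he_normal",
                                    input_shape=(input_dim,)))
    for _ in range(4):
        model.add(tf.keras.layers.Dense(64, activation="relu",
                                        kernel_initializer="he_normal"))
    model.add(tf.keras.layers.Dense(1))  # scalar regression output (assumed)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
                  loss="mse")
    return model

# Usage with the reported batch size:
# model = build_model()
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=32, epochs=100)
```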
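The Software Dependencies row names the two numerical building blocks, an FFT (Method 1) and a kernel density estimate (Method 2), without reproducing the paper's definitions. The sketch below shows one generic way those pieces could combine to estimate the magnitude spectrum of a network's residual; the uniform-grid evaluation, the importance weighting by 1/p(x), the frequency grid, and the KDE bandwidth are assumptions, not the paper's exact formulas.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def spectrum_fft(residual_on_grid):
    # Method-1-style estimate: magnitude spectrum of the residual f - f_NN
    # evaluated on a uniform 1D grid, via NumPy's FFT.
    n = len(residual_on_grid)
    return np.abs(np.fft.rfft(residual_on_grid)) / n

def spectrum_kde(x, residual, freqs, bandwidth=0.2):
    # Method-2-style estimate for scattered samples x ~ p(x): weight each
    # residual by 1 / p_hat(x), with p_hat from a Gaussian KDE, so the
    # discrete Fourier sum approximates the continuous Fourier transform.
    kde = KernelDensity(bandwidth=bandwidth).fit(x[:, None])
    p_hat = np.exp(kde.score_samples(x[:, None]))   # estimated density p(x)
    w = residual / p_hat                            # importance weights
    basis = np.exp(-2j * np.pi * np.outer(x, freqs))  # shape (N, K)
    return np.abs(w @ basis) / len(x)
```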