The Spectral Bias of Polynomial Neural Networks

Authors: Moulik Choraria, Leello Tadesse Dadi, Grigorios Chrysos, Julien Mairal, Volkan Cevher

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We verify the theoretical bias through extensive experiments." From Section 4 (Numerical Evidence): "The analysis in Section 3 reveals that polynomial networks in the NTK regime learn higher frequency information faster. In practice, however, neural networks deviate from the near-initialization NTK conditions within just a few iterations of gradient descent. Therefore, to verify the analysis on the spectral bias of Π-Nets, we conduct a series of experiments that increasingly deviate from the NTK regime, including image-based datasets, to further verify our theoretical analysis." (A toy frequency-tracking sketch of this kind of experiment appears after the table.)
Researcher Affiliation | Academia | Moulik Choraria, University of Illinois at Urbana-Champaign (moulikc2@illinois.edu); Leello Dadi, EPFL, Switzerland (leello.dadi@epfl.ch); Grigorios G. Chrysos, EPFL, Switzerland (grigorios.chrysos@epfl.ch); Julien Mairal, Univ. Grenoble-Alpes, Inria (julien.mairal@inria.fr); Volkan Cevher, EPFL, Switzerland (volkan.cevher@epfl.ch)
Pseudocode | No | The paper provides mathematical derivations and schematic illustrations (e.g., Figure 5, Figure 6), but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | No | The paper does not contain any statement about making its code open-source, providing a repository link, or including code in supplementary materials.
Open Datasets | No | The paper mentions using "MNIST images" and "learning spherical harmonics," but provides no concrete access information (URLs, DOIs, repository names, or formal citations with author and year) that would establish the datasets' public availability for replication.
Dataset Splits | No | The paper mentions "validation loss curves," implying the use of a validation set, but gives no specific split details (percentages or sample counts for training, validation, and test sets) and cites no predefined splits for its experiments.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., libraries, frameworks, or programming language versions) used for the experiments.
Experiment Setup | Yes | "For all experiments, we use Π-Nets based on the product of polynomials formulation (Appendix), in the same vein as [16]." ... "with a fixed learning rate (same for both networks)" ... "N = 200 evenly spaced input samples" ... "For the input, we sample a random tensor z ∈ ℝ^{N×H×W} (N = 32 in our setup)" ... "We train both networks for 2500 iterations, with the same learning rate." ... "We train for 5000 iterations with identical learning rates." (A hedged sketch of the product-of-polynomials construction follows this table.)
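
Since the paper ships no code or pseudocode, the following is a minimal PyTorch sketch of the product-of-polynomials idea referenced in the setup row: each block is a degree-2 polynomial of its input built from a Hadamard product, and composing blocks multiplies the degrees. The class names (PolyBlock, PiNet), widths, and depth are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a Pi-Net in the product-of-polynomials style:
# each block is a degree-2 polynomial of its input (via a Hadamard
# product), and composing k blocks yields a polynomial of degree 2^k.
# All names and sizes here are illustrative, not the authors' code.
import torch
import torch.nn as nn

class PolyBlock(nn.Module):
    """Degree-2 polynomial block: out = (U2 z) * (U1 z) + U1 z."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.u1 = nn.Linear(d_in, d_out)
        self.u2 = nn.Linear(d_in, d_out)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.u1(z)
        # The elementwise product supplies the nonlinearity; there is
        # no activation function anywhere in the network.
        return self.u2(z) * h + h

class PiNet(nn.Module):
    """Composes k degree-2 blocks: overall degree 2^k in the input."""
    def __init__(self, d_in: int, width: int, d_out: int, k: int = 3):
        super().__init__()
        dims = [d_in] + [width] * k
        self.blocks = nn.ModuleList(
            PolyBlock(a, b) for a, b in zip(dims, dims[1:])
        )
        self.head = nn.Linear(width, d_out)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            z = block(z)
        return self.head(z)

model = PiNet(d_in=1, width=64, d_out=1, k=3)  # a degree-8 polynomial
```

Because degrees multiply under composition, a few cheap blocks reach high polynomial degree without materializing high-order weight tensors, which is the practical appeal of the product formulation.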
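The Research Type row quotes the paper's frequency-learning experiments (N = 200 evenly spaced samples, a fixed learning rate, 2500 iterations). The sketch below shows one common way such spectral-bias probes are built, assuming a 1-D target that superposes a few sinusoids and an FFT readout of the prediction; the frequency set, width, learning rate, and logging cadence are assumptions, and a plain ReLU MLP stands in for the networks compared in the paper (the PiNet above can be swapped in).

```python
# Minimal spectral-bias probe (not the authors' code): fit a 1-D target
# that mixes a few sinusoids, then track how quickly each frequency
# component of the prediction converges by reading off FFT amplitudes.
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 200                                  # evenly spaced samples, as in the paper
x = torch.arange(N).unsqueeze(1) / N     # grid on [0, 1) so FFT bins align
freqs = [1, 5, 10]                       # assumed target frequencies
y = sum(torch.sin(2 * torch.pi * k * x) for k in freqs)

# A plain ReLU MLP as a stand-in; swap in the PiNet above to compare.
model = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(2501):                 # 2500 iterations, as in the paper
    opt.zero_grad()
    pred = model(x)
    loss = ((pred - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        # Amplitude of each target frequency in the current prediction:
        # for a unit-amplitude sine at integer frequency k on this grid,
        # 2 * |rfft(pred)[k]| / N recovers roughly 1.0 at convergence.
        spec = torch.fft.rfft(pred.detach().squeeze())
        amps = [round(2 * spec[k].abs().item() / N, 3) for k in freqs]
        print(f"step {step:4d}  loss {loss.item():.4f}  amps {amps}")
```

Under the classic spectral-bias picture, the low-frequency amplitude approaches 1 first; the paper's claim is that Π-Nets close the gap on the higher frequencies faster than a standard baseline.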