Investigating and Explaining the Frequency Bias in Image Classification

Authors: Zhiyu Lin, Yifei Gao, Jitao Sang

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We report results with ResNet-50 [He et al., 2016] trained on the CIFAR-10 dataset. To investigate the feature discrimination of different frequency components, for LFC and HFC we respectively select frequency components with r = {4, 8, 12, 16} (defined in Eq.(2)). For each frequency component, the inter-class variance and test accuracy are calculated and shown in Fig.3. The first observation is that feature discrimination capability exhibits significant bias among different frequency components. (A sketch of the radial frequency split follows the table.)
Researcher Affiliation | Collaboration | Zhiyu Lin¹, Yifei Gao¹, Jitao Sang¹,² (¹Beijing Jiaotong University, China; ²Peng Cheng Lab, Shenzhen 518066, China); {zyllin, yf-gao, jtsang}@bjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Visualizing the learning priority (a runnable sketch follows the table)
    Input: X: test image; N: training epochs
    Parameter: the model parameters of each epoch
    Output: the gradient spectrum
    1: for epoch = 0 to N do
    2:     L <- compute the loss
    3:     G <- backpropagate the loss and obtain the gradient map
    4:     S <- AI(G), the spectral density of the gradient
    5: end for
    6: return the spectral density of the gradients, S
Open Source Code | Yes | Our code is available at https://github.com/zhiyugege/FreqBias
Open Datasets | Yes | Fig.1 illustrates the Kernel Density Estimation (KDE) curves of the 10 image classes in CIFAR-10 [Krizhevsky et al., 2009] for low-, middle- and high-frequency components.
Dataset Splits | No | The paper mentions splitting data into train and test sets but does not provide the specific percentages, sample counts, or methodology for the training/validation/test splits needed for reproduction.
Hardware Specification | No | The paper does not report the hardware used for its experiments (GPU/CPU models, clock speeds, or memory).
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x) needed to replicate the experiments.
Experiment Setup | Yes | The rest of the paper runs each experiment for 150 epochs with the SGD momentum optimizer, a learning rate of 1e-2, and a batch size of 100. All experiments are repeated three times and averaged. (A training-loop sketch follows the table.)
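
The Research Type row refers to the radial LFC/HFC split of Eq.(2). Below is a minimal NumPy sketch of that decomposition, assuming the LFC is everything within radius r of the centred spectrum; the function name split_frequency_components is ours, not the paper's.

import numpy as np

def split_frequency_components(image, r):
    # Split a (H, W) image into low- and high-frequency components:
    # frequencies within radius r of the spectrum centre form the LFC,
    # everything outside forms the HFC (cf. Eq.(2) of the paper).
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))  # move zero frequency to the centre

    ys, xs = np.ogrid[:h, :w]
    dist = np.sqrt((ys - h / 2) ** 2 + (xs - w / 2) ** 2)
    low_mask = dist <= r  # True inside the radius-r disc

    lfc = np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)).real
    hfc = np.fft.ifft2(np.fft.ifftshift(spectrum * ~low_mask)).real
    return lfc, hfc

# Example: decompose a random 32x32 CIFAR-sized image at the radii used in Fig.3.
img = np.random.rand(32, 32)
components = {r: split_frequency_components(img, r) for r in (4, 8, 12, 16)}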
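
Algorithm 1 maps directly onto PyTorch. The sketch below assumes AI denotes azimuthal integration of the 2-D power spectrum of the input gradient and that one model checkpoint per epoch is available; the helper names (azimuthal_integral, gradient_spectrum, load_checkpoint) are hypothetical, and the repository linked above is the reference implementation.

import numpy as np
import torch
import torch.nn.functional as F

def azimuthal_integral(power):
    # Average the 2-D power spectrum over rings of equal radius (the AI step).
    h, w = power.shape
    ys, xs = np.ogrid[:h, :w]
    radius = np.sqrt((ys - h / 2) ** 2 + (xs - w / 2) ** 2).astype(int).ravel()
    return np.bincount(radius, weights=power.ravel()) / np.bincount(radius)

def gradient_spectrum(model, x, y):
    # One iteration of Algorithm 1 for a single per-epoch checkpoint.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)                # step 2: compute the loss
    grad = torch.autograd.grad(loss, x)[0]             # step 3: gradient map w.r.t. the input
    g = grad.mean(dim=(0, 1)).detach().cpu().numpy()   # average over batch and channels
    power = np.abs(np.fft.fftshift(np.fft.fft2(g))) ** 2
    return azimuthal_integral(power)                   # step 4: S = AI(G)

# Looping over the N per-epoch checkpoints (load_checkpoint is hypothetical):
# spectra = [gradient_spectrum(load_checkpoint(epoch), x_test, y_test)
#            for epoch in range(N)]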
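
The Experiment Setup row translates into a short PyTorch training loop. The momentum value (0.9) and the stock torchvision ResNet-50 are assumptions; the paper states only "SGD Momentum optimizer" and "ResNet-50".

import torch
import torch.nn.functional as F
import torchvision
from torchvision import transforms

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=100, shuffle=True)

model = torchvision.models.resnet50(num_classes=10)  # stand-in for the paper's ResNet-50
opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)  # momentum value assumed

for epoch in range(150):  # 150 epochs, as reported
    for x, y in loader:
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()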