New Interpretations of Normalization Methods in Deep Learning

Authors: Jiacheng Sun, Xiangyong Cao, Hanwen Liang, Weiran Huang, Zewei Chen, Zhenguo Li

AAAI 2020 (pp. 5875-5882) | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, a series of experiments are conducted to verify these claims." "In this section, we conduct a series of experiments to verify the claims of normalization methods induced by our proposed analysis tools."
Researcher Affiliation | Collaboration | 1. Huawei Noah's Ark Lab, 2. Xi'an Jiaotong University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "The experiments are conducted on CIFAR-10 or CIFAR-100 dataset where images are normalized to zero mean and unit variance." (A data-loading sketch follows this table.)
Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset splits. It mentions the "CIFAR-10 or CIFAR-100 dataset" and "training samples" but gives no percentages or sample counts for the splits.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "Specifically, in this experiment, we train ResNet-101 model on CIFAR-10 using the SGD algorithm with learning rate 10^-3 and epoch number 200." (A training sketch follows this table.)
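The paper only states that CIFAR images are normalized to zero mean and unit variance. The sketch below shows one way to reproduce that preprocessing; the per-channel statistics and the use of torchvision are assumptions on our part, not details reported by the authors.

# Hypothetical data pipeline: the paper only says images are normalized to
# zero mean and unit variance. The channel statistics and the torchvision
# loader below are assumptions, not details from the paper.
import torchvision
import torchvision.transforms as transforms

# Commonly used CIFAR-10 channel statistics (assumed, not reported in the paper).
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

transform = transforms.Compose([
    transforms.ToTensor(),                             # scale pixels to [0, 1]
    transforms.Normalize(CIFAR10_MEAN, CIFAR10_STD),   # zero mean, unit variance per channel
])

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)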
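For the experiment setup, the paper reports only the model (ResNet-101), dataset (CIFAR-10), optimizer (SGD), learning rate (10^-3), and epoch count (200). The following minimal PyTorch sketch fills in the rest; the batch size, the absence of momentum/weight decay, and the use of torchvision's ImageNet-style resnet101 are assumptions, and `train_set` comes from the data sketch above.

# Minimal training sketch under the stated setup; unspecified details are assumed.
import torch
import torch.nn as nn
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torchvision.models.resnet101(num_classes=10).to(device)  # architecture variant assumed
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)         # learning rate from the paper
criterion = nn.CrossEntropyLoss()

# Batch size assumed; train_set is defined in the data-loading sketch above.
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

for epoch in range(200):  # epoch number from the paper
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()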