Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Authors: Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive empirical studies on practical neural networks, demonstrating the effectiveness of square loss in both synthetic low-dimensional data and real image data. Compared to cross-entropy, square loss has comparable generalization error but noticeable advantages in robustness and model calibration. Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/sl_classification. |
| Researcher Affiliation | Collaboration | Tianyang Hu, Huawei Noah's Ark Lab, hutianyang1@huawei.com; Jun Wang, HKUST, jwangfx@connect.ust.hk; Wenjia Wang, HKUST (GZ) and HKUST, wenjiawang@ust.hk; Zhenguo Li, Huawei Noah's Ark Lab, li.zhenguo@huawei.com |
| Pseudocode | No | The paper describes methods and mathematical formulations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code will be available at https://gitee.com/mindspore/models/tree/master/research/cv/sl_classification. |
| Open Datasets | Yes | We adopt popular architectures, ResNet [50] and WideResNet [51], and evaluate them on the CIFAR image classification datasets [52], with only the training loss function changed, from cross-entropy (CE) to square loss with simplex coding (SL); a hedged sketch of simplex-coded square loss appears after this table. |
| Dataset Splits | No | The paper mentions using CIFAR datasets and standard adversarial training, but it does not explicitly provide specific details on the training, validation, and test splits (e.g., percentages, sample counts, or explicit cross-validation setup) beyond implying standard benchmarks. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper mentions 'MindSpore' as a framework and 'ResNet'/'WideResNet' as architectures, but it does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | All the parameters are kept as default except for the learning rate (lr) and batch size (bs), where the better of (lr=0.01, bs=32) and (lr=0.1, bs=128) is chosen; see the configuration sketch after this table. |
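
The square loss with simplex coding (SL) referenced in the Open Datasets row replaces one-hot cross-entropy targets with the vertices of a regular simplex. Below is a minimal PyTorch sketch of that idea, assuming the standard simplex construction (unit-norm class codes with pairwise inner product -1/(K-1)); the authors' released MindSpore code may use a different scaling or normalization, so `SimplexSquareLoss` is an illustrative stand-in rather than their implementation.

```python
# Minimal sketch of square loss with simplex-coded targets (SL).
# Assumption: unit-norm class codes with pairwise inner product -1/(K-1);
# the paper's exact scaling may differ.
import torch
import torch.nn as nn


def simplex_codes(num_classes: int) -> torch.Tensor:
    """Vertices of a regular simplex in R^{K-1}, one unit-norm row per class."""
    K = num_classes
    centered = torch.eye(K) - torch.full((K, K), 1.0 / K)  # centered one-hot vectors
    # Project onto the (K-1)-dim subspace orthogonal to the all-ones direction.
    _, _, vh = torch.linalg.svd(centered)
    codes = centered @ vh[: K - 1].T                         # shape (K, K-1)
    return codes / codes.norm(dim=1, keepdim=True)           # unit-norm rows


class SimplexSquareLoss(nn.Module):
    """Square loss against simplex-coded targets; the network outputs K-1 values."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.register_buffer("codes", simplex_codes(num_classes))

    def forward(self, outputs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        targets = self.codes[labels]                          # (batch, K-1)
        return ((outputs - targets) ** 2).sum(dim=1).mean()

    def predict(self, outputs: torch.Tensor) -> torch.Tensor:
        # Classify by the nearest (largest inner-product) simplex vertex.
        return (outputs @ self.codes.T).argmax(dim=1)
```

For CIFAR-10 this would pair with a ResNet whose final layer outputs 9 values instead of 10, and predictions come from the nearest simplex vertex rather than an argmax over softmax probabilities.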
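The Experiment Setup row states that only the learning rate and batch size are tuned, keeping the better of two presets. A hedged sketch of that selection loop is below; `train_and_evaluate` is a hypothetical helper standing in for the authors' training script and is not part of the paper.

```python
# Sketch of the reported hyperparameter selection: all other settings stay at
# their defaults, and the better of two (lr, batch size) pairs is kept.
CANDIDATE_CONFIGS = [
    {"lr": 0.01, "batch_size": 32},
    {"lr": 0.1, "batch_size": 128},
]


def select_config(train_and_evaluate):
    """Run each candidate config and return the one with the best eval accuracy.

    `train_and_evaluate(lr=..., batch_size=...)` is a hypothetical callable
    that trains the model and returns a validation/test accuracy.
    """
    results = {}
    for cfg in CANDIDATE_CONFIGS:
        results[(cfg["lr"], cfg["batch_size"])] = train_and_evaluate(**cfg)
    return max(results, key=results.get)
```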