On the Strong Correlation Between Model Invariance and Generalization

Authors: Weijian Deng, Stephen Gould, Liang Zheng

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Second, using invariance scores computed by EI, we perform large-scale quantitative correlation studies between generalization and invariance, focusing on rotation and grayscale transformations.
Researcher Affiliation | Academia | Weijian Deng, Stephen Gould, Liang Zheng; Australian National University; {firstname.lastname}@anu.edu.au
Pseudocode | No | The paper defines EI with a formula (Eq. 1) and describes its computation in prose in Section 3, but does not present it as structured pseudocode or an algorithm block; a code sketch of the EI computation follows this table.
Open Source Code | No | The paper mentions using models provided by TIMM [78] and publicly released datasets, but does not state that the code for their proposed Effective Invariance (EI) measure or their correlation study methodology is open-source or provided with a link.
Open Datasets | Yes | We use both in-distribution (ID) and out-of-distribution (OOD) datasets for the correlation study. Specifically, the ImageNet validation set (ImageNet-Val) is used as ID test set. For OOD test sets, we use seven datasets... ImageNet-V2 [23], ImageNet-Adv(ersarial) [85], ImageNet-S(ketch) [86], ImageNet-Blur [87], ImageNet-R(endition) [4]... We use the ID CIFAR-10 test set and two OOD test sets. 1) CIFAR-10.1 [94]... 2) CINIC-10 test set [96]
Dataset Splits | Yes | Specifically, the ImageNet validation set (ImageNet-Val) is used as ID test set.
Hardware Specification | No | We illustrate the computational resources in Supplementary material.
Software Dependencies | No | The paper mentions using models provided by TIMM [78] but does not specify version numbers for TIMM or other software dependencies.
Experiment Setup | No | The paper describes the setup for evaluating EI and the models/datasets used, but does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, epochs) or training configurations beyond the choice of pre-trained models.
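Since the paper provides neither pseudocode nor released code for EI, the following is a minimal sketch of how the score could be computed for a TIMM-pretrained model under a rotation transformation, assuming the definition in Eq. 1 of the paper: with predicted class and softmax confidence (y_hat, p) on the original image and (y_hat_t, p_t) on the transformed image, EI = sqrt(p * p_t) when the two predictions agree and 0 otherwise. The model name, rotation angle, and the random batch below are illustrative placeholders, not the authors' exact configuration.

```python
# Sketch of Effective Invariance (EI) under rotation, assuming Eq. 1 of the paper:
# EI = sqrt(p * p_t) if the predictions on the original and rotated images agree, else 0.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
import timm

@torch.no_grad()
def effective_invariance(model, images, angle=90.0):
    """Return per-image EI scores for a rotation transformation."""
    model.eval()
    probs = F.softmax(model(images), dim=1)                      # original images
    probs_t = F.softmax(model(TF.rotate(images, angle)), dim=1)  # rotated images
    p, y_hat = probs.max(dim=1)
    p_t, y_hat_t = probs_t.max(dim=1)
    agree = (y_hat == y_hat_t).float()
    return torch.sqrt(p * p_t) * agree                           # 0 when predictions disagree

# Usage sketch: average EI of a TIMM-pretrained model over one (placeholder) batch.
model = timm.create_model("resnet50", pretrained=True)
images = torch.rand(8, 3, 224, 224)  # stand-in for a preprocessed test batch
print(effective_invariance(model, images, angle=90.0).mean().item())
```

In the full correlation study one would average such scores over an entire ID or OOD test set and correlate them with the model's accuracy on that set; the batch above only illustrates the per-image computation.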