Energy-based Automated Model Evaluation

Authors: Ru Peng, Heming Zou, Haobo Wang, Yawen Zeng, Zenan Huang, Junbo Zhao

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide extensive experiments across modalities, datasets and different architectural backbones to validate MDE's validity, together with its superiority compared with prior approaches.
Researcher Affiliation | Collaboration | Ru Peng¹, Heming Zou¹, Haobo Wang¹, Yawen Zeng², Zenan Huang¹, Junbo Zhao¹; ¹Zhejiang University, ²ByteDance; {rupeng,zouheming,wanghaobo,lccurious,j.zhao}@zju.edu.cn, yawenzeng11@gmail.com
Pseudocode | Yes | Algorithm 1: Automated Model Evaluation via Meta-distribution Energy (a hedged sketch of the underlying energy computation appears after this table).
Open Source Code | Yes | Code and data are available: https://github.com/pengr/Energy_Auto_Eval
Open Datasets | Yes | In this work, we evaluate each method on the image classification tasks CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), Tiny-ImageNet (Le & Yang, 2015), ImageNet-1K (Deng et al., 2009), WILDS (Koh et al., 2021a) and the text inference task MNLI (Williams et al., 2018).
Dataset Splits | Yes | Table 5: Details of the datasets considered in our work. Train (Source) / Valid (Source) / Evaluation (Target)... (ii) Synthetic Shift: We use the CIFAR-10-C benchmark (Hendrycks & Dietterich, 2019)... applied to the CIFAR-10 validation set (a loading sketch for this benchmark also appears after the table).
Hardware Specification | No | The paper mentions training models and experiments, but does not provide specific details on the hardware used, such as exact GPU or CPU models, or cloud computing instance types.
Software Dependencies | No | The paper mentions software like 'pytorch-cifar-models', the 'timm' library, and the 'Hugging Face' library, but does not specify their version numbers, which are required for reproducibility.
Experiment Setup | No | The paper states 'Following the practice in Deng et al. (2023), we train models...' and 'we use the same training settings as (Yu et al., 2022)', deferring specific experimental setup details and hyperparameters to external sources rather than providing them explicitly within the text.
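
To accompany the Pseudocode row above, the following is a minimal sketch of how a Meta-distribution Energy statistic can be computed from classifier logits. It assumes the standard free-energy score E(x; f) = -T * logsumexp(f(x)/T) and a Boltzmann-style normalization over the unlabeled evaluation set; the function names, the temperature default, and the exact normalization are assumptions for illustration, not the authors' reference implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    # Free energy of each sample given its classifier logits f(x):
    # E(x; f) = -T * logsumexp(f(x) / T)
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def meta_distribution_energy(logits: torch.Tensor, temperature: float = 1.0) -> float:
    # Assumed formulation: turn per-sample energies into a Boltzmann ("meta")
    # distribution over the whole unlabeled evaluation set, then average the
    # log-probabilities to obtain a single dataset-level score.
    energies = energy_score(logits, temperature)               # shape (N,)
    log_meta = F.log_softmax(-energies / temperature, dim=0)   # shape (N,)
    return log_meta.mean().item()

# Hypothetical usage: collect logits over the target set, compute the score,
# then feed it to a regressor fitted on (score, accuracy) pairs from meta-sets.
# logits = torch.cat([model(x) for x, _ in loader])  # loader is assumed
# mde = meta_distribution_energy(logits)
```

Consistent with the AutoEval protocols the paper builds on, such a dataset-level score would then be regressed against ground-truth accuracy on labeled, transformed meta-sets and used to predict accuracy on unlabeled target sets.

To illustrate the synthetic-shift splits quoted in the Dataset Splits row, the sketch below shows one way to slice the public CIFAR-10-C archive into per-corruption, per-severity evaluation sets. The file paths are hypothetical, and the 10,000-images-per-severity layout reflects the benchmark's published format rather than anything specific to this paper.

```python
import numpy as np

# Hypothetical local paths; CIFAR-10-C ships one .npy file per corruption type,
# stacking 5 severity levels of 10,000 images each (uint8, HWC layout).
images = np.load("CIFAR-10-C/gaussian_noise.npy")   # shape (50000, 32, 32, 3)
labels = np.load("CIFAR-10-C/labels.npy")           # shape (50000,)

def severity_slice(arr: np.ndarray, severity: int) -> np.ndarray:
    """Return the 10,000-sample block for a given severity level (1-5)."""
    lo, hi = (severity - 1) * 10_000, severity * 10_000
    return arr[lo:hi]

# One (corruption, severity) pair yields one shifted evaluation ("meta") set.
meta_set_x = severity_slice(images, severity=3)
meta_set_y = severity_slice(labels, severity=3)
```
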
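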