GAIA: Delving into Gradient-based Attribution Abnormality for Out-of-distribution Detection
Authors: Jinggang Chen, Junjie Li, Xiaoyang Qu, Jianzong Wang, Jiguang Wan, Jing Xiao
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of GAIA is validated on both commonly utilized (CIFAR) and large-scale (ImageNet-1K) benchmarks. |
| Researcher Affiliation | Collaboration | Huazhong University of Science and Technology, China; Ping An Technology (Shenzhen) Co., Ltd. Emails: {chen.jinggang98, 2216217669ljj, quxiaoy}@gmail.com, jzwang@188.com, jgwan@hust.edu.cn, xiaojing661@pingan.com.cn |
| Pseudocode | Yes | Algorithm 1: GAIA |
| Open Source Code | Yes | Code is available at https://github.com/JGEthanChen/GAIA-OOD. |
| Open Datasets | Yes | The effectiveness of GAIA is validated on both commonly utilized (CIFAR) and large-scale (ImageNet-1K) benchmarks. |
| Dataset Splits | No | The paper mentions using pre-trained models and various ID/OOD datasets for evaluation, but it does not specify the train/validation/test splits used in its experimental setup. |
| Hardware Specification | No | The paper mentions using ResNet34 and WideResNet40 models, but it does not specify the hardware (e.g., CPU or GPU models, memory, or computing cluster details) used to run the experiments. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., Python version, PyTorch version, specific library versions) that would be needed for reproducibility. |
| Experiment Setup | No | The paper states models are 'pre-trained with cross-entropy loss' and evaluates on benchmarks, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer details) or detailed training configurations used in its experiments. |
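For orientation, the sketch below illustrates the general idea behind a gradient-based attribution abnormality score for OOD detection: attribute the predicted class back to the input via gradients and measure how degenerate (e.g., how sparse) the attribution is. This is a minimal illustrative assumption, not a reproduction of the paper's Algorithm 1 (which aggregates abnormality over intermediate feature maps); the function name `gradient_abnormality_score`, the zero-count statistic, and the threshold choice here are all hypothetical.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Hedged sketch (assumption, not the paper's exact Algorithm 1): score each input by
# the fraction of (near-)zero entries in the input-gradient attribution of the
# predicted class. GAIA itself aggregates abnormality over intermediate features.
def gradient_abnormality_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Attribution target: log-probability of each sample's predicted class.
    target = F.log_softmax(logits, dim=1).max(dim=1).values.sum()
    grad = torch.autograd.grad(target, x)[0]
    # Zero-deflation-style statistic: proportion of near-zero gradient entries
    # per sample; larger values are treated as more OOD-like here (assumption).
    flat = grad.flatten(start_dim=1)
    return (flat.abs() < 1e-12).float().mean(dim=1)

# Usage sketch with an untrained classifier and a random batch (illustration only).
model = models.resnet34(weights=None).eval()
scores = gradient_abnormality_score(model, torch.randn(4, 3, 224, 224))
print(scores)
```

In practice one would use a classifier pre-trained on the ID data (the paper evaluates ResNet34/WideResNet40 on CIFAR and a large-scale model on ImageNet-1K) and follow the released code at https://github.com/JGEthanChen/GAIA-OOD for the actual scoring rule.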