EAT: Towards Long-Tailed Out-of-Distribution Detection
Authors: Tong Wei, Bo-Lin Wang, Min-Ling Zhang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we demonstrate that our method outperforms the current state-of-the-art on various benchmark datasets. |
| Researcher Affiliation | Academia | Tong Wei*, Bo-Lin Wang, Min-Ling Zhang; School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; {weit, wangbl, zhangml}@seu.edu.cn |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/Stomachache/Long-Tailed-OOD-Detection. |
| Open Datasets | Yes | CIFAR10-LT, CIFAR100-LT (Cao et al. 2019), and ImageNet-LT (Liu et al. 2019) are used as in-distribution training sets (D_in). |
| Dataset Splits | No | The paper mentions training and test sets but does not explicitly specify the training/validation/test splits (e.g., percentages or counts for each) or refer to standard predefined validation splits for the training data. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It only mentions support from the "Big Data Computing Center of Southeast University" without specifications. |
| Software Dependencies | No | The paper mentions optimizers (Adam, SGD) and network architectures (ResNet18, ResNet50) but does not provide specific version numbers for any software dependencies or libraries (e.g., PyTorch version, Python version, scikit-learn version). |
| Experiment Setup | Yes | For experiments on CIFAR10-LT and CIFAR100-LT, we train the model for 180 epochs using the Adam (Kingma and Ba 2014) optimizer with initial learning rate 1 × 10⁻³ and batch size 128. We decay the learning rate to 0 using a cosine annealing learning rate scheduler (Loshchilov and Hutter 2016). For fine-tuning, we fine-tune the classifier and BN layers for 10 epochs using the Adam optimizer with an initial learning rate of 5 × 10⁻⁴. For experiments on ImageNet-LT, we follow the settings in (Wang et al. 2021) and use ResNet50 (He et al. 2016). We train the main branch for 60 epochs using the SGD optimizer with an initial learning rate of 0.1 and batch size of 64. We fine-tune the classifier and BN layers for 1 epoch using the SGD optimizer with an initial learning rate of 0.01. In all experiments, we set λ = 0.05, and the weights for generated tail class samples are set to 0.05 for EAT. For the number of abstention classes, we set k = 3 on CIFAR10-LT and k = 30 on CIFAR100-LT and ImageNet-LT. (A minimal sketch of this training schedule appears below the table.) |
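
The CIFAR-LT schedule quoted in the Experiment Setup row maps onto a short PyTorch sketch, shown below. This is a minimal illustration under the assumption of a torchvision ResNet18 backbone; the helper names (`build_optimizer_and_scheduler`, `fine_tune_classifier_and_bn`) are hypothetical, and the data pipeline, loss, and EAT-specific components (abstention classes, tail-class augmentation) are omitted. Consult the linked repository for the authors' implementation.

```python
# Sketch of the reported CIFAR10-LT / CIFAR100-LT training schedule (not the authors' code).
import torch
import torch.nn as nn
from torchvision.models import resnet18


def build_optimizer_and_scheduler(model: nn.Module, epochs: int = 180):
    # Main training stage: Adam with initial lr 1e-3, batch size 128 (set in the dataloader),
    # learning rate decayed to 0 with cosine annealing over 180 epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs, eta_min=0.0)
    return optimizer, scheduler


def fine_tune_classifier_and_bn(model: nn.Module):
    # Fine-tuning stage: only the classifier (Linear) and BatchNorm layers are updated,
    # for 10 epochs with Adam at an initial learning rate of 5e-4.
    params = []
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm2d, nn.Linear)):
            params += list(module.parameters())
    return torch.optim.Adam(params, lr=5e-4)


# Example: ResNet18 backbone for CIFAR10-LT (10 classes, before adding abstention classes).
model = resnet18(num_classes=10)
optimizer, scheduler = build_optimizer_and_scheduler(model)
ft_optimizer = fine_tune_classifier_and_bn(model)
```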