Out-of-Distribution Detection in Long-Tailed Recognition with Calibrated Outlier Class Learning
Authors: Wenjun Miao, Guansong Pang, Xiao Bai, Tianqi Li, Jin Zheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical results on three popular benchmarks CIFAR10-LT, CIFAR100-LT, and ImageNet-LT demonstrate that COCL substantially outperforms state-of-the-art OOD detection methods in LTR while being able to improve the classification accuracy on ID data. |
| Researcher Affiliation | Academia | Wenjun Miao (1), Guansong Pang (2)*, Xiao Bai (1), Tianqi Li (1), Jin Zheng (1,4)* — (1) School of Computer Science and Engineering, Beihang University; (2) School of Computing and Information Systems, Singapore Management University; (3) State Key Laboratory of Software Development Environment, Jiangxi Research Institute, Beihang University; (4) State Key Laboratory of Virtual Reality Technology and Systems, Beihang University |
| Pseudocode | No | The paper describes algorithms through text and mathematical equations but does not include structured pseudocode blocks. |
| Open Source Code | Yes | Code is available at https://github.com/mala-lab/COCL. |
| Open Datasets | Yes | We use three popular long-tailed image classification datasets as ID data, including CIFAR10-LT (Cao et al. 2019), CIFAR100-LT (Cao et al. 2019), and ImageNet-LT (Liu et al. 2019). |
| Dataset Splits | No | The paper uses CIFAR10-LT, CIFAR100-LT, and ImageNet-LT as ID data and 80 Million Tiny Images / ImageNet-Extra as auxiliary OOD data, and mentions 'train' and 'test' sets, but does not provide specific details on validation splits (e.g., percentages, methodology, or sample counts). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud computing instance specifications used for running the experiments. |
| Software Dependencies | No | The paper discusses models and loss functions but does not specify software dependencies with version numbers (e.g., specific Python library versions, deep learning framework versions). |
| Experiment Setup | No | The paper mentions the backbone models (ResNet18, ResNet50) and the imbalance ratio (ρ = 100) but does not provide specific details on other experimental setup parameters such as learning rates, batch sizes, optimizers, or number of epochs. |