ATTA: Anomaly-aware Test-Time Adaptation for Out-of-Distribution Detection in Segmentation
Authors: Zhitong Gao, Shipeng Yan, Xuming He
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the efficacy of our method on several OOD segmentation benchmarks, including those with significant domain shifts and those without, observing consistent performance improvements across various baseline models. |
| Researcher Affiliation | Academia | Zhitong Gao¹, Shipeng Yan¹, Xuming He¹﹐² ¹School of Information Science and Technology, ShanghaiTech University ²Shanghai Engineering Research Center of Intelligent Vision and Imaging {gaozht,yanshp,hexm}@shanghaitech.edu.cn |
| Pseudocode | Yes | A Pseudo Code of ATTA The pseudo code for our proposed method, Anomaly-aware Test-Time Adaptation (ATTA), is presented in Algorithm 1. |
| Open Source Code | Yes | Code is available at https://github.com/gaozhitong/ATTA. |
| Open Datasets | Yes | We use the Cityscapes dataset [9] for training and perform OOD detection tests on several different test sets, all of which include novel classes beyond the original Cityscapes labels. |
| Dataset Splits | Yes | For both datasets, we first employ their public validation sets, which consist of 100 images for FS L&F and 30 images for FS Static. |
| Hardware Specification | Yes | Experiments are conducted on one NVIDIA TITAN Xp device, and results are averaged over all images in the FS Lost & Found validation set, with image size (1024 x 2048). |
| Software Dependencies | No | The paper mentions software like 'DeepLabv3+', 'Adam optimizer', and 'torchvision library' but does not specify their version numbers or other crucial software dependencies with version information required for reproducibility. |
| Experiment Setup | Yes | In our method, the confidence thresholds τ1 and τ2 are set to 0.3 and 0.6 respectively. Considering the standard practice in segmentation problem inferences, we anticipate the arrival of one image at a time (batch size = 1). We employ the Adam optimizer with a learning rate of 1e-4. For efficiency, we only conduct one iteration update for each image. The hyperparameters are selected via the FS Static -C dataset and are held constant across all other datasets. |
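The adaptation setup quoted above (batch size 1, Adam with lr 1e-4, one gradient step per arriving image, thresholds τ1 = 0.3 and τ2 = 0.6) can be sketched in PyTorch. This is not the paper's actual ATTA loss (which is anomaly-aware; see the released code at the repository above): the model stand-in and the entropy-minimization objective on high-confidence pixels are placeholder assumptions used only to illustrate the one-iteration-per-image update loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder segmentation network; the paper uses DeepLabv3+ trained on
# the 19 Cityscapes classes. A 1x1 conv keeps the sketch self-contained.
model = nn.Conv2d(3, 19, kernel_size=1)

# Hyperparameters quoted in the paper's setup section.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
TAU1, TAU2 = 0.3, 0.6  # confidence thresholds

def adapt_one_image(image: torch.Tensor) -> torch.Tensor:
    """One adaptation iteration for a single arriving image (batch size = 1)."""
    logits = model(image.unsqueeze(0))           # (1, C, H, W)
    probs = F.softmax(logits, dim=1)
    conf, _ = probs.max(dim=1)                   # per-pixel max probability
    mask = (conf > TAU2).float()                 # keep only confident pixels
    # Placeholder objective: entropy minimization on confident pixels.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    loss = (entropy * mask).sum() / mask.sum().clamp_min(1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                             # single update, then move on
    return logits.detach()

out = adapt_one_image(torch.randn(3, 8, 8))
```

At full Cityscapes resolution (1024 x 2048), each such step processes roughly 2M pixels, which is consistent with the paper's choice of a single iteration per image for efficiency on one TITAN Xp.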