Unsupervised Continual Anomaly Detection with Contrastively-Learned Prompt
Authors: Jiaqi Liu, Kai Wu, Qiang Nie, Ying Chen, Bin-Bin Gao, Yong Liu, Jinbao Wang, Chengjie Wang, Feng Zheng
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments and set the benchmark on unsupervised continual anomaly detection and segmentation, demonstrating that our method is significantly better than anomaly detection methods, even with rehearsal training. The code will be available at https://github.com/shirowalker/UCAD. |
| Researcher Affiliation | Collaboration | Jiaqi Liu1*, Kai Wu2, Qiang Nie2, Ying Chen2, Bin-Bin Gao2, Yong Liu2, Jinbao Wang1, Chengjie Wang2,3, Feng Zheng1 — 1Southern University of Science and Technology, 2Tencent Youtu Lab, 3Shanghai Jiao Tong University |
| Pseudocode | No | The paper describes its methods in text and uses diagrams, but it does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code will be available at https://github.com/shirowalker/UCAD. |
| Open Datasets | Yes | MVTec AD (Bergmann et al. 2019) is the most widely used dataset for industrial image anomaly detection. VisA (Zou et al. 2022) is now the largest dataset for real-world industrial anomaly detection with pixel-level annotations. |
| Dataset Splits | No | The paper describes training and testing sets, but it does not specify explicit validation dataset splits (e.g., percentages, counts, or specific files) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as CPU/GPU models, memory, or cloud computing instance types. It only mentions the use of a "vit-base-patch16-224" backbone pretrained on ImageNet-21K. |
| Software Dependencies | No | The paper mentions using the Adam optimizer, but it does not provide specific version numbers for software dependencies like programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | Yes | During prompt training, we employed a batch size of 8 and adopted the Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.0005 and momentum of 0.9. The training process spanned 25 epochs. |
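
The reported setup (batch size 8, Adam with learning rate 0.0005 and momentum 0.9, 25 epochs) can be mapped to a training loop as in the minimal sketch below. The paper does not state the framework or the prompt/loss implementation, so PyTorch is assumed here, the prompt tensor shape and the toy contrastive-style objective are illustrative placeholders, and the momentum value is interpreted as Adam's beta1; none of this is the authors' actual code.

```python
"""Hedged sketch of the quoted prompt-training configuration (assumptions noted above)."""
import torch
from torch.utils.data import DataLoader, TensorDataset

BATCH_SIZE = 8     # "batch size of 8"
LR = 5e-4          # "learning rate of 0.0005"
MOMENTUM = 0.9     # "momentum of 0.9", mapped to Adam's beta1 (assumption)
EPOCHS = 25        # "training process spanned 25 epochs"

# Placeholder learnable prompts: 5 prompt tokens with ViT-base hidden size 768.
prompts = torch.nn.Parameter(torch.randn(5, 768) * 0.02)
optimizer = torch.optim.Adam([prompts], lr=LR, betas=(MOMENTUM, 0.999))

# Dummy features standing in for ViT patch embeddings of normal training images.
features = torch.randn(64, 768)
loader = DataLoader(TensorDataset(features), batch_size=BATCH_SIZE, shuffle=True)

for epoch in range(EPOCHS):
    for (batch,) in loader:
        # Toy contrastive-style objective: pull each patch feature toward its closest prompt.
        sim = torch.nn.functional.cosine_similarity(
            batch.unsqueeze(1), prompts.unsqueeze(0), dim=-1
        )
        loss = (1.0 - sim.max(dim=1).values).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```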