A Comprehensive Augmentation Framework for Anomaly Detection
Authors: Jiang Lin, Yaping Yan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The evaluations conducted on the MVTec anomaly detection dataset demonstrate that our method outperforms the previous state-of-the-art approach, particularly in terms of object classes. We also generate a simulated dataset comprising anomalies with diverse characteristics, and experimental results demonstrate that our approach exhibits promising potential for generalizing effectively to various unseen anomalies encountered in real-world scenarios. |
| Researcher Affiliation | Academia | Jiang Lin, Yaping Yan* School of Computer Science and Engineering, Southeast University, Nanjing 210096, China Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China {220215663, yan}@seu.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions 'Acknowledgments We thank anomalib (Akcay et al. 2022) for the code support.' which refers to a third-party library, not their own code release. There is no explicit statement or link provided for their own code. |
| Open Datasets | Yes | Dataset Experiments in this paper are conducted on the MVTec (Bergmann et al. 2019) anomaly detection dataset. The MVTec dataset contains 15 classes including 5 classes of textures and 10 classes of objects. This dataset provides a training set with only normal images and a test set comprised of various anomalies. |
| Dataset Splits | Yes | The paper specifies training and test sets implicitly by stating: 'The MVTec dataset contains 15 classes including 5 classes of textures and 10 classes of objects. This dataset provides a training set with only normal images and a test set comprised of various anomalies. It provides pixel-level annotations which allow benchmarks for anomaly localization.' and later 'Specifically, we split the training data in half, and use different samples in reconstruction and localization, thus preparing the localization process for the reconstruction quality drop in practice.' No separate validation set is described. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper thanks 'anomalib (Akcay et al. 2022) for the code support' in the acknowledgments, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Experimental settings All the images are resized to a size of 256 × 256 before entering the network. The training settings and the model choices mostly follow the previous work (Zavrtanik, Kristan, and Skočaj 2021) to make a fair comparison, as this paper mainly focuses on improving performance through more comprehensive training data. We randomly split the training data in half and used them for training each network separately. The data collection process could store similar samples in near positions, so it is worth noting that the data is separated in parity order instead of upper and lower halves. Also, there is no indiscriminate use of image rotation (on anomaly-free images as a data augmentation method, not to simulate anomalies) to alleviate the overfitting issue. |
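The parity-order split quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' released code (none is available): it assumes "parity order" means alternating even/odd indices over the collection-ordered sample list, and the function name `parity_split` is hypothetical.

```python
def parity_split(samples):
    """Split a sample list in half by index parity rather than by
    upper/lower halves, so that temporally adjacent (and therefore
    similar) images land in different halves."""
    half_a = samples[0::2]  # even indices, e.g. for the reconstruction network
    half_b = samples[1::2]  # odd indices, e.g. for the localization network
    return half_a, half_b

if __name__ == "__main__":
    # Hypothetical file list standing in for the MVTec training images.
    files = [f"train_{i:03d}.png" for i in range(6)]
    recon_set, loc_set = parity_split(files)
    print(recon_set)  # ['train_000.png', 'train_002.png', 'train_004.png']
    print(loc_set)    # ['train_001.png', 'train_003.png', 'train_005.png']
```

Compared with splitting into upper and lower halves, this interleaving keeps neighboring (likely similar) captures out of the same half, which is the motivation the paper gives for the parity ordering.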