Learning Semantic Context from Normal Samples for Unsupervised Anomaly Detection
Authors: Xudong Yan, Huaidong Zhang, Xuemiao Xu, Xiaowei Hu, Pheng-Ann Heng
AAAI 2021, pp. 3110-3118
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we perform various experiments on three public benchmark datasets and a new dataset LaceAD collected by us, and show that our method clearly outperforms the current state-of-the-art methods. |
| Researcher Affiliation | Academia | Xudong Yan,1 Huaidong Zhang,1 Xuemiao Xu,1,2,3,4 Xiaowei Hu,5 Pheng-Ann Heng5,6 1South China University of Technology 2Ministry of Education Key Laboratory of Big Data and Intelligent Robot 3Guangdong Provincial Key Lab of Computational Intelligence and Cyberspace Information 4State Key Laboratory of Subtropical Building Science 5The Chinese University of Hong Kong 6Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | Our code was implemented based on the PyTorch 1.10 framework with Python 3.6. The paper does not provide a link to open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate the proposed framework on four datasets: our newly collected LaceAD dataset, MVTec AD (Bergmann et al. 2019), CIFAR-10 (Krizhevsky and Hinton 2009), and MNIST (LeCun 1998). |
| Dataset Splits | No | Note that the training set contains only normal samples, and the testing set has normal and abnormal samples. For the MVTec AD and LaceAD datasets, we resize both training and testing images to 512 × 512, and each training batch contains four images; for the CIFAR-10 and MNIST datasets, we resize the images to 32 × 32 and set the batch size to 512. |
| Hardware Specification | Yes | We trained and tested the network on a single NVIDIA RTX 2080Ti with 64 GB RAM on the Ubuntu 16.04 system. |
| Software Dependencies | Yes | Our code was implemented based on the PyTorch 1.10 framework with Python 3.6. |
| Experiment Setup | Yes | For the training settings, the network was optimized by an Adam optimizer (Kingma and Ba 2015) with β1 = 0, β2 = 0.9, and a learning rate of 0.0001. Training stopped after 200 epochs. The weights in the loss function were set as λrm = 3, λrec = 1, and λadv = 0.001. |
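
The quoted preprocessing and optimization settings translate directly into a PyTorch configuration. Below is a minimal sketch of those settings only, not the authors' released code (none is public): the `model` is a placeholder stand-in for the paper's network, and the three loss terms (λrm, λrec, λadv) are assumed to be computed elsewhere and merely combined here.

```python
import torch
from torchvision import transforms

# Preprocessing quoted for MVTec AD / LaceAD: resize to 512 x 512, batch size 4.
# (For CIFAR-10 / MNIST the quoted values are 32 x 32 and batch size 512.)
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
])
batch_size = 4

# Placeholder network; the architecture here is an assumption, not the paper's model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1),
)

# Optimizer settings quoted in the table: Adam with beta1 = 0, beta2 = 0.9, lr = 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.0, 0.9))
num_epochs = 200

# Loss weights quoted in the table; the individual loss terms are assumed to be
# produced by the paper's respective loss branches.
lambda_rm, lambda_rec, lambda_adv = 3.0, 1.0, 0.001

def total_loss(loss_rm, loss_rec, loss_adv):
    """Weighted sum of the three loss terms as described in the experiment setup."""
    return lambda_rm * loss_rm + lambda_rec * loss_rec + lambda_adv * loss_adv
```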