Few-Shot Fast-Adaptive Anomaly Detection

Authors: Ze Wang, Yipin Zhou, Rui Wang, Tsung-Yu Lin, Ashish Shah, Ser Nam Lim

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We support our findings with strong empirical evidence. In this section, we conduct evaluation on the industrial inspection task with the MVTec-AD dataset [5, 6] (Section 4.1). Even though our proposed framework is image-based, we further demonstrate its efficacy on the video anomaly detection task in Section 4.2. In Section 4.3, we show ablations and insights relating to the adaptive sparse coding components.
Researcher Affiliation | Collaboration | Ze Wang, Yipin Zhou, Rui Wang, Tsung-Yu Lin, Ashish Shah, and Ser-Nam Lim; Purdue University; Meta AI
Pseudocode | Yes | Algorithm 1: Training procedure. Algorithm 2: Inference procedure on a task indexed by i.
Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]
Open Datasets | Yes | We conduct evaluation on the industrial inspection task with the MVTec-AD dataset [5, 6] (Section 4.1). We follow the same evaluation regime as r-GAN by training with normal samples in all 13 scenes from SH-Tech [28] and testing on UCSD Pedestrian 1, UCSD Pedestrian 2 [34], and CUHK Avenue [30].
Dataset Splits | Yes | Specifically, the model is adapted to a support set of the given task, then a query set with ground truth labels is applied to evaluate the adaptation, which is used to update the model parameters. As shown in Fig 2, the support set of the i-th episode task contains a small number of K normal samples {s^i_k}_{k=1}^K. The features z^i_k of these normal samples are plugged into the dictionary D_i ∈ R^{d×Kh_0w_0} corresponding to the i-th task to adapt the dictionary. After that, the adapted model is measured by a query set consisting of M normal samples {q^i_m}_{m=1}^M and M abnormal samples {q̂^i_m}_{m=1}^M. (See the dictionary-adaptation sketch after this table.)
Hardware Specification | Yes | We train with 50000 episodes on 8 NVIDIA V100 GPUs.
Software Dependencies | No | The paper does not explicitly list specific software dependencies with their version numbers (e.g., Python version, deep learning framework versions like PyTorch or TensorFlow, or library versions) required to reproduce the experiments.
Experiment Setup | Yes | We use Adam optimizer [23] with a learning rate of 1e-4 and a batch size of 64. For the MVTec dataset, we set K=10, Q=10, β = 0.5, I = 50000. For the video anomaly detection, we set K=5, Q=10, β = 0.5, I = 20000. (See the configuration sketch after this table.)
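
To make the adaptation step quoted in the Dataset Splits row concrete, the following is a minimal sketch, assuming a generic feature backbone: K normal support features are stacked into a per-task dictionary D_i ∈ R^{d×Kh_0w_0}, and query samples are scored by reconstruction error against it. The function names (build_task_dictionary, score_queries), the ridge-regularized least-squares code used as a stand-in for sparse coding, and the max-over-locations image score are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch (not the paper's implementation): adapt a per-task
# dictionary from K normal support features and score queries by
# reconstruction error against that dictionary.
import torch

def build_task_dictionary(support_feats):
    # support_feats: (K, d, h0, w0) backbone features of the K normal support samples.
    K, d, h0, w0 = support_feats.shape
    # Flatten every spatial location into a dictionary atom: D_i in R^{d x (K*h0*w0)}.
    return support_feats.permute(1, 0, 2, 3).reshape(d, K * h0 * w0)

def score_queries(query_feats, dictionary, ridge=1e-3):
    # query_feats: (M, d, h0, w0). Each spatial feature vector is reconstructed
    # from the dictionary with ridge-regularized least squares (a simple
    # stand-in for sparse coding); the residual serves as the anomaly score.
    M, d, h0, w0 = query_feats.shape
    q = query_feats.permute(0, 2, 3, 1).reshape(-1, d).T          # (d, M*h0*w0)
    # Ridge projection computed in d x d space via the push-through identity
    # D (D^T D + rI)^-1 D^T = D D^T (D D^T + rI)^-1.
    gram = dictionary @ dictionary.T                              # (d, d)
    eye = torch.eye(d, dtype=gram.dtype, device=gram.device)
    proj = gram @ torch.linalg.inv(gram + ridge * eye)
    recon = proj @ q
    residual = (q - recon).pow(2).sum(dim=0)                      # per-location error
    # One simple choice for an image-level score: the worst spatial location.
    return residual.reshape(M, h0 * w0).max(dim=1).values
```

With the quoted settings, support_feats would hold the K normal support samples of an episode (K=10 for MVTec, K=5 for video) and query_feats the M normal plus M abnormal query samples used to evaluate the adaptation.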
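
The hyperparameters in the Experiment Setup row can be gathered into a small configuration object; the sketch below shows one way the reported values might map onto an Adam optimizer and per-dataset settings. The dataclass, its field names, and the model placeholder are assumptions for illustration; only the numeric values come from the quoted text, and the exact role of β is not specified there.

```python
# Hypothetical mapping of the reported hyperparameters onto a training config;
# only the numeric values are taken from the paper's quoted setup.
from dataclasses import dataclass
import torch

@dataclass
class EpisodeConfig:
    k_support: int      # K: normal samples per support set
    q_query: int        # Q: query samples per episode
    beta: float         # β: weighting coefficient (its role is not stated in the quote)
    num_episodes: int   # I: number of training episodes
    lr: float = 1e-4    # reported Adam learning rate
    batch_size: int = 64

MVTEC_CFG = EpisodeConfig(k_support=10, q_query=10, beta=0.5, num_episodes=50_000)
VIDEO_CFG = EpisodeConfig(k_support=5, q_query=10, beta=0.5, num_episodes=20_000)

def make_optimizer(model: torch.nn.Module, cfg: EpisodeConfig) -> torch.optim.Adam:
    # Adam optimizer with the reported learning rate of 1e-4.
    return torch.optim.Adam(model.parameters(), lr=cfg.lr)
```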