ResAD: A Simple Framework for Class Generalizable Anomaly Detection
Authors: Xincheng Yao, Zixin Chen, Chao Gao, Guangtao Zhai, Chongyang Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments on four real-world industrial AD datasets, including MVTec AD [5], VisA [51], BTAD [25], and MVTec3D [7]. |
| Researcher Affiliation | Collaboration | Xincheng Yao (1), Zixin Chen (1), Chao Gao (3), Guangtao Zhai (1), Chongyang Zhang (1,2). (1) School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; (2) MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; (3) China Pacific Insurance (Group) Co., Ltd. Emails: {i-Dover, CZX15724137864, zhaiguangtao, sunny_zhang}@sjtu.edu.cn; gaochao-027@cpic.com.cn |
| Pseudocode | No | The paper describes the ResAD framework components and processes in detail, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/xcyao00/ResAD. |
| Open Datasets | Yes | We conduct comprehensive experiments on four real-world industrial AD datasets, including MVTec AD [5], VisA [51], BTAD [25], and MVTec3D [7]. [...] As for our method's generalizability to other domains, we further evaluate our method on a medical image dataset, BraTS [24] (for brain tumor segmentation), and a video AD dataset, ShanghaiTech [23]. |
| Dataset Splits | No | The paper discusses training and testing datasets ('train AD methods', 'evaluated on the test set') and general training parameters ('total training epochs are set as 100'), but it does not explicitly define a separate validation dataset or its split for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | We run all the experiments with a single NVIDIA RTX 4090 GPU and random seed 42. |
| Software Dependencies | No | The paper mentions using the 'PyTorch library' and the 'Adam' optimizer, but it does not provide specific version numbers for these or other software dependencies such as Python or CUDA. |
| Experiment Setup | Yes | All the training and test images are resized and cropped to 224×224 resolution. ... The layer numbers of the NF model are set as 8. We use the Adam [28] optimizer with weight decay 5e-4 to train the model. The total training epochs are set as 100, and the batch size is 32. The learning rate is 1e-5 initially and dropped by 0.1 after [70, 90] epochs. |
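
The seed and hyperparameter rows above translate directly into a standard PyTorch training loop. Below is a minimal sketch of that reported configuration, not the authors' code: the seed (42), input resolution (224×224), optimizer (Adam, weight decay 5e-4), learning-rate schedule (1e-5, dropped by 0.1 at epochs 70 and 90), epoch count (100), and batch size (32) come from the paper, while the model, data, and loss are hypothetical placeholders for illustration.

```python
"""Sketch of the reported ResAD training setup; model/data/loss are placeholders."""
import random

import numpy as np
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Fix all seeds to 42, matching the paper's reported setup.
random.seed(42)
np.random.seed(42)
torch.manual_seed(42)

# Placeholder data: random 3x224x224 "images" (the paper resizes and crops
# real dataset images to 224x224) with dummy targets.
images = torch.randn(64, 3, 224, 224)
targets = torch.zeros(64, dtype=torch.long)
train_loader = DataLoader(TensorDataset(images, targets), batch_size=32, shuffle=True)

# Placeholder model standing in for ResAD (whose NF model has 8 layers).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))

# Adam with weight decay 5e-4; lr starts at 1e-5 and is multiplied by 0.1
# after epochs 70 and 90 (MultiStepLR reproduces that schedule).
optimizer = optim.Adam(model.parameters(), lr=1e-5, weight_decay=5e-4)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[70, 90], gamma=0.1)
criterion = nn.CrossEntropyLoss()  # placeholder; the paper trains with its own objective

for epoch in range(100):  # 100 total training epochs
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

Note that the paper does not state intermediate resize sizes, data augmentation, or the training loss in this excerpt, so a faithful reproduction should take those details from the released code at https://github.com/xcyao00/ResAD rather than from this sketch.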