ReContrast: Domain-Specific Anomaly Detection via Contrastive Reconstruction
Authors: Jia Guo, Shuai Lu, Lize Jia, Weihang Zhang, Huiqi Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate our transfer ability on various image domains, we conduct extensive experiments across two popular industrial defect detection benchmarks and three medical image UAD tasks, which shows our superiority over current state-of-the-art methods. |
| Researcher Affiliation | Academia | Jia Guo1 Shuai Lu2 Lize Jia2 Weihang Zhang2 Huiqi Li1,2 1School of Information and Electronics, Beijing Institute of Technology, China 2School of Medical Technology, Beijing Institute of Technology, China |
| Pseudocode | No | The paper describes its method using textual descriptions and diagrams (e.g., Figure 1, 2, 3) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/guojiajeremy/ReContrast |
| Open Datasets | Yes | MVTec AD is the most widely used industrial defect detection dataset, containing 15 categories of sub-datasets (5 textures and 10 objects). VisA is a challenging industrial defect detection dataset, containing 12 categories of sub-datasets. OCT2017 is an optical coherence tomography dataset [21]. APTOS is a color fundus image dataset, available as the training set of the 2019 APTOS blindness detection challenge [22]. ISIC2018 is a skin disease dataset, available as task 3 of the ISIC2018 challenge [23]. |
| Dataset Splits | No | The paper provides specific training and testing set splits for various datasets, for example: 'The training set contains 26,315 normal images and the test set contains 1,000 images' (OCT2017). However, it does not explicitly provide details about a separate validation set split used for hyperparameter tuning or early stopping. It implicitly indicates its absence in the Limitations section: 'Because of the absence of validation sets in UAD settings, whether the last epoch (for reporting results) is in the middle of a loss spike and performance dip is related to random seeds.' |
| Hardware Specification | Yes | Experiments are run on NVIDIA GeForce RTX 3090 GPUs (24 GB). |
| Software Dependencies | Yes | Code is implemented with Python 3.8 and PyTorch 1.12.0 with CUDA 11.3. |
| Experiment Setup | Yes | The AdamW optimizer [31] is utilized with β=(0.9, 0.999) and weight decay 1e-5. The learning rates of new (decoder and bottleneck) and pre-trained (encoder) parameters are 2e-3 and 1e-5, respectively. The network is trained for 3,000 iterations on VisA, 2,000 on MVTec AD and ISIC2018, and 1,000 on APTOS and OCT2017. The α in equation (5) linearly rises from -3 to 1 over the first one-tenth of iterations and stays at 1 for the rest of training. The batch size is 16 for industrial datasets and 32 for medical datasets. |
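
The α warmup described in the Experiment Setup row can be sketched as a small scheduling function. This is a minimal illustration, not the authors' code: the function name `alpha_schedule` and the integer-division choice for the warmup length are assumptions; only the endpoints (-3 to 1), the one-tenth warmup fraction, and the per-dataset iteration counts come from the paper.

```python
def alpha_schedule(step: int, total_iters: int) -> float:
    """Linearly ramp alpha from -3 to 1 over the first tenth of
    training, then hold it at 1 (hypothetical sketch of eq. (5)'s alpha)."""
    warmup = total_iters // 10  # assumption: one-tenth rounded down
    if step >= warmup:
        return 1.0
    # linear interpolation between -3 (start) and 1 (end of warmup)
    return -3.0 + (1.0 - (-3.0)) * step / warmup


# Example with the VisA setting (3,000 iterations -> 300 warmup steps):
print(alpha_schedule(0, 3000))     # start of training: -3.0
print(alpha_schedule(150, 3000))   # midway through warmup: -1.0
print(alpha_schedule(2999, 3000))  # after warmup: 1.0
```

The two-group learning-rate setup (2e-3 for decoder/bottleneck, 1e-5 for the pre-trained encoder) would correspond to passing separate parameter groups with per-group `lr` values to PyTorch's `torch.optim.AdamW`.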