Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Revisiting Deep Feature Reconstruction for Logical and Structural Industrial Anomaly Detection

Authors: Sukanya Patra, Souhaib Ben Taieb

TMLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Our empirical evaluation across five benchmark datasets demonstrates the performance of ULSAD in detecting and localizing both structural and logical anomalies, outperforming eight state-of-the-art methods. An extensive ablation study further highlights the contribution of each component to the overall performance improvement.
Researcher Affiliation Academia Sukanya Patra (EMAIL), University of Mons; Souhaib Ben Taieb (EMAIL), Mohamed bin Zayed University of Artificial Intelligence and University of Mons
Pseudocode Yes Algorithm 1: Unified Logical and Structural AD (ULSAD)
Open Source Code Yes Our code is available at https://github.com/sukanyapatra1997/ULSAD-2024.git.
Open Datasets Yes [1] BTAD (Mishra et al., 2021). [2] MVTec AD (Bergmann et al., 2019). [3] MVTec-Loco (Bergmann et al., 2022). [4] MPDD (Jezek et al., 2021). [5] VisA (Zou et al., 2022).
Dataset Splits Yes The training and validation sets contain only normal samples, i.e., y = 0. For the sake of simplicity, we refer to the training set as DN = {X | (X, 0) ∈ Dtrain}. The test set Dtest includes both normal and anomalous samples... [MVTec-Loco] It consists of 5 categories, with 1,772 normal images for training and 304 normal images for validation. It also contains 1,568 images, either normal or anomalous, for evaluation.
Hardware Specification Yes For this analysis, we ran inference on the test samples in the MVTec LOCO dataset using an NVIDIA A100 GPU... Moreover, we used a single NVIDIA A4000 GPU for all the experiments unless mentioned otherwise.
Software Dependencies No ULSAD is implemented in PyTorch (Paszke et al., 2019). For the baselines, we follow the implementation in Anomalib (Akcay et al., 2022), a widely used AD library for benchmarking.
Experiment Setup Yes We train ULSAD over 200 epochs for each category using an Adam optimizer with a learning rate of 0.0002 and a weight decay of 0.00002. We set α = 0.9 and β = 0.995 unless specified otherwise.
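The reported optimizer settings can be expressed directly as a PyTorch configuration. The sketch below is illustrative only: the `torch.nn.Linear` module is a placeholder, since ULSAD's actual architecture is defined in the authors' repository; only the learning rate, weight decay, and epoch count come from the paper.

```python
import torch

# Placeholder module standing in for the ULSAD model (see the authors' repo
# at https://github.com/sukanyapatra1997/ULSAD-2024.git for the real one).
model = torch.nn.Linear(8, 8)

# Optimizer configuration as reported: Adam, lr = 0.0002, weight decay = 0.00002.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, weight_decay=2e-5)

# Training length as reported: 200 epochs per category.
num_epochs = 200
```

The hyperparameters α = 0.9 and β = 0.995 mentioned in the quote are ULSAD-specific loss/update coefficients and are not part of the optimizer itself.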