Masked Contrastive Learning for Anomaly Detection
Authors: Hyunsoo Cho, Jinseok Seol, Sang-goo Lee
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our approaches on various image benchmark datasets, where we obtain significant performance gain over the previous state-of-the-art. |
| Researcher Affiliation | Academia | Hyunsoo Cho, Jinseok Seol and Sang-goo Lee, Seoul National University {johyunsoo, jamie, sglee}@europa.snu.ac.kr |
| Pseudocode | No | The paper describes the mathematical formulations and components of MCL and SEI but does not include a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The source code for our model is available online: https://github.com/HarveyCho/MCL |
| Open Datasets | Yes | We trained our model on CIFAR-10 [Krizhevsky et al., 2009] as IND, and used CIFAR-100, SVHN [Netzer et al., 2011], ImageNet [Deng et al., 2009], and LSUN [Yu et al., 2015] datasets for OOD. |
| Dataset Splits | No | The paper uses CIFAR-10 as in-distribution (IND) data and other datasets for out-of-distribution (OOD) testing, but it does not specify explicit train/validation/test splits (percentages or counts) for these datasets. It reports "Test Acc." without detailing how the training and validation splits were set up for the in-distribution data. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU, GPU models, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions using "SGD optimizer" but does not list any specific software dependencies with version numbers (e.g., Python version, deep learning framework versions like PyTorch or TensorFlow). |
| Experiment Setup | Yes | Experiment configurations. In the following experiments, we adopt ResNet-34 [He et al., 2016] with a single projection head, following the structure used to train CIFAR-10 in SimCLR. We also fixed hyper-parameters related to contrastive learning following SimCLR, which include transformation T = {color jittering, horizontal flip, grayscale, inception crop}, the strength of color distortion to 0.5, batch size to 1024, and temperature τ to 0.2, to keep our experiment tractable. For MCL hyper-parameters, we set α to 0.05, β to 2.5, and λ to 1, which meets certain conditions for MCL (see Appendix A for details). Unlike SimCLR, we used the SGD optimizer with learning rate 1.2 (0.3 × batch size / 256), weight decay 1e-6, and momentum 0.9. Furthermore, we use a cosine annealing scheduler without any warm-up. |
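
The Experiment Setup row above reports enough hyper-parameters to reconstruct the training configuration. The following is a minimal PyTorch sketch of that configuration, assuming torchvision transforms and a standard ResNet-34; it is not the authors' released code (that is at https://github.com/HarveyCho/MCL), and details such as the projection-head output dimension, the exact color-jitter factors, and the epoch count are assumptions rather than values stated in the paper.

```python
# Sketch of the reported training configuration, NOT the authors' implementation.
# Values marked "assumed" are not given in the paper.
import torch
import torchvision.transforms as T
from torchvision.models import resnet34

batch_size = 1024

# SimCLR-style augmentations with color-distortion strength s = 0.5:
# inception (random resized) crop, horizontal flip, color jittering, grayscale.
s = 0.5
train_transform = T.Compose([
    T.RandomResizedCrop(32),               # CIFAR-10 image size
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.8 * s, 0.8 * s, 0.8 * s, 0.2 * s)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

# Contrastive-learning and MCL hyper-parameters reported in the paper.
temperature = 0.2                 # softmax temperature tau
alpha, beta, lam = 0.05, 2.5, 1.0  # MCL-specific hyper-parameters

# ResNet-34 encoder with a single projection head (output dimension assumed).
encoder = resnet34(num_classes=128)

# SGD with lr = 0.3 * batch_size / 256 = 1.2, weight decay 1e-6, momentum 0.9.
optimizer = torch.optim.SGD(
    encoder.parameters(),
    lr=0.3 * batch_size / 256,
    momentum=0.9,
    weight_decay=1e-6,
)

# Cosine annealing without any warm-up (total epoch count assumed).
epochs = 1000
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
```

With batch size 1024, the scaled learning rate 0.3 × batch_size / 256 evaluates to 1.2, matching the value reported in the paper.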