Novelty Detection via Contrastive Learning with Negative Data Augmentation
Authors: Chengwei Chen, Yuan Xie, Shaohui Lin, Ruizhi Qiao, Jian Zhou, Xin Tan, Yi Zhang, Lizhuang Ma
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our model has significant superiority over cutting-edge novelty detectors and achieves new state-of-the-art results on various novelty detection benchmarks, e.g. CIFAR10 and DCASE. |
| Researcher Affiliation | Collaboration | Chengwei Chen¹, Yuan Xie¹, Shaohui Lin¹, Ruizhi Qiao², Jian Zhou², Xin Tan³, Yi Zhang⁴ and Lizhuang Ma¹ (¹East China Normal University, ²Tencent Youtu Lab, ³Shanghai Jiao Tong University, ⁴Zhejiang Lab) |
| Pseudocode | No | The paper describes the system architecture and training process in text and diagrams but does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We select CIFAR-10 [Krizhevsky and Hinton, 2009], COIL-100 [Nene et al., 1996], MNIST [Lecun and Bottou, 1998], fMNIST [Xiao et al., 2017] and DCASE [Mesaros et al., 2017] as the standard evaluation datasets. |
| Dataset Splits | Yes | 80% of in-class samples are regarded as the normal class for training, while the remaining 20% of in-class samples are used for testing (for COIL-100, MNIST, fMNIST); DCASE contains 491, 496 and 500 audio files of roughly 30 seconds in the training, validation and test datasets, respectively. A minimal per-class split sketch is given after this table. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | We use PyTorch [Paszke et al., 2019] to implement our method. (No version number is provided for PyTorch or any other software dependency.) |
| Experiment Setup | Yes | For training parameters, the learning rate and the number of total epochs are set to 0.002 and 100, respectively. An SGD optimizer with momentum is adopted to optimize the parameters of our framework. Batch size, momentum and weight decay are set to 128, 0.9 and 0.005, respectively. For hyperparameters, β and γ are set to 0.5 and 0.1, respectively; λ1, λ2 and λ3 are all set to 1. A hedged configuration sketch appears after this table. |
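The per-class 80%/20% protocol quoted in the Dataset Splits row can be reproduced with a split along the following lines. This is a minimal sketch, assuming a torchvision-style dataset; the chosen normal class, the seed, and the variable names are illustrative assumptions, not the authors' code.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Hypothetical reproduction of the one-class protocol described in the paper:
# all samples of a single "normal" class are split 80% / 20% into train / test.
# NORMAL_CLASS and SEED below are illustrative assumptions.
NORMAL_CLASS = 0
SEED = 0

full_train = datasets.MNIST(root="./data", train=True, download=True,
                            transform=transforms.ToTensor())

# Indices of the chosen in-class (normal) samples.
targets = torch.as_tensor(full_train.targets)
in_class_idx = torch.nonzero(targets == NORMAL_CLASS, as_tuple=False).squeeze(1)

# Shuffle deterministically, then take 80% for training and 20% for testing.
g = torch.Generator().manual_seed(SEED)
perm = in_class_idx[torch.randperm(len(in_class_idx), generator=g)]
split = int(0.8 * len(perm))
train_set = Subset(full_train, perm[:split].tolist())
test_normal_set = Subset(full_train, perm[split:].tolist())

print(len(train_set), len(test_normal_set))
```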
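The Experiment Setup row can likewise be expressed as an optimizer and training-loop skeleton. The sketch below is assumption-laden: the model, the data, and the combined objective are placeholders, since the paper's code is not public; only the optimizer choice and hyperparameter values (learning rate, epochs, batch size, momentum, weight decay, β, γ, λ1, λ2, λ3) are taken from the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Reported training parameters from the Experiment Setup row above.
LR, EPOCHS, BATCH_SIZE = 0.002, 100, 128
MOMENTUM, WEIGHT_DECAY = 0.9, 0.005
BETA, GAMMA = 0.5, 0.1               # β and γ from the paper (not used in this stub)
LAMBDA1 = LAMBDA2 = LAMBDA3 = 1.0    # loss-term weights λ1, λ2, λ3

# The model and data below are hypothetical stand-ins; only the optimizer
# configuration and the constants above are grounded in the paper.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32 * 3, 128))
dummy_data = TensorDataset(torch.randn(1024, 3, 32, 32))
loader = DataLoader(dummy_data, batch_size=BATCH_SIZE, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=LR,
                            momentum=MOMENTUM, weight_decay=WEIGHT_DECAY)

for epoch in range(EPOCHS):
    for (x,) in loader:
        optimizer.zero_grad()
        z = model(x)
        # Placeholder objective: the actual framework optimizes a weighted sum
        # of three losses (scaled by λ1, λ2, λ3), with β and γ entering
        # individual terms; those terms are not reconstructed here.
        loss = LAMBDA1 * z.pow(2).mean()
        loss.backward()
        optimizer.step()
```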