Rotation Has Two Sides: Evaluating Data Augmentation for Deep One-class Classification
Authors: Guodong Wang, Yunhong Wang, Xiuguo Bao, Di Huang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct evaluations on popular OCC benchmarks, including CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (20 superclasses) (Krizhevsky et al., 2009), and Cat-vs-Dog (Elson et al., 2007). Our ablation study considers variations in pre-training datasets (CIFAR-10 vs. ImageNet-1K), pre-training paradigms (self-supervised vs. supervised), and architecture types (ViT-B/16 (Dosovitskiy et al., 2021) vs. ResNet-18) in the first stage. |
| Researcher Affiliation | Collaboration | Guodong Wang (1,2), Yunhong Wang (2), Xiuguo Bao (3), Di Huang (1,2); (1) State Key Laboratory of Software Development Environment, Beihang University, Beijing, China; (2) School of Computer Science and Engineering, Beihang University, Beijing, China; (3) Natl. Comp. Net. Emer. Resp. Tech. Team/Coord. Ctr. of China, Beijing, China; {wanggd,yhwang,dhuang}@buaa.edu.cn, baoxiuguo@139.com |
| Pseudocode | No | The paper provides a diagram (Figure 2) illustrating the framework's overview but does not include any explicit pseudocode blocks or algorithms. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | We conduct evaluations on popular OCC benchmarks, including CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (20 superclasses) (Krizhevsky et al., 2009), and Cat-vs-Dog (Elson et al., 2007). We evaluate these transformations on the Street View House Numbers (SVHN) dataset (Netzer et al., 2011). We additionally conduct experiments on the Tiny-ImageNet dataset (Le & Yang, 2015) and an experimental evaluation on the MVTec-AD dataset (Bergmann et al., 2019). (See the dataset-loading sketch after the table.) |
| Dataset Splits | No | The paper discusses training and testing, and mentions using 'standard train/validation split' in the context of other methods, but does not explicitly provide the specific training, validation, and test dataset splits (e.g., percentages or counts) used for their own experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., 'Python 3.x', 'PyTorch 1.x'). |
| Experiment Setup | Yes | We train the learnable linear layer hγ for 50 epochs, use stochastic gradient descent (SGD) with momentum as the optimizer, and set the learning rate to 0.1. Our batch size is set to 256, and the temperature τ in the Gumbel softmax is fixed at 1.0. (See the training sketch after the table.) |
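
For reference, the following is a minimal sketch of how the publicly available benchmarks cited above could be loaded with torchvision under an assumed standard one-class protocol (one class treated as normal at train time). Cat-vs-Dog, Tiny-ImageNet, and MVTec-AD are not bundled with torchvision and must be obtained separately, and the 20-superclass CIFAR-100 grouping requires coarse labels that torchvision's `CIFAR100` class does not expose directly. The `normal_class` choice below is illustrative, not taken from the paper.

```python
import torch
from torchvision import datasets, transforms

tfm = transforms.ToTensor()

# CIFAR-10 / CIFAR-100 / SVHN ship with torchvision; Cat-vs-Dog,
# Tiny-ImageNet, and MVTec-AD must be downloaded separately.
cifar10 = datasets.CIFAR10(root="./data", train=True, download=True, transform=tfm)
cifar100 = datasets.CIFAR100(root="./data", train=True, download=True, transform=tfm)
svhn = datasets.SVHN(root="./data", split="train", download=True, transform=tfm)

# Assumed one-class setup: keep a single class as "normal" for training.
normal_class = 0  # illustrative; OCC protocols typically iterate over all classes
normal_idx = [i for i, y in enumerate(cifar10.targets) if y == normal_class]
train_subset = torch.utils.data.Subset(cifar10, normal_idx)
```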
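The reported optimizer settings translate into the PyTorch sketch below. Only the values stated in the row (50 epochs, SGD with momentum, learning rate 0.1, batch size 256, Gumbel-softmax temperature 1.0) come from the paper; the feature dimension, number of candidate augmentations, momentum value, feature loader, and loss function (`FEAT_DIM`, `NUM_AUG`, `feature_loader`, `compute_occ_loss`) are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EPOCHS = 50       # reported
BATCH_SIZE = 256  # reported; would parameterize the real feature loader
LR = 0.1          # reported
TAU = 1.0         # reported Gumbel-softmax temperature

# h_gamma: the learnable linear layer; FEAT_DIM / NUM_AUG are placeholders.
FEAT_DIM, NUM_AUG = 512, 4
h_gamma = nn.Linear(FEAT_DIM, NUM_AUG)

# The paper says "SGD with momentum"; the momentum value 0.9 is an assumption.
optimizer = torch.optim.SGD(h_gamma.parameters(), lr=LR, momentum=0.9)

# Dummy stand-in for batches of frozen backbone features.
feature_loader = [torch.randn(BATCH_SIZE, FEAT_DIM) for _ in range(4)]

def compute_occ_loss(weights, feats):
    # Placeholder objective; the paper's actual OCC loss is not reproduced here.
    return (weights ** 2).mean()

for epoch in range(EPOCHS):
    for feats in feature_loader:
        logits = h_gamma(feats)
        # Differentiable selection over augmentations via the Gumbel softmax.
        weights = F.gumbel_softmax(logits, tau=TAU)
        loss = compute_occ_loss(weights, feats)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

`F.gumbel_softmax` with `tau=TAU` matches the fixed temperature reported in the row; everything downstream of it is scaffolding so the sketch executes end to end.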