Random Erasing Data Augmentation

Authors: Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang (pp. 13001-13008)

AAAI 2020

Reproducibility assessment (Variable: Result, with the supporting LLM response excerpt):
Research Type: Experimental. "In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). ... yields consistent improvement over strong baselines in image classification, object detection and person re-identification. For image classification, we evaluate on four image classification datasets, including two well-known datasets, CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton 2009), a new dataset Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and a large-scale dataset ImageNet2012 (Deng et al. 2009)."
Researcher Affiliation: Academia. 1) Department of Artificial Intelligence, Xiamen University; 2) ReLER, University of Technology Sydney; 3) Research School of Computer Science, Australian National University; 4) School of Computer Science, Carnegie Mellon University.
Pseudocode: Yes. "Algorithm 1: Random Erasing Procedure"
Open Source Code: Yes. "Code is available at: https://github.com/zhunzhong07/Random-Erasing."
Open Datasets: Yes. "For image classification, we evaluate on four image classification datasets, including two well-known datasets, CIFAR-10 and CIFAR-100 (Krizhevsky and Hinton 2009), a new dataset Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and a large-scale dataset ImageNet2012 (Deng et al. 2009). For object detection, we use the PASCAL VOC 2007 (Everingham et al. 2010) dataset... For person re-identification (re-ID), the Market-1501 dataset (Zheng et al. 2015) contains... DukeMTMC-reID (Zheng, Zheng, and Yang 2017; Ristani et al. 2016) includes... For CUHK03 (Li et al. 2014)..."
Dataset Splits: Yes. "CIFAR-10 and CIFAR-100 contain 50,000 training and 10,000 testing 32×32 color images drawn from 10 and 100 classes, respectively. Fashion-MNIST consists of 60,000 training and 10,000 testing 28×28 gray-scale images. ... ImageNet2012 consists of 1,000 classes, including 1.28 million training images and 50k validation images."
Hardware Specification: No. No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper.
Software Dependencies: No. The paper mentions software components such as ResNet, Fast-RCNN, VGG16, SGD, Softmax loss, and triplet loss, but does not provide version numbers for any libraries, frameworks, or other software dependencies.
Experiment Setup: Yes. "We set p = 0.5, sl = 0.02, sh = 0.4, and r1 = 1/r2 = 0.3. ... the learning rate starts from 0.1 and is divided by 10 after the 150th and 225th epoch. We stop training by the 300th epoch. ... We apply SGD for 80K iterations to train all models. The learning rate starts from 0.001 and decreases to 0.0001 after 60K iterations. ... The input images are resized to 256×128. We use the ResNet-18, ResNet-34, and ResNet-50 architectures for IDE and TriNet, and ResNet-50 for SVDNet. ... For Random Erasing, we set p = 0.5, sl = 0.02, sh = 0.2, and r1 = 1/r2 = 0.3."
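To make Algorithm 1 and the hyperparameters quoted above concrete, here is a minimal stdlib-only sketch of the Random Erasing procedure. It operates on a grayscale image stored as a list of lists; the authors' released code (github.com/zhunzhong07/Random-Erasing) works on image tensors and per-channel fill values instead, so treat this only as an illustration of the sampling logic, with the function name and in-place convention being my own choices.

```python
import math
import random

def random_erasing(img, p=0.5, sl=0.02, sh=0.4, r1=0.3):
    """Erase one random rectangle of img in place and return it.

    Hyperparameters follow the paper's classification setting:
    p      -- probability that erasing is applied at all
    sl, sh -- min/max erased area as a fraction of the image area
    r1     -- min aspect ratio of the rectangle (max is 1/r1)
    """
    if random.random() >= p:
        return img  # skip erasing with probability 1 - p
    h, w = len(img), len(img[0])
    for _ in range(100):  # resample until a valid rectangle fits
        target_area = random.uniform(sl, sh) * h * w
        aspect = random.uniform(r1, 1.0 / r1)
        eh = int(round(math.sqrt(target_area * aspect)))
        ew = int(round(math.sqrt(target_area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            y = random.randint(0, h - eh)
            x = random.randint(0, w - ew)
            for i in range(y, y + eh):
                for j in range(x, x + ew):
                    img[i][j] = random.randint(0, 255)  # random fill value
            return img
    return img  # no valid rectangle found; image left unchanged
```

For the re-ID experiments the same procedure applies with sh = 0.2, per the quoted setup.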
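The step learning-rate schedules quoted in the Experiment Setup row can be sketched as simple lookups. The function names and the epoch/iteration conventions below are my own, not taken from the paper's code:

```python
def classification_lr(epoch, base_lr=0.1):
    """Classification schedule: start at 0.1, divide by 10 after
    the 150th and again after the 225th epoch; training stops at 300."""
    if epoch > 225:
        return base_lr / 100
    if epoch > 150:
        return base_lr / 10
    return base_lr

def detection_lr(iteration, base_lr=1e-3):
    """Detection schedule: 0.001 for the first 60K SGD iterations,
    then 0.0001 for the remainder of the 80K-iteration run."""
    return base_lr if iteration < 60_000 else base_lr / 10
```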