RecursiveMix: Mixed Learning with History

Authors: Lingfeng Yang, Xiang Li, Borui Zhao, Renjie Song, Jian Yang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Based on ResNet-50, RM largely improves classification accuracy by 3.2% on CIFAR-100 and 2.8% on ImageNet with negligible extra computation/storage costs. In the downstream object detection task, the RM-pretrained model outperforms the baseline by 2.1 AP points and surpasses CutMix by 1.4 AP points under the ATSS detector on COCO. In semantic segmentation, RM also surpasses the baseline and CutMix by 1.9 and 1.1 mIoU points under UperNet on ADE20K, respectively. Codes and pretrained models are available at https://github.com/implus/RecursiveMix.
Researcher Affiliation | Collaboration | Lingfeng Yang (1)#, Xiang Li (2,1)#, Borui Zhao (3), Renjie Song (3), Jian Yang (1); (1) Nanjing University of Science and Technology, (2) Nankai University, (3) Megvii Technology
Pseudocode | No | The paper describes the method using equations and textual descriptions but does not include any pseudocode or algorithm blocks. (An illustrative sketch of the training step is given after the table.)
Open Source Code | Yes | Codes and pretrained models are available at https://github.com/implus/RecursiveMix.
Open Datasets | Yes | The two CIFAR datasets [37] consist of colored natural scene images, each with 32×32 pixels. The train and test sets have 50K images and 10K images respectively. The ImageNet 2012 dataset [14] contains 1.28 million training images and 50K validation images from 1K classes. We conduct experiments using the one-stage object detectors ATSS [71] and GFL [42, 41, 40], and the two-stage detectors Mask R-CNN [25] and HTC [6] on the COCO [43] dataset. Next, we experiment on ADE20K [75] using two popular algorithms, i.e., PSPNet [72] and UperNet [67].
Dataset Splits | Yes | The train and test sets have 50K images and 10K images respectively. (CIFAR) The ImageNet 2012 dataset [14] contains 1.28 million training images and 50K validation images from 1K classes. (A loader sketch reproducing the CIFAR split follows the table.)
Hardware Specification | Yes | Table 11: Comparisons of the training efficiency by hours, evaluated on 8 TITAN Xp GPUs.
Software Dependencies | No | The paper mentions using optimizers like SGD and AdamW but does not specify version numbers for any software libraries, programming languages, or other dependencies.
Experiment Setup | Yes | For 200-epoch training, we employ SGD with a momentum of 0.9, a weight decay of 5×10⁻⁴, and 2 GPUs with a mini-batch size of 64 on each to optimize the models. The learning rate is set to 0.1 with a linear warmup [26] for five epochs and a cosine decay schedule [46]. The hyperparameters for RM on ImageNet are set to α=0.5, ω=0.5. All networks are trained using SGD with a momentum of 0.9, a weight decay of 1×10⁻⁴, and 8 GPUs with a mini-batch size of 64 on each. The initial learning rate is 0.2 with a linear warmup [26] for five epochs and is then decayed following a cosine schedule [46]. To optimize Transformer backbone networks, we use AdamW [47] with a learning rate of 5×10⁻⁴, a momentum of 0.9, a weight decay of 5×10⁻², and 8 GPUs with a mini-batch size of 64 on each. (A minimal sketch of the CIFAR optimizer and schedule appears after the table.)
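
Since the paper itself provides no pseudocode, the following is a minimal, hypothetical PyTorch-style sketch of a RecursiveMix-like training step written from the textual description: the previous iteration's batch is resized and pasted into a corner of the current batch, the label loss is mixed in proportion to the pasted area, and a consistency term ties current predictions to the stored historical predictions. All names (recursive_mix_step, state, etc.) are illustrative, and the consistency term is simplified to a KL divergence on full-image predictions rather than the paper's ROI-aligned loss; see the official repository for the authors' implementation.

    import torch
    import torch.nn.functional as F

    def recursive_mix_step(model, images, targets, state, alpha=0.5, omega=0.5):
        # Illustrative sketch only; assumes equal batch sizes across iterations
        # (e.g. drop_last=True) and a plain image-classification model.
        n, _, h, w = images.shape
        has_history = state.get("images") is not None

        if has_history:
            lam = float(torch.empty(1).uniform_(0, alpha))      # pasted-area ratio
            ph, pw = max(1, int(h * lam ** 0.5)), max(1, int(w * lam ** 0.5))
            hist = F.interpolate(state["images"][:n], size=(ph, pw),
                                 mode="bilinear", align_corners=False)
            images = images.clone()
            images[:, :, :ph, :pw] = hist                       # paste history into current batch

        logits = model(images)
        loss = F.cross_entropy(logits, targets)

        if has_history:
            area = (ph * pw) / (h * w)
            # mix labels in proportion to the pasted area
            loss = (1 - area) * loss + area * F.cross_entropy(logits, state["targets"][:n])
            # consistency with the stored historical predictions (simplified)
            loss = loss + omega * F.kl_div(F.log_softmax(logits, dim=1),
                                           state["probs"][:n], reduction="batchmean")

        # carry this iteration forward as "history" for the next one
        state.update(images=images.detach(), targets=targets.detach(),
                     probs=F.softmax(logits, dim=1).detach())
        return loss

With state initialized to an empty dict, the first iteration reduces to standard cross-entropy training, and history accumulates recursively from the second iteration onward.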
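
For the CIFAR figures quoted under "Dataset Splits", the 50K/10K partition is the standard torchvision split; the loader below is my own example rather than code from the paper, and the augmentations are assumptions.

    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),   # common CIFAR augmentation (assumed)
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])
    # train=True yields the 50K training images, train=False the 10K test images
    train_set = datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
    test_set = datasets.CIFAR100(root="./data", train=False, download=True,
                                 transform=transforms.ToTensor())
    print(len(train_set), len(test_set))  # 50000 10000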
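
The CIFAR recipe quoted under "Experiment Setup" (SGD, momentum 0.9, weight decay 5×10⁻⁴, base learning rate 0.1, 5-epoch linear warmup followed by cosine decay over 200 epochs) can be expressed as follows. The LambdaLR scheduler here is a generic per-epoch approximation of the warmup-plus-cosine schedule, and the model is a placeholder, not the authors' exact code.

    import math
    import torch
    from torch.optim.lr_scheduler import LambdaLR

    epochs, warmup_epochs, base_lr = 200, 5, 0.1
    model = torch.nn.Linear(3 * 32 * 32, 100)   # placeholder for the actual backbone
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                                momentum=0.9, weight_decay=5e-4)

    def lr_lambda(epoch):
        # linear warmup for the first 5 epochs, then cosine decay toward zero
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    scheduler = LambdaLR(optimizer, lr_lambda)

    for epoch in range(epochs):
        # ... one training epoch (2 GPUs x 64 images each in the paper) ...
        scheduler.step()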