SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data
Authors: Shaoli Huang, Xinchao Wang, Dacheng Tao
AAAI 2021, pp. 1628-1636
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we extensively evaluated the performance of SnapMix on three fine-grained datasets. We evaluated our method using multiple network structures (ResNet-18, 34, 50, 101) as baselines. We compared the performance of our approach and related data augmentation methods on each network architecture. Further, we tested our method using a strong baseline that integrated mid-level features and compared the results with those of the current state-of-the-art methods of fine-grained recognition. |
| Researcher Affiliation | Academia | Shaoli Huang (1), Xinchao Wang (2), Dacheng Tao (1); 1 The University of Sydney, 2 Stevens Institute of Technology; shaoli.huang@sydney.edu.au, xinchao.wang@stevens.edu, dacheng.tao@sydney.edu.au |
| Pseudocode | No | The paper describes the method using text and mathematical equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at: https://github.com/Shaoli-Huang/SnapMix.git. |
| Open Datasets | Yes | We conduct experiments on three standard fine-grained datasets, which are CUB-200-2011 (Wah et al. 2011), Stanford-Cars (Krause et al. 2013), and FGVC-Aircraft (Maji et al. 2013). |
| Dataset Splits | Yes | We conduct experiments on three standard fine-grained datasets, which are CUB-200-2011 (Wah et al. 2011), Stanford-Cars (Krause et al. 2013), and FGVC-Aircraft (Maji et al. 2013). For each dataset, we first resized images to 512×512 and cropped them with size 448×448. (See the preprocessing sketch after this table.) |
| Hardware Specification | No | The paper does not specify any hardware details like GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions adapting implementation from the "TorchVision package" but does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | We used stochastic gradient descent (SGD) with momentum 0.9, base learning rate 0.001 for the pre-trained weights, and 0.01 for new parameters. We trained our model for 200 epochs and decayed the learning rate by factor 0.1 every 80 epochs. ... We set the probability of performing augmentation to 0.5 for Cutout and Mixup and 1.0 for CutMix. We used the α values of 1.0 and 3.0 for Mixup and CutMix, respectively. (See the training-setup sketch after this table.) |
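
To make the preprocessing quoted in the Dataset Splits row concrete, the following is a minimal TorchVision-style sketch. Only the 512×512 resize and 448×448 crop sizes come from the paper; the random crop and horizontal flip for training and the center crop for evaluation are common-practice assumptions, not details confirmed here.

```python
# Hypothetical preprocessing sketch based on the sizes quoted above.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((512, 512)),      # resize to 512x512, as stated in the paper
    transforms.RandomCrop(448),         # assumed random crop for training
    transforms.RandomHorizontalFlip(),  # assumed standard augmentation, not stated
    transforms.ToTensor(),
])

eval_transform = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.CenterCrop(448),         # assumed deterministic crop for evaluation
    transforms.ToTensor(),
])
```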
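Similarly, the Experiment Setup row can be read as the hedged PyTorch sketch below. The learning rates, momentum, decay factor, step size, epoch count, and the augmentation probabilities and α values are quoted from the paper; the split of parameters into a new `fc` classifier versus pre-trained backbone weights, the ResNet-50 choice, and the use of `StepLR` are assumptions about how the reported schedule might be implemented.

```python
# Minimal sketch of the reported optimization schedule, assuming a PyTorch
# ResNet backbone whose final `fc` layer is the newly added classifier.
import torch
from torchvision import models

model = models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 200)  # e.g. CUB-200-2011 classes

pretrained_params = [p for n, p in model.named_parameters() if not n.startswith("fc")]
new_params = list(model.fc.parameters())

optimizer = torch.optim.SGD(
    [
        {"params": pretrained_params, "lr": 0.001},  # base LR for pre-trained weights
        {"params": new_params, "lr": 0.01},          # higher LR for new parameters
    ],
    momentum=0.9,
)
# Decay all learning rates by 0.1 every 80 epochs; training runs for 200 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=80, gamma=0.1)

# Augmentation hyperparameters quoted for the compared baseline methods.
aug_config = {
    "cutout": {"prob": 0.5},
    "mixup":  {"prob": 0.5, "alpha": 1.0},
    "cutmix": {"prob": 1.0, "alpha": 3.0},
}
```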