Meta-Reinforced Synthetic Data for One-Shot Fine-Grained Visual Recognition
Authors: Satoshi Tsutsui, Yanwei Fu, David Crandall
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The model is trained in an end-to-end manner, and our experiments demonstrate consistent improvement over baselines on one-shot fine-grained image classification benchmarks. |
| Researcher Affiliation | Academia | Satoshi Tsutsui, Indiana University, USA (stsutsui@indiana.edu); Yanwei Fu, Fudan University, China (yanweifu@fudan.edu.cn); David Crandall, Indiana University, USA (djcran@indiana.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Further implementation details are available as supplemental source code: http://vision.soic.indiana.edu/metairnet/ |
| Open Datasets | Yes | We use the fine-grained classification dataset of Caltech UCSD Birds (CUB) [32] for our main experiments, and another fine-grained dataset of North American Birds (NAB) [30] for secondary experiments. |
| Dataset Splits | Yes | For CUB, we use the same train/val/test split used in previous work [4], and for NAB we randomly split with a proportion of train:val:test = 2:1:1; see supplementary material for details. (A split sketch follows the table.) |
| Hardware Specification | No | The paper mentions 'an NVidia Titan Xp GPU' for a specific generation step during a pilot study (Section 3), but does not specify the hardware used for the main MetaIRNet experiments described in Section 5.1. |
| Software Dependencies | No | The paper mentions using the 'Adam' optimizer and 'ResNet18' for image classification, but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We set λp = 0.1 and λz = 0.1, and perform 500 gradient descent updates with the Adam [18] optimizer with learning rate 0.01 for z and 0.0005 for the fully connected layers, to produce scale and shift parameters of the batch normalization layers. We train F and C with Adam with a default learning rate of 0.001. (An optimizer-setup sketch follows the table.) |
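
The NAB split described in the "Dataset Splits" row is a random 2:1:1 partition. Below is a minimal sketch of such a split, assuming it is taken over class directories (the standard convention in few-shot benchmarks). The directory layout, seed, and function name are illustrative assumptions; the paper's supplementary material defines the actual split.

```python
# Sketch of a random 2:1:1 train/val/test split over class directories.
# Directory layout, seed, and function name are assumptions, not taken from
# the paper's released code.
import random
from pathlib import Path


def split_classes(dataset_root, seed=0):
    """Randomly split class directories into train/val/test at a 2:1:1 ratio."""
    classes = sorted(p.name for p in Path(dataset_root).iterdir() if p.is_dir())
    rng = random.Random(seed)
    rng.shuffle(classes)

    n_train = len(classes) // 2          # 2 of 4 parts
    n_val = len(classes) // 4            # 1 of 4 parts
    train = classes[:n_train]
    val = classes[n_train:n_train + n_val]
    test = classes[n_train + n_val:]     # remaining classes go to test
    return train, val, test


if __name__ == "__main__":
    train, val, test = split_classes("NAB/images")   # hypothetical dataset path
    print(len(train), len(val), len(test))
```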
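
The "Experiment Setup" row quotes two optimizer configurations: Adam with learning rates 0.01 (for the latent z) and 0.0005 (for the fully connected layers producing batch-norm scale/shift), run for 500 updates with λp = λz = 0.1, plus Adam at the default 0.001 for the fusion network F and classifier C. The PyTorch sketch below wires up those numbers; the toy generator, the forms of the perceptual and latent-regularization terms, and all tensor shapes are assumptions made for illustration, not the paper's implementation.

```python
# Minimal PyTorch sketch of the quoted optimizer/loss setup. Only the learning
# rates (0.01 for z, 0.0005 for the FC layers, 0.001 for F and C), the loss
# weights (lambda_p = lambda_z = 0.1), and the 500 update steps come from the
# quoted text; everything else is a placeholder.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGenerator(nn.Module):
    """Stand-in for the pre-trained GAN; the FC layer plays the role of the
    fully connected layers that produce batch-norm scale/shift parameters."""

    def __init__(self, z_dim=128, img_shape=(3, 64, 64)):
        super().__init__()
        self.img_shape = img_shape
        self.fc = nn.Linear(z_dim, img_shape[0] * img_shape[1] * img_shape[2])

    def forward(self, z):
        return torch.tanh(self.fc(z)).view(-1, *self.img_shape)


generator = ToyGenerator()
real_image = torch.rand(1, 3, 64, 64)        # the single support image (dummy data)
z = torch.randn(1, 128, requires_grad=True)  # latent code being optimized

lambda_p, lambda_z = 0.1, 0.1
optimizer = torch.optim.Adam([
    {"params": [z], "lr": 0.01},                          # lr for z (from the paper)
    {"params": generator.fc.parameters(), "lr": 0.0005},  # lr for the FC layers (from the paper)
])

for step in range(500):                       # 500 gradient descent updates
    fake = generator(z)
    pixel_term = F.l1_loss(fake, real_image)
    # Perceptual term: the paper uses a perceptual loss; a pooled-feature L1
    # is used here only as a runnable placeholder.
    perceptual_term = F.l1_loss(F.avg_pool2d(fake, 8), F.avg_pool2d(real_image, 8))
    latent_term = z.pow(2).mean()             # assumed form of the z regularizer
    loss = pixel_term + lambda_p * perceptual_term + lambda_z * latent_term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The fusion network (F in the paper) and classifier (C) are trained with Adam
# at the default learning rate of 0.001; dummy modules stand in for them here.
fusion_net, classifier = nn.Linear(10, 10), nn.Linear(10, 5)
clf_optimizer = torch.optim.Adam(
    list(fusion_net.parameters()) + list(classifier.parameters()), lr=0.001)
```

Using Adam parameter groups lets a single optimizer apply the two different learning rates (0.01 for z, 0.0005 for the FC layers) within the same 500-step loop, matching the quoted setup without needing two separate optimizers.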