Frustratingly Simple Few-Shot Object Detection
Authors: Xin Wang, Thomas Huang, Joseph Gonzalez, Trevor Darrell, Fisher Yu
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive comparisons with previous methods on the existing few-shot object detection benchmarks using PASCAL VOC and COCO, where our approach can obtain about 2–20 points improvement in all settings (Section 4.1). We then introduce a new benchmark on three datasets (PASCAL VOC, COCO and LVIS) with revised evaluation protocols to address the unreliability of previous benchmarks (Section 4.2). We also provide various ablation studies and visualizations in Section 4.3. |
| Researcher Affiliation | Academia | ¹EECS, UC Berkeley; ²EECS, University of Michigan. |
| Pseudocode | No | The paper describes algorithms and training schemes with textual explanations and diagrams (Figure 1, Figure 2) but does not include any formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | The code as well as the pretrained models are available at https://github.com/ucbdrive/few-shot-object-detection. |
| Open Datasets | Yes | We build new benchmarks on three datasets: PASCAL VOC, COCO and LVIS (Gupta et al., 2019). |
| Dataset Splits | No | The paper mentions using "the same data splits and training examples provided by Kang et al. (2019)" for some datasets and defines how classes are grouped for LVIS, but it does not explicitly provide the specific percentages or counts for train/validation/test splits within its own text. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) that would be needed to reproduce the experiment environment. |
| Experiment Setup | Yes | All models are trained using SGD with a minibatch size of 16, momentum of 0.9, and weight decay of 0.0001. A learning rate of 0.02 is used during base training and 0.001 during few-shot fine-tuning. |
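The optimizer settings quoted above can be sketched as a minimal pure-Python SGD update with momentum and L2 weight decay. This is only a toy illustration of the reported hyperparameters on a 1-D quadratic; the paper's experiments presumably use a deep-learning framework's built-in SGD, and the `sgd_step` helper below is hypothetical.

```python
# Hyperparameters as reported in the paper (minibatch size 16 omitted here,
# since this toy example has no data batching).
BASE_LR = 0.02        # learning rate during base training
FINETUNE_LR = 0.001   # learning rate during few-shot fine-tuning
MOMENTUM = 0.9
WEIGHT_DECAY = 1e-4

def sgd_step(w, grad, velocity, lr, momentum=MOMENTUM, weight_decay=WEIGHT_DECAY):
    """One SGD update with momentum; L2 weight decay is folded into the gradient."""
    g = grad + weight_decay * w          # L2 regularization term
    velocity = momentum * velocity + g   # momentum accumulation
    return w - lr * velocity, velocity

# Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(500):
    w, v = sgd_step(w, 2.0 * (w - 3.0), v, BASE_LR)
print(round(w, 3))
```

With the base-training learning rate the iterate converges close to the (slightly weight-decay-shifted) minimum near 3; swapping in `FINETUNE_LR` would take proportionally smaller steps, matching the paper's lower learning rate during few-shot fine-tuning.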