Few-shot Visual Learning with Contextual Memory and Fine-grained Calibration
Authors: Yuqing Ma, Wei Liu, Shihao Bai, Qingyu Zhang, Aishan Liu, Weimin Chen, Xianglong Liu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments on multiple benchmark datasets demonstrate the superiority of IPN, compared to a number of state-of-the-art approaches. (Section 3, Experiments) In this section, we evaluate our IPN with state-of-the-art few-shot approaches on widely used datasets. |
| Researcher Affiliation | Collaboration | (1) State Key Lab of Software Development Environment, Beihang University, China; (2) Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, China; (3) NetEase Fuxi AI Lab, Hangzhou, China |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | We employ the widely used datasets in prior studies, including the miniImageNet dataset [Vinyals et al., 2016] and the tieredImageNet dataset [Ren et al., 2018]. |
| Dataset Splits | Yes | The classes of tieredImageNet are grouped into 34 higher-level nodes based on the WordNet hierarchy [Deng et al., 2009], which are further partitioned into disjoint sets of training, testing, and validation nodes, ensuring a distinct distance between training and testing classes and thus making the classification more challenging. We use the validation set to select the training episodes with the best accuracy. |
| Hardware Specification | No | The paper mentions running experiments but does not provide any specific details about the hardware used (e.g., GPU model, CPU type, memory). |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify software versions for any libraries, frameworks, or programming languages (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | Standard data augmentations including random crop, left-right flip, and color jitter are applied in the training stage. The mini-batch size for all experiments is 20. The numbers of training iterations on miniImageNet and tieredImageNet are 100K and 200K, respectively. We use the Adam optimizer with an initial learning rate of 0.001, and reduce the learning rate by half every 15K and 30K iterations, respectively, on miniImageNet and tieredImageNet. The weight decay is set to 10^-6. When conducting fine-grained calibration at the local stage, the prediction reliability threshold τ0 is set to 1.5, and the number of nearest neighbors L is set to 3. |
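
The training settings quoted in the Experiment Setup row amount to a fairly standard optimization recipe. The sketch below shows one way they could be wired together; PyTorch itself, the 84x84 crop size and padding, the ColorJitter strengths, and the placeholder encoder and random batch are all assumptions not stated in the paper.

```python
import torch
from torch import nn, optim
from torchvision import transforms

# Augmentations quoted in the paper; crop size 84x84, padding, and jitter
# strengths are assumptions, as the excerpt gives no image size or magnitudes.
train_transform = transforms.Compose([
    transforms.RandomCrop(84, padding=8),   # "random crop"
    transforms.RandomHorizontalFlip(),      # "left-right flip"
    transforms.ColorJitter(0.4, 0.4, 0.4),  # "color jitter"
    transforms.ToTensor(),
])

# Placeholder encoder standing in for IPN; the real architecture is not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 64))
criterion = nn.CrossEntropyLoss()

optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)
# Halve the learning rate every 15K iterations on miniImageNet (30K on tieredImageNet).
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=15_000, gamma=0.5)

num_iterations = 100_000  # 100K for miniImageNet, 200K for tieredImageNet
for iteration in range(num_iterations):
    # Placeholder batch; per the paper, each step uses a mini-batch of 20.
    images = torch.randn(20, 3, 84, 84)
    labels = torch.randint(0, 64, (20,))
    loss = criterion(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # stepped per iteration, so the halving occurs at 15K, 30K, ...
```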
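
The excerpt also fixes two calibration hyperparameters, a prediction reliability threshold τ0 = 1.5 and L = 3 nearest neighbors, without describing the calibration procedure itself. The following is not the paper's fine-grained local calibration; it only illustrates, generically, how such a threshold and neighbor count could gate a nearest-neighbor fallback. The reliability score used here (a top-1 to top-2 probability ratio) is an assumption.

```python
import torch
import torch.nn.functional as F

def knn_calibrate(query_emb, query_logits, support_emb, support_labels,
                  num_classes, tau0=1.5, L=3):
    """Recalibrate low-reliability predictions with an L-nearest-neighbor vote.

    Illustrative only: the paper's actual fine-grained calibration is not
    detailed in the excerpt, so this shows one generic way a reliability
    threshold tau0 and L nearest neighbors could interact.
    """
    probs = F.softmax(query_logits, dim=-1)
    # Assumed reliability score: ratio of the top-1 to the top-2 probability.
    top2 = probs.topk(2, dim=-1).values
    reliability = top2[:, 0] / top2[:, 1].clamp_min(1e-8)

    preds = probs.argmax(dim=-1)
    dists = torch.cdist(query_emb, support_emb)             # pairwise Euclidean distances
    nn_idx = dists.topk(L, largest=False, dim=-1).indices   # L closest support samples
    nn_labels = support_labels[nn_idx]                      # shape (num_query, L)
    votes = F.one_hot(nn_labels, num_classes).sum(dim=1).argmax(dim=-1)

    # Keep confident predictions; fall back to the neighbor vote otherwise.
    return torch.where(reliability >= tau0, preds, votes)
```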