Multi-attention Meta Learning for Few-shot Fine-grained Image Recognition
Authors: Yaohui Zhu, Chenlong Liu, Shuqiang Jiang
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally analyze the different components of our method, and experimental results on four benchmark datasets demonstrate the effectiveness and superiority of our method. |
| Researcher Affiliation | Academia | Yaohui Zhu (1,2), Chenlong Liu (1,2), Shuqiang Jiang (1,2); (1) Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China; (2) University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper describes the proposed method using text and mathematical equations, but it does not include any formally labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions implementing "five compared methods (i.e., Matching Net, Prototypical Net, Relation Net, MAML, adaCNN) with the corresponding public code," but it does not state that the code for their proposed MattML method is open source or provide a link to it. |
| Open Datasets | Yes | Our experiments are conducted on four fine-grained benchmark datasets (i.e., CUB Birds [Wah et al., 2011], Stanford Dogs [Khosla et al., 2011], Stanford Cars [Krause et al., 2013], FGVC Aircraft [Maji et al., 2013]). |
| Dataset Splits | Yes | Sampling from training, validation and test data, respectively, the training, validation and test tasks have the same forms but with disjoint label space. The detailed splits of training, validation and test categories and the number of categories/images in each dataset are presented in Table 1. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer [Kingma and Ba, 2015]" but does not specify versions for key software components such as the programming language, deep learning framework (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | Yes | We apply standard data augmentation, which includes random crop, left-right flip, and color jitter at the meta-training stage in all implemented experiments. The batch size of task is set to 4, and each task has the same settings with the above test. We use Adam optimizer [Kingma and Ba, 2015] with initial learning rate 0.001. The total iterations are 80,000 and the learning rate is changed to 1/2 after each 20,000 iterations. |
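
The reported setup maps directly onto a standard training configuration. Below is a minimal sketch of those settings, assuming a PyTorch implementation (the paper does not name its framework); the backbone, the 5-way episodic sampler, the 84-pixel crop size, and the jitter strengths are placeholder assumptions for illustration, not the authors' MattML code.

```python
# Minimal sketch of the reported training settings, assuming PyTorch.
# The model and episodic task sampler are placeholders, not MattML.
import torch
import torch.nn as nn
from torchvision import transforms

# Meta-training augmentation reported in the paper:
# random crop, left-right flip, and color jitter.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(84),   # crop size is an assumption
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),  # jitter strengths are assumptions
    transforms.ToTensor(),
])  # in a real pipeline this would be applied by the dataset loader

# Placeholder backbone and 5-way classification head.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 5),
)

# Adam with initial learning rate 0.001; the rate is halved after
# every 20,000 iterations over 80,000 total iterations.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20_000, gamma=0.5)

TASKS_PER_BATCH = 4        # "The batch size of task is set to 4"
TOTAL_ITERATIONS = 80_000

def sample_task_batch():
    """Placeholder for the episodic N-way K-shot task sampler."""
    images = torch.randn(TASKS_PER_BATCH * 5, 3, 84, 84)
    labels = torch.randint(0, 5, (TASKS_PER_BATCH * 5,))
    return images, labels

for iteration in range(TOTAL_ITERATIONS):
    images, labels = sample_task_batch()
    loss = nn.functional.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()       # halves the learning rate at 20k, 40k, 60k
```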