One-Shot Learning for Long-Tail Visual Relation Detection
Authors: Weitao Wang, Meng Wang, Sen Wang, Guodong Long, Lina Yao, Guilin Qi, Yang Chen
AAAI 2020, pp. 12225–12232
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on two newly constructed datasets show that our model significantly improved the performance of the two tasks PredCls and SGCls by 2.8% to 12.2% compared with state-of-the-art baselines." |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Southeast University, Nanjing, China 2MOE Key Laboratory of Computer Network and Information Integration, China 3The University of Queensland, Brisbane, Australia 4The University of Technology Sydney, Sydney, Australia 5The University of New South Wales, Sydney, Australia |
| Pseudocode | No | The paper describes the model architecture and method using text and mathematical equations, but it does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | Yes | Our codes and datasets are available at https://github.com/Witt-Wang/oneshot. |
| Open Datasets | Yes | We constructed two new datasets for the task of one-shot visual relationship detection and compared our model with state-of-the-art approaches. Our codes and datasets are available at https://github.com/Witt-Wang/oneshot. |
| Dataset Splits | No | "Following the standard one-shot learning settings (Vinyals et al. 2016; Ravi and Larochelle 2017), we split the visual relation dataset D into two sets Ttrain and Ttest based on different predicates... During the episodic learning, a small subset of visual relations containing N predicates will be sampled from Ttrain to construct a support set S and a query set Q." The paper does not specify a distinct validation set, in addition to the train and test splits, for hyperparameter tuning. |
| Hardware Specification | No | "All our code was implemented in PyTorch (Paszke et al. 2017) and ran with 4 GPUs." This statement does not specify the GPU model or type, the CPU, or any other hardware component. |
| Software Dependencies | No | "All our code was implemented in PyTorch (Paszke et al. 2017)." This mentions the software but does not specify its version number. |
| Experiment Setup | Yes | "Our model was trained by Adam optimizer with an initial learning rate of 1×10⁻³ and weight decay of 10⁻⁶. The batch sizes for training were set to be 10 and 5 for 5-way and 10-way experiments, respectively." |
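The episodic split described under Dataset Splits follows the standard one-shot setting: each episode samples N predicates from Ttrain, one labelled support instance per predicate plus held-out query instances. The sketch below is a minimal, hypothetical illustration of that sampling scheme, not the authors' released code (their implementation is at the GitHub link above); the data structure and function name are assumptions.

```python
import random

# Hypothetical N-way one-shot episode sampler (a sketch of the setting the
# paper cites, not the authors' implementation).
# relations_by_predicate maps each predicate to its list of relation instances.
def sample_episode(relations_by_predicate, n_way=5, n_query=1, seed=None):
    rng = random.Random(seed)
    predicates = rng.sample(sorted(relations_by_predicate), n_way)
    support, query = [], []
    for p in predicates:
        examples = rng.sample(relations_by_predicate[p], 1 + n_query)
        support.append((p, examples[0]))        # one labelled example per predicate
        query.extend((p, ex) for ex in examples[1:])
    return support, query

# Toy data: 6 predicates with 3 relation instances each.
data = {f"pred{i}": [f"rel{i}_{j}" for j in range(3)] for i in range(6)}
support, query = sample_episode(data, n_way=5, n_query=1, seed=0)
print(len(support), len(query))  # 5 support pairs, 5 query pairs
```

The reported training settings (Adam, learning rate 1×10⁻³, weight decay 10⁻⁶, batch size 10 or 5 for 5-way and 10-way experiments) would then apply per episode.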