HCVRD: A Benchmark for Large-Scale Human-Centered Visual Relationship Detection
Authors: Bohan Zhuang, Qi Wu, Chunhua Shen, Ian Reid, Anton van den Hengel
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'We propose a webly-supervised approach to these problems and demonstrate that the proposed model provides a strong baseline on our HCVRD dataset.' See also Table 2: 'Evaluation of different methods on the proposed dataset.' |
| Researcher Affiliation | Academia | Bohan Zhuang, Qi Wu, Chunhua Shen, Ian Reid, Anton van den Hengel Australian Centre for Robotic Vision, The University of Adelaide, Australia |
| Pseudocode | No | The paper describes the model architecture and components in text and diagrams but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'Our dataset comprises two parts, publicly available separately or together from Hiddenforblindreview.' This refers to dataset availability, not to source code for the proposed method; no explicit statement about releasing the code was found. |
| Open Datasets | Yes | Our dataset comprises two parts, publicly available separately or together from Hiddenforblindreview. The main part comprises a carefully curated set harvested from the large Visual Genome dataset (Krishna et al. 2017). |
| Dataset Splits | Yes | 'We use 31,586 images for training and construct two test splits. The first test split contains 10,000 images where all the relationships occur in the training set. Another test split includes all the zeroshot relationships, i.e. relationships in this split are never occurred in the training split.' |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'VGG-16', 'Faster-RCNN', 'lexical analysis toolkit', and 'GloVe', but it does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | 'We set the feature embedding size in the metric learning module as 256. For training efficiency, we initialize the feature extraction module with the pre-trained VGG-16. We then pretrained the detection module and fix it while training the metric learning module. The learning rate is initialized to 0.0001 and decreased by a factor of 10 after every 5 epochs.' |
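The Experiment Setup row quotes a step learning-rate schedule (initial rate 0.0001, divided by 10 every 5 epochs). A minimal sketch of that schedule, as one plausible reading of the quoted text rather than the authors' actual training code:

```python
def learning_rate(epoch, base_lr=1e-4, decay=0.1, step=5):
    """Step schedule: start at base_lr and multiply by `decay`
    once every `step` epochs, as described in the paper's setup."""
    return base_lr * decay ** (epoch // step)

# Epochs 0-4 use 1e-4, epochs 5-9 use 1e-5, and so on.
```

Framework schedulers (e.g. a step decay with step size 5 and factor 0.1) express the same rule; the standalone function above only makes the quoted hyperparameters concrete.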