Object Detection Meets Knowledge Graphs

Authors: Yuan Fang, Kingsley Kuan, Jie Lin, Cheston Tan, Vijay Chandrasekhar

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "empirical evaluation on two benchmark datasets show that our approach can significantly increase recall by up to 6.3 points without compromising mean average precision, when compared to the state-of-the-art baseline."
Researcher Affiliation | Academia | Yuan Fang, Kingsley Kuan, Jie Lin, Cheston Tan, and Vijay Chandrasekhar, Institute for Infocomm Research, A*STAR, Singapore (yfang@i2r.a-star.edu.sg, kingsley.kuan@gmail.com, {lin-j,cheston-tan,vijay}@i2r.a-star.edu.sg)
Pseudocode | No | The paper describes mathematical formulations and optimization steps (Eq. 4-7) but does not include a formally structured pseudocode or algorithm block. (A generic random-walk-with-restart sketch is given after this table for orientation.)
Open Source Code | No | The paper mentions using a 'public Python Caffe implementation' with a GitHub link, but this refers to a third-party baseline framework, not the authors' own source code for their proposed method.
Open Datasets | Yes | "We use benchmark data MSCOCO15 [Lin et al., 2014] and PASCAL07 [Everingham et al., 2010], summarized in Table 1."
Dataset Splits | Yes | "For MSCOCO15, we combine their training and validation sets for training the baseline, except for a subset of 5000 images named minival. We further split minival into 1000 and 4000 images, named minival-1k and minival-4k respectively. We use minival-1k to choose hyperparameter for our approach, and minival-4k for offline testing. ... For PASCAL07, we use their training set for training the baseline, validation set for choosing our hyperparameter, and test set for evaluation." (A sketch of this split appears after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, memory specifications).
Software Dependencies | No | The paper mentions a 'Python Caffe implementation' but does not specify version numbers for Caffe, Python, or any other critical libraries, which are necessary for full reproducibility.
Experiment Setup | Yes | "Models are trained using stochastic gradient descent with a momentum of 0.9, a minibatch size of 2 and a weight decay of 5e-4. ... We use a learning rate of 1e-3 for the first 350K/50K iterations on MSCOCO15/PASCAL07, followed by 1e-4 for another 140K/10K iterations. ... On the validation data, we choose the hyperparameter ϵ in Eq. (4) from {0.1, 0.25, 0.5, 0.75, 0.9}. ... We set the random walk restarting probability α = 0.15, a typical value known to be stable [Fang et al., 2013]." (These values are collected into a configuration sketch after the table.)
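
The Pseudocode row notes that the optimization around Eq. (4)-(7) is given only as prose and formulas. Because the Experiment Setup row quotes a random walk restarting probability α = 0.15, the following is a generic random-walk-with-restart iteration, included purely for orientation. It is not the authors' algorithm: the transition matrix K, the uniform restart vector, and the convergence tolerance are illustrative assumptions.

```python
import numpy as np

def random_walk_with_restart(K, alpha=0.15, tol=1e-6, max_iter=1000):
    """Generic random walk with restart over a column-stochastic transition
    matrix K. Returns a relevance vector over the n nodes.
    Illustrative sketch only; not the paper's Eq. (4)-(7)."""
    n = K.shape[0]
    # Uniform restart distribution; a real pipeline would restart at seed nodes.
    restart = np.full(n, 1.0 / n)
    r = restart.copy()
    for _ in range(max_iter):
        r_next = (1 - alpha) * (K @ r) + alpha * restart
        if np.linalg.norm(r_next - r, 1) < tol:
            break
        r = r_next
    return r
```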
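The MSCOCO15 split described in the Dataset Splits row (5000 held-out images, further divided into 1000 and 4000) could be reproduced along the lines below. This is a minimal sketch under assumptions the paper does not state: the function name, the fixed seed, and the shuffled-ID strategy are all hypothetical.

```python
import random

def make_minival_splits(image_ids, seed=0):
    """Hold out 5000 images as 'minival', then split it into minival-1k
    (hyperparameter selection) and minival-4k (offline testing).
    Seed and shuffling strategy are assumptions, not from the paper."""
    rng = random.Random(seed)
    ids = sorted(image_ids)   # deterministic starting order
    rng.shuffle(ids)
    minival, train_pool = ids[:5000], ids[5000:]
    minival_1k, minival_4k = minival[:1000], minival[1000:]
    return train_pool, minival_1k, minival_4k
```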
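For convenience, the quoted training settings can be collected into a single configuration. The dictionary layout and the helper below are illustrative (the authors used a Caffe solver whose prototxt is not reproduced in the paper), but the numeric values are exactly those quoted in the Experiment Setup row.

```python
# Training hyperparameters quoted in the Experiment Setup row.
SOLVER = {
    "momentum": 0.9,
    "batch_size": 2,
    "weight_decay": 5e-4,
    "base_lr": 1e-3,
    "reduced_lr": 1e-4,
    # Iterations at base_lr / reduced_lr, per dataset.
    "schedule": {"MSCOCO15": (350_000, 140_000), "PASCAL07": (50_000, 10_000)},
}

EPSILON_GRID = [0.1, 0.25, 0.5, 0.75, 0.9]  # hyperparameter eps in Eq. (4)
ALPHA_RESTART = 0.15                        # random walk restart probability

def learning_rate(iteration, dataset="MSCOCO15"):
    """Step learning-rate schedule implied by the quoted setup."""
    first_phase, _ = SOLVER["schedule"][dataset]
    return SOLVER["base_lr"] if iteration < first_phase else SOLVER["reduced_lr"]
```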