Learning Geographical Hierarchy Features for Social Image Location Prediction

Authors: Xiaoming Zhang, Xia Hu, Zhoujun Li

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results demonstrate the superiority of our model for image location prediction. In this section, we conduct experiments to assess the effectiveness of GH-BDBN. |
| Researcher Affiliation | Academia | State Key Laboratory of Software Development Environment, Beihang University, China; Department of Computer Science and Engineering, Texas A&M University, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | We use the MediaEval 2012 dataset [Rae and Kelm, 2012], a community-driven benchmark run by the MediaEval organizing committee, to evaluate our model. |
| Dataset Splits | No | The paper states "We split the dataset and use 80% for training and 20% for testing," but does not explicitly mention a validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as CPU/GPU models or memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions software such as Word2Vec and a GMM implementation, but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | For BDBN, there are three levels for the vision-specific DBN, with 1296, 1000, and 800 units respectively. Since text is closer to the learned latent feature level, the text-specific DBN uses two levels with 2000 and 1000 units respectively. For h_u and h_v, the numbers of units are 1500 and 1000 respectively. (See the configuration sketch after this table.) |
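
The layer widths in the Experiment Setup row and the 80/20 split noted under Dataset Splits can be summarised in a short configuration sketch. Only those unit counts and the split ratio come from the paper; everything else here (the use of NumPy, the Gaussian weight initialisation, the function names, and the assumption that h_u and h_v sit on top of the concatenated branch outputs) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

# Unit counts reported in the paper's experiment setup.
VISION_DBN_UNITS = [1296, 1000, 800]   # three-level vision-specific DBN
TEXT_DBN_UNITS = [2000, 1000]          # two-level text-specific DBN
JOINT_UNITS = [1500, 1000]             # h_u and h_v (stacking order assumed)


def init_dbn_weights(layer_units, rng, scale=0.01):
    """Return one (weight matrix, hidden bias) pair per adjacent layer pair."""
    layers = []
    for n_visible, n_hidden in zip(layer_units[:-1], layer_units[1:]):
        weights = rng.normal(0.0, scale, size=(n_visible, n_hidden))
        hidden_bias = np.zeros(n_hidden)
        layers.append((weights, hidden_bias))
    return layers


def train_test_split_indices(n_samples, train_fraction=0.8, seed=0):
    """80/20 split of sample indices, as reported in the paper (no validation split)."""
    order = np.random.default_rng(seed).permutation(n_samples)
    cut = int(train_fraction * n_samples)
    return order[:cut], order[cut:]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vision_dbn = init_dbn_weights(VISION_DBN_UNITS, rng)
    text_dbn = init_dbn_weights(TEXT_DBN_UNITS, rng)
    # Assumption: h_u and h_v are stacked on the concatenated top-level
    # outputs of the two branches (800 + 1000 = 1800 visible units).
    joint_input = VISION_DBN_UNITS[-1] + TEXT_DBN_UNITS[-1]
    joint_dbn = init_dbn_weights([joint_input] + JOINT_UNITS, rng)
    train_idx, test_idx = train_test_split_indices(10_000)
    print(len(train_idx), len(test_idx))  # -> 8000 2000
```

Running the sketch only allocates weight matrices matching the reported layer widths and prints the 8000/2000 split sizes; it does not reproduce the paper's training procedure.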