Probabilistic Attributed Hashing

Authors: Mingdong Ou, Peng Cui, Jun Wang, Fei Wang, Wenwu Zhu

AAAI 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments and comparison study are conducted on two public datasets, i.e., DBLP and NUS-WIDE. The results clearly demonstrate that the proposed PAH method substantially outperforms the peer methods." |
| Researcher Affiliation | Collaboration | Mingdong Ou¹, Peng Cui¹, Jun Wang², Fei Wang³, Wenwu Zhu¹. ¹Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China. ²Data Science, Alibaba Group, Seattle, WA, USA. ³Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, USA. |
| Pseudocode | Yes | "Figure 1: The graphical model of probabilistic attributed hashing. Algorithm 1 presents the corresponding generative process." "Algorithm 1 Probabilistic Attributed Hashing" |
| Open Source Code | No | The paper mentions that "the codes of baselines are provided by their authors" but does not provide any link or statement about the availability of their own source code for PAH. |
| Open Datasets | Yes | "We perform the experiments and comparison using two popular benchmark datasets, i.e., the DBLP and the NUS-WIDE dataset (Chua et al. 2009)." DBLP: http://www.informatik.uni-trier.de/~ley/db/ |
| Dataset Splits | No | The paper mentions training and test sets but does not specify a separate validation split or an explicit validation methodology. |
| Hardware Specification | Yes | "Finally, all the algorithms are implemented using Matlab (the codes of baselines are provided by their authors), and run experiments on a machine running Windows Server 2008 with 12 2.4GHz cores and 192GB memory." |
| Software Dependencies | No | The paper mentions "Matlab" but does not specify a version number or other key software dependencies with specific versions. |
| Experiment Setup | Yes | "For setting the hyper-parameters in PAH, we use grid search to get the optimal hyper-parameters, and get α = {0.1}_1^2, β = {0.01}_1^{L_a}, γ = {0}_1^{L_f}. Note that the parameters σ, σ₀ weigh the contributions from the feature data. Hence we get σ = σ₀ = 1.0 for the DBLP dataset, and σ = σ₀ = 10^-5 for the NUS-WIDE dataset." |
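The paper states only that grid search was used to select the hyper-parameters; it does not describe the search procedure or the candidate grids. A minimal sketch of a generic grid search is shown below. The `train_pah` and `evaluate` callables and the candidate value grids are hypothetical placeholders, not taken from the paper.

```python
# Hedged sketch: generic exhaustive grid search, as the paper reports using
# grid search without detailing it. The training function, scoring function,
# and candidate grids below are assumptions for illustration only.
from itertools import product

def grid_search(train_pah, evaluate, grids):
    """Try every combination in `grids`; return the best setting and score."""
    best_score, best_params = float("-inf"), None
    keys = list(grids)
    for values in product(*(grids[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train_pah(**params)   # hypothetical training entry point
        score = evaluate(model)       # hypothetical validation metric
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Candidate grids echoing the reported optima (alpha = 0.1, beta = 0.01,
# gamma = 0, sigma in {1.0, 1e-5}); the actual search ranges are not given.
grids = {
    "alpha": [0.01, 0.1, 1.0],
    "beta":  [0.001, 0.01, 0.1],
    "gamma": [0.0, 0.01],
    "sigma": [1e-5, 1.0],
}
```

Exhaustive search of this kind is feasible here because only a handful of scalar hyper-parameters are tuned; the cost grows multiplicatively with each added grid dimension.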