Large-Scale Supervised Multimodal Hashing with Semantic Correlation Maximization
Authors: Dongqing Zhang, Wu-Jun Li
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world datasets show that SCM can significantly outperform the state-of-the-art SMH methods, in terms of both accuracy and scalability. |
| Researcher Affiliation | Academia | Dongqing Zhang and Wu-Jun Li: Shanghai Key Laboratory of Scalable Computing and Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, China |
| Pseudocode | Yes | Algorithm 1 Sequential Learning Algorithm for SCM. (A generic, hedged sketch of a sequential bit-learning loop is given below the table.) |
| Open Source Code | No | The paper states, "The source codes of all the other methods are kindly provided by the authors," referring to the baseline methods, but provides no statement or link about open-sourcing the code for the proposed SCM method. |
| Open Datasets | Yes | NUS-WIDE (Chua et al. 2009) is a public image dataset... The Wiki dataset (Rasiwasia et al. 2010) is crawled from Wikipedia's featured articles. |
| Dataset Splits | No | The paper specifies training and query (test) set splits (e.g., "99% of the data as the training set... and the remaining 1% to form the query set") but does not explicitly mention a separate validation split. |
| Hardware Specification | Yes | All our experiments are conducted on a workstation with Intel(R) Xeon(R) CPU X7560@2.27GHz and 64 GB RAM. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | Yes | To investigate the scalability of different methods, we evaluate the training time of different methods on NUS-WIDE dataset by varying the size of training set from 500 to 20,000. The code length is fixed to 16 in this experiment. (An illustrative harness for this setup, including the 99%/1% split, is sketched below the table.) |
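
The pseudocode row refers to the paper's Algorithm 1 (Sequential Learning Algorithm for SCM), whose exact update rules are not reproduced in this report. As orientation only, the following Python sketch shows what a generic sequential, bit-by-bit hash-learning loop can look like: each bit's linear projection is taken as the top eigenvector of a residual-weighted covariance, and the residual similarity is updated after each bit. The function name, the residual update, and the eigenvector step are illustrative assumptions, not the authors' method.

```python
import numpy as np

def learn_sequential_hash(X, S, n_bits=16):
    """Generic bit-by-bit (sequential) hash learning sketch.

    NOT the exact update rules of Algorithm 1 in the paper; this only
    illustrates the sequential-residual pattern.
    X : (n, d) feature matrix, S : (n, n) symmetric semantic similarity.
    """
    n, d = X.shape
    W = np.zeros((d, n_bits))
    R = S.astype(float).copy()             # residual similarity to fit
    for t in range(n_bits):
        C = X.T @ R @ X                     # residual-weighted covariance
        _, vecs = np.linalg.eigh(C)         # symmetric eigensolver
        w = vecs[:, -1]                     # top eigenvector -> projection
        W[:, t] = w
        b = np.sign(X @ w)                  # +/-1 codes for this bit
        R = R - np.outer(b, b) / n_bits     # remove this bit's contribution
    return W
```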
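
For the dataset-split and experiment-setup rows, the sketch below mirrors the reported protocol in a hypothetical harness: a 99%/1% training/query split with no validation set, and wall-clock training-time measurements at increasing training-set sizes with the code length fixed to 16 bits. `split_train_query`, `time_training`, `train_fn`, and the intermediate training sizes are placeholders and assumptions; the paper only states the range 500 to 20,000 and provides no reference implementation.

```python
import time
import numpy as np

def split_train_query(X, query_fraction=0.01, seed=0):
    """Hold out 1% of samples as the query set, keep 99% for training
    (the split ratio reported in the paper; no validation split is used)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_query = int(round(len(X) * query_fraction))
    return X[idx[n_query:]], X[idx[:n_query]]        # (train, query)

def time_training(train_fn, X_train,
                  sizes=(500, 2000, 5000, 10000, 20000), n_bits=16):
    """Time a hashing trainer at growing training-set sizes, 16-bit codes.

    `train_fn` is a placeholder for an SCM (or baseline) training routine;
    the intermediate sizes are illustrative, not taken from the paper.
    """
    timings = {}
    for n in sizes:
        start = time.perf_counter()
        train_fn(X_train[:n], n_bits=n_bits)
        timings[n] = time.perf_counter() - start
    return timings
```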