Adaptively Unified Semi-supervised Learning for Cross-Modal Retrieval

Authors: Liang Zhang, Bingpeng Ma, Jianfeng He, Guorong Li, Qingming Huang, Qi Tian

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on Wiki, Pascal and NUS-WIDE datasets show that the proposed method outperforms the state-of-the-art methods even when we set 20% samples without class labels."
Researcher Affiliation | Academia | 1 University of Chinese Academy of Sciences, Beijing, 100049, China; 2 Key Lab of Intell. Info. Process., Inst. of Comput. Tech., CAS, Beijing, 100190, China; 3 Key Laboratory of Big Data Mining and Knowledge Management, CAS, Beijing, 100190, China; 4 Department of Computer Science, University of Texas at San Antonio, TX, 78249, USA
Pseudocode | No | The paper describes the optimization algorithm using mathematical equations, but it does not present it in a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | "Wiki dataset is collected from Wikipedia feature articles [Rasiwasia et al., 2010]. Pascal dataset consists of 5,011/4,952 (training/testing) image-tag pairs [Everingham et al., 2010]. NUS-WIDE dataset consists of 40,834/27,159 (training/testing) image-tag pairs, which are pruned from the training-test split of the NUS dataset [Chua et al., 2009]."
Dataset Splits | Yes | "On this dataset, we randomly select 2,000 pairs of the data for training and 866 pairs for testing." Parameters "are set by 5-fold cross validation on the training set."
Hardware Specification | No | The paper does not describe the specific hardware (e.g., CPU, GPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions models and tools such as word2vec, CNN features (Caffe), and t-SNE, but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | "After cross validation, the parameters s and λ of AUSL are set to 2 and 0.1 in all the experiments. The dimension of the common subspace is set to 10, 20 and 10 for Wiki, Pascal and NUS-WIDE, respectively."
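The parameter-selection protocol quoted above (s and λ chosen by 5-fold cross validation on the training set) can be sketched as a plain grid search. This is not the authors' code: `retrieval_score` is a hypothetical stand-in for training AUSL on the train folds and scoring retrieval quality (e.g., MAP) on the held-out fold, and the grids are illustrative.

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Split range(n) into k disjoint, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def retrieval_score(train_idx, val_idx, s, lam):
    # Placeholder for "train AUSL on train_idx, evaluate MAP on val_idx".
    # A dummy surrogate peaked at s=2, lam=0.1 so the sketch runs end to end.
    return -((s - 2) ** 2 + (lam - 0.1) ** 2)

def select_params(n_train, s_grid, lam_grid, k=5):
    """Return the (s, lambda) pair with the best mean k-fold score."""
    folds = kfold_indices(n_train, k)
    best, best_score = None, float("-inf")
    for s in s_grid:
        for lam in lam_grid:
            scores = []
            for i in range(k):
                val = folds[i]
                train = [j for f in folds[:i] + folds[i + 1:] for j in f]
                scores.append(retrieval_score(train, val, s, lam))
            mean = sum(scores) / k
            if mean > best_score:
                best, best_score = (s, lam), mean
    return best

# 2,000 training pairs, as in the Wiki split quoted above.
print(select_params(2000, s_grid=[1, 2, 4, 8], lam_grid=[0.01, 0.1, 1.0]))
```

With the dummy scorer the search recovers (2, 0.1), matching the values the paper reports fixing after cross validation.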