Online Learning to Rank for Content-Based Image Retrieval

Authors: Ji Wan, Pengcheng Wu, Steven C. H. Hoi, Peilin Zhao, Xingyu Gao, Dayong Wang, Yongdong Zhang, Jintao Li

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct an extensive set of experiments, in which encouraging results show that our technique is effective, scalable and promising for large-scale CBIR."
Researcher Affiliation | Academia | Ji Wan (1,2,3), Pengcheng Wu (2), Steven C. H. Hoi (2), Peilin Zhao (4), Xingyu Gao (1,2,3), Dayong Wang (5), Yongdong Zhang (1), Jintao Li (1). Affiliations: (1) Key Laboratory of Intelligent Information Processing of CAS, ICT, CAS, China; (2) Singapore Management University; (3) University of the Chinese Academy of Sciences; (4) Institute for Infocomm Research, A*STAR, Singapore; (5) Michigan State University, MI, USA.
Pseudocode | No | The paper describes its algorithms (OPR, OPAR, OGDR) using mathematical equations and textual descriptions but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | No explicit statement or link indicates that the authors have publicly released the source code for the methodology described in this paper.
Open Datasets | Yes | "Table 1 shows a list of image databases in our testbed." The listed databases include Holiday, Caltech101, ImageCLEF, Corel, and ImageCLEF-Flickr.
Dataset Splits | Yes | "For each database, we randomly split it into five folds, in which one fold is used for test, one is for validation, and the rest are for training." and "For validation and test data sets, we randomly choose 300 validation images and 150 test images from each fold."
Hardware Specification | No | The paper discusses "CPU time cost" and running times but does not provide specific hardware details such as CPU/GPU models, memory, or processor types used for the experiments.
Software Dependencies | No | The paper mentions various algorithms and machine learning techniques but does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, or specific library versions).
Experiment Setup | No | The paper states that parameters were chosen via cross-validation ("To conduct a fair evaluation, we choose the parameters of different algorithms via the same cross validation scheme in all the experiments.") but does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations.
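Since the paper gives its online learning-to-rank methods (OPR, OPAR, OGDR) only as equations and text, the following is a minimal illustrative sketch of the general technique family, not the authors' exact algorithm: one online-gradient-descent step on a pairwise hinge loss over a bilinear similarity q^T W p. The function name, feature setup, and learning rate are assumptions made for illustration.

```python
import numpy as np

def ogd_rank_update(W, q, p_pos, p_neg, eta=0.1):
    """One online-gradient-descent step on the pairwise hinge loss
    L = max(0, 1 - q^T W p_pos + q^T W p_neg).
    Updates the bilinear similarity matrix W only when the loss is active,
    i.e. when the relevant image does not beat the irrelevant one by margin 1."""
    margin = q @ W @ p_pos - q @ W @ p_neg
    if margin < 1.0:
        # Gradient of the active hinge loss w.r.t. W is -q (p_pos - p_neg)^T,
        # so a descent step adds eta * q (p_pos - p_neg)^T.
        W = W + eta * np.outer(q, p_pos - p_neg)
    return W

# Toy stream of (query, relevant image, irrelevant image) feature triplets.
rng = np.random.default_rng(0)
d = 8
W = np.eye(d)  # start from the identity (plain dot-product similarity)
for _ in range(200):
    q = rng.normal(size=d)
    p_pos = q + 0.1 * rng.normal(size=d)  # relevant: near the query
    p_neg = rng.normal(size=d)            # irrelevant: random
    W = ogd_rank_update(W, q, p_pos, p_neg)
```

After processing the stream, W scores relevant images above irrelevant ones for most held-out triplets; a passive-aggressive variant would instead choose the step size per triplet from the loss itself.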