Semi-transductive Learning for Generalized Zero-Shot Sketch-Based Image Retrieval

Authors: Ce Ge, Jingyu Wang, Qi Qi, Haifeng Sun, Tong Xu, Jianxin Liao

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on two large-scale benchmarks with four evaluation metrics. The results show that our method is superior to the state-of-the-art competitors in the challenging GZS-SBIR task.
Researcher Affiliation | Collaboration | State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China; {nwlgc, wangjingyu, qiqi8266, hfsun}@bupt.edu.cn, xutong@ebupt.com, jxlbupt@gmail.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We employ two widely used SBIR datasets: Sketchy (Sangkloy et al. 2016) and TU-Berlin (Eitz, Hays, and Alexa 2012).
Dataset Splits | No | The paper describes splitting the datasets into seen and unseen classes and the construction of the generalized test set, but it does not specify training/validation split percentages, nor does it give explicit details about a validation set beyond implying its use for early stopping.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments, such as CPU/GPU models or memory specifications.
Software Dependencies | Yes | The whole model is implemented on top of PyTorch (Paszke et al. 2019).
Experiment Setup | Yes | The feature dimension of the embedding space is set to 1024-D. The weighting factors for each dataset are determined by grid search with ω1 ∈ [0.01, 1] and ω2 ∈ [0.001, 10]. For Sketchy, ω1 = 0.5 and ω2 = 0.1; for TU-Berlin, ω1 = 0.5 and ω2 = 0.5. The margin hyperparameters in Lrank (Eq. 3) and Ltrans (Eq. 11) are empirically set to 0.1 and δ = 0.01, respectively. The whole model is implemented on top of PyTorch (Paszke et al. 2019) and is trained end-to-end by stochastic gradient descent with a learning rate of 1e-3 and a mini-batch size of 20. The early stopping strategy is adopted to combat overfitting.
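For context on the Experiment Setup row, the following is a minimal, hypothetical PyTorch sketch of a training loop wired up with the reported hyperparameters (SGD, learning rate 1e-3, mini-batch size 20, 1024-D embedding, the Sketchy weights ω1 = 0.5 and ω2 = 0.1, and early stopping). The encoder, the random data, the placeholder loss terms, and the patience value are assumptions for illustration only; the paper's actual ranking and transductive losses are not reproduced here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# From the paper: 1024-D embedding, SGD with lr = 1e-3, mini-batch size 20,
# loss weights ω1 = 0.5 / ω2 = 0.1 (Sketchy), early stopping.
# Everything else below (the tiny model, random data, placeholder losses,
# patience value) is a hypothetical stand-in, not the authors' implementation.

embed_dim = 1024
model = torch.nn.Sequential(
    torch.nn.Linear(2048, embed_dim),  # placeholder for the sketch/image encoders
    torch.nn.ReLU(),
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Random features standing in for Sketchy / TU-Berlin training batches.
features = torch.randn(200, 2048)
train_loader = DataLoader(TensorDataset(features), batch_size=20, shuffle=True)

w1, w2 = 0.5, 0.1                                    # grid-searched weights reported for Sketchy
best_val, patience, bad_epochs = float("inf"), 5, 0  # patience is an assumption

for epoch in range(100):
    model.train()
    for (x,) in train_loader:
        emb = model(x)
        # Placeholders for the paper's ranking loss (margin 0.1, Eq. 3) and
        # transductive loss (δ = 0.01, Eq. 11); neither is reproduced here.
        loss_rank = emb.norm(dim=1).mean()
        loss_aux1 = emb.var(dim=0).mean()
        loss_aux2 = emb.abs().mean()
        loss = loss_rank + w1 * loss_aux1 + w2 * loss_aux2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    val_loss = float(loss.detach())  # stand-in for a real validation metric
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # early stopping to combat overfitting
            break
```

The grid search over ω1 ∈ [0.01, 1] and ω2 ∈ [0.001, 10] described in the paper would amount to wrapping a loop like this over candidate (ω1, ω2) pairs and keeping the pair with the best validation result.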