Towards Affordable Semantic Searching: Zero-Shot Retrieval via Dominant Attributes
Authors: Yang Long, Li Liu, Yuming Shen, Ling Shao
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On two benchmarks, CUB and SUN, extensive experiments demonstrate that our method can achieve promising performance for the problem. Moreover, our approach can also benefit conventional ZSL tasks. ... Our method is evaluated on two benchmarks for zero-shot recognition, SUN (Patterson et al. 2014) and CUB (Wah et al. 2011). ... Table 1: Main ZSR results at different ranks. |
| Researcher Affiliation | Collaboration | Yang Long,1,3 Li Liu,2 Yuming Shen,4 Ling Shao1,2,4. 1 School of Automation, Northwestern Polytechnical University, Xi'an, P. R. China; 2 JD Artificial Intelligence Research (JDAIR), Beijing, P. R. China; 3 Open Lab, School of Computing, Newcastle University, Newcastle upon Tyne, UK; 4 School of Computing Sciences, University of East Anglia, Norwich, UK |
| Pseudocode | Yes | Algorithm 1: LIAD |
| Open Source Code | No | The paper does not contain any statement about releasing its own source code, nor does it provide a link to a code repository. It mentions using "released codes" for other methods but not its own. |
| Open Datasets | Yes | Our method is evaluated on two benchmarks for zero-shot recognition, SUN (Patterson et al. 2014) and CUB (Wah et al. 2011). |
| Dataset Splits | Yes | We cross-validate all hyper-parameters of our LIAD on the training set. Since we do not use any attributes during the training phase, we propose a 5-fold approximated cross-validation scheme for the ZSR problem. |
| Hardware Specification | No | The paper does not specify any hardware used for running the experiments, such as CPU or GPU models, or cloud computing resources with their specifications. |
| Software Dependencies | No | The paper mentions using "the VGG-19 model (Simonyan and Zisserman 2014)" and "the Google News-vectors-negative300 model (Mikolov et al. 2013)". These refer to models or pre-trained embeddings, not specific software libraries or frameworks with version numbers necessary for reproducibility of their code. |
| Experiment Setup | Yes | We cross-validate all hyper-parameters of our LIAD on the training set. ... K controls how many latent attributes are sufficient, and γ balances the weight between visual and semantic spaces. ... In Fig. ??, we demonstrate how the two hyper-parameters can affect the ZSR performance. |
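The setup row above describes selecting the two hyper-parameters K and γ by 5-fold cross-validation on the training set only. A minimal sketch of that selection loop is below; the `score_fn` is a hypothetical stand-in, since the paper's actual LIAD objective and its approximated cross-validation scheme are not reproduced here.

```python
# Hedged sketch: grid search over (K, gamma) with 5-fold cross-validation,
# as described in the paper's experiment setup. score_fn is a placeholder
# for the (unreleased) LIAD training-and-evaluation routine.
import random


def five_fold_cv(samples, grid_K, grid_gamma, score_fn, n_folds=5, seed=0):
    """Return the (K, gamma) pair with the best mean held-out score."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    # Partition shuffled indices into n_folds roughly equal folds.
    folds = [idx[i::n_folds] for i in range(n_folds)]

    best_params, best_score = None, float("-inf")
    for K in grid_K:
        for gamma in grid_gamma:
            fold_scores = []
            for f in range(n_folds):
                held_out = set(folds[f])
                train = [samples[i] for i in idx if i not in held_out]
                val = [samples[i] for i in folds[f]]
                fold_scores.append(score_fn(train, val, K, gamma))
            mean_score = sum(fold_scores) / n_folds
            if mean_score > best_score:
                best_params, best_score = (K, gamma), mean_score
    return best_params, best_score
```

Because no attributes are used during training, the paper's validation signal must come from a proxy objective on held-out training classes; the `score_fn` hook above is where such a proxy would plug in.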