Boosted Zero-Shot Learning with Semantic Correlation Regularization
Authors: Te Pi, Xi Li, Zhongfei (Mark) Zhang
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on two ZSL datasets show the superiority of BZ-SCR over the state-of-the-arts. |
| Researcher Affiliation | Collaboration | Te Pi (1), Xi Li (1, 2), Zhongfei (Mark) Zhang (1); (1) Zhejiang University, Hangzhou, China; (2) Alibaba-Zhejiang University Joint Institute of Frontier Technologies, Hangzhou, China |
| Pseudocode | Yes | Algorithm 1: Boosted Zero-shot classification with Semantic Correlation Regularization (BZ-SCR) |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code for the methodology described. It refers to external datasets but not its own code. |
| Open Datasets | Yes | We evaluate the performance of BZ-SCR on the two classic ZSL image datasets, Animal With Attributes (AWA) and Caltech-UCSD-Birds-200 (CUB200). Dataset links: http://attributes.kyb.tuebingen.mpg.de/ and http://www.vision.caltech.edu/visipedia/CUB-200-2011.html |
| Dataset Splits | Yes | For the class split, we adopt the default split (40/10 for train+val/test) for AWA and the same split as [Akata et al., 2013] (150/50 for train+val/test) for CUB200. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware used for running the experiments (e.g., GPU models, CPU types, or cloud configurations). |
| Software Dependencies | No | The paper does not provide specific version numbers for any key software components or libraries used in the implementation. |
| Experiment Setup | Yes | The shown results of BZ-SCR are achieved with ν/N = 10^-4, β/N = 0.4 for AWA, and ν/N ∈ {0.025, 0.05}, β/N ∈ {0.1, 0.2} for CUB200. We implement a grid search for the tuning of the hyperparameters ν, β, and report the best performances. |
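The splits quoted in the Dataset Splits row are zero-shot class splits: the 40/10 (AWA) and 150/50 (CUB200) partitions divide classes rather than samples, so test classes are never observed during training. Below is a minimal sketch of such a class-level split; the function name and the NumPy-based implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def zero_shot_class_split(labels, seen_classes, unseen_classes):
    """Split sample indices by class membership for a ZSL protocol.

    `labels` holds the class id of each sample; `seen_classes` (e.g. the
    40 AWA or 150 CUB200 train+val classes) and `unseen_classes` (the 10
    or 50 test classes) must be disjoint.
    """
    labels = np.asarray(labels)
    assert not set(seen_classes) & set(unseen_classes), "class splits must be disjoint"
    seen_idx = np.flatnonzero(np.isin(labels, list(seen_classes)))
    unseen_idx = np.flatnonzero(np.isin(labels, list(unseen_classes)))
    return seen_idx, unseen_idx
```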
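The Experiment Setup row describes a grid search over ν and β with the best setting reported per dataset. The sketch below illustrates that tuning loop under stated assumptions: `train_and_evaluate` is a hypothetical callable standing in for training BZ-SCR and scoring it on validation classes, and the candidate grids are assumptions apart from the reported values.

```python
from itertools import product

def tune_bzscr(train_and_evaluate, N, nu_ratios, beta_ratios):
    """Grid search over nu/N and beta/N; return the best setting found."""
    best_acc, best_cfg = -1.0, None
    for nu_ratio, beta_ratio in product(nu_ratios, beta_ratios):
        # Scale the ratios by the number of training samples N, matching
        # how the paper reports the hyperparameters as nu/N and beta/N.
        acc = train_and_evaluate(nu=nu_ratio * N, beta=beta_ratio * N)
        if acc > best_acc:
            best_acc, best_cfg = acc, (nu_ratio, beta_ratio)
    return best_cfg, best_acc

# Example grids built around the reported settings (other values are
# assumptions): AWA nu/N = 1e-4, beta/N = 0.4; CUB200 nu/N in
# {0.025, 0.05}, beta/N in {0.1, 0.2}.
# best_cfg, best_acc = tune_bzscr(train_and_evaluate, N,
#                                 nu_ratios=[1e-4, 0.025, 0.05],
#                                 beta_ratios=[0.1, 0.2, 0.4])
```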