Landmark Selection for Zero-shot Learning

Authors: Yuchen Guo, Guiguang Ding, Jungong Han, Chenggang Yan, Jiyong Zhang, Qionghai Dai

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on several ZSL datasets including ImageNet demonstrate the superiority of LAST to the state-of-the-arts." (Section 4 Experiment, Section 4.2 Result; Table 1: "(Generalized) ZSL performance comparison on benchmarks.")
Researcher Affiliation | Academia | Yuchen Guo (1), Guiguang Ding (2), Jungong Han (3), Chenggang Yan (4), Jiyong Zhang (4), and Qionghai Dai (1); (1) Department of Automation, Tsinghua University, Beijing, China; (2) School of Software, Tsinghua University, Beijing, China; (3) WMG Data Science, University of Warwick, Coventry, UK; (4) School of Automation, Hangzhou Dianzi University, China
Pseudocode | Yes | Algorithm 1 (Landmark Selection for ZSL). Input: training set {x_i, y_i}_{i=1}^{n_s} and parameters λ, α, β, γ, δ. Output: the compatibility matrix W. Initialize µ_i = δ; repeat: update W by solving Eq. (4); for each mini-batch of the training set, update µ_i for that batch by solving Eq. (6); until convergence; return W. (A runnable sketch of this loop appears below the table.)
Open Source Code | No | The paper does not provide any link or explicit statement about releasing the source code for the described methodology.
Open Datasets | Yes | "We use five widely used ZSL benchmark datasets for evaluation. The first dataset is Animals with Attributes 2 (AwA2) [Xian et al., 2017]... The second dataset is aPascal-aYahoo (aPY) [Farhadi et al., 2009]... The third dataset is SUN [Patterson and Hays, 2012]... The fourth dataset is CUB [Wah et al., 2011]... The last dataset is ImageNet [Russakovsky et al., 2015]..."
Dataset Splits | No | "For each dataset, some seen class images are used for model training and the other seen class images together with all unseen class images are utilized as the test set. For fair comparison, we make use of the same seen-unseen split, train-test split, image feature, and label feature given by Xian et al. [2017]." (See the split sketch below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions MATLAB functions like 'sylvester' and 'quadprog' but does not specify version numbers for MATLAB or any other software dependencies.
Experiment Setup | Yes | "We set the parameters in Eq. (3) as: λ = 1, α = β = n_s/10, γ = 0.01 and δ = 0.5. When using mini-batch based optimization, we set the mini-batch size to 1,024." (See the configuration example below the table.)
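
The Dataset Splits row describes the (generalized) ZSL protocol of Xian et al. [2017]: a subset of seen-class images is used for training, while the held-out seen-class images plus all unseen-class images form the test set. The sketch below only encodes that partitioning logic; the function name, arguments, and the way the proposed split is loaded are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of the (generalized) ZSL evaluation split described in the Dataset Splits row.
# Class lists and the per-image train mask would come from the "proposed splits" of
# Xian et al. [2017]; only the partitioning logic below follows the paper's description.
def make_gzsl_split(labels, seen_classes, unseen_classes, train_mask):
    """labels: (n,) class ids; train_mask: (n,) bool, True for seen-class training images."""
    labels = np.asarray(labels)
    seen = np.isin(labels, list(seen_classes))
    unseen = np.isin(labels, list(unseen_classes))

    train_idx = np.where(seen & train_mask)[0]        # seen-class images used for training
    test_seen_idx = np.where(seen & ~train_mask)[0]   # held-out seen-class images
    test_unseen_idx = np.where(unseen)[0]             # all unseen-class images
    test_idx = np.concatenate([test_seen_idx, test_unseen_idx])
    return train_idx, test_idx, test_seen_idx, test_unseen_idx
```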
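Algorithm 1 in the Pseudocode row alternates between updating the compatibility matrix W and updating the per-sample landmark weights µ_i over mini-batches. Because Eq. (4) and Eq. (6) are not reproduced on this page, the sketch below mirrors only the control flow: the solve_sylvester call is a stand-in for the Eq. (4) update (the Software Dependencies row notes the paper uses MATLAB's sylvester), and the µ-update is a hypothetical placeholder for the per-batch quadratic program solved with quadprog. Function and variable names are not from the paper.

```python
# Minimal sketch of the control flow of Algorithm 1 (Landmark Selection for ZSL).
# The actual updates (Eq. (4) for W, Eq. (6) for mu) are not given on this page,
# so both update steps below are illustrative placeholders.
import numpy as np
from scipy.linalg import solve_sylvester

def train_last(X, S, lam=1.0, alpha=None, beta=None, gamma=0.01,
               delta=0.5, batch_size=1024, max_iter=50):
    """X: (n_s, d) image features; S: (n_s, k) label/attribute features."""
    n_s, d = X.shape
    k = S.shape[1]
    alpha = n_s / 10 if alpha is None else alpha   # reported setting: alpha = beta = n_s / 10
    beta = n_s / 10 if beta is None else beta
    mu = np.full(n_s, delta)                       # initialize mu_i = delta
    W = np.zeros((d, k))

    for _ in range(max_iter):                      # "repeat ... until convergence"
        # W-update: the paper solves Eq. (4) with a Sylvester solver. Here a generic
        # weighted ridge-style Sylvester equation A W + W B = Q stands in for it;
        # the true A, B, Q come from Eq. (4), which is not shown on this page.
        Xw = X * mu[:, None]
        A = Xw.T @ X + gamma * np.eye(d)
        B = lam * (S.T @ S) / n_s
        Q = Xw.T @ S
        W_new = solve_sylvester(A, B, Q)

        # mu-update: Eq. (6) is solved per mini-batch (MATLAB quadprog in the paper).
        # Placeholder: weight each sample by its fit under the current W, clipped to [0, 1].
        for start in range(0, n_s, batch_size):
            idx = slice(start, start + batch_size)
            resid = np.linalg.norm(X[idx] @ W_new - S[idx], axis=1) ** 2
            mu[idx] = np.clip(1.0 - resid / (alpha + beta), 0.0, 1.0)

        if np.linalg.norm(W_new - W) < 1e-4 * max(np.linalg.norm(W), 1.0):
            W = W_new
            break
        W = W_new
    return W
```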
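The Experiment Setup row gives concrete hyperparameter values (λ = 1, α = β = n_s/10, γ = 0.01, δ = 0.5, mini-batch size 1,024). A minimal usage example with those values, assuming the train_last sketch above; the synthetic data shapes are invented purely for illustration.

```python
import numpy as np

# Synthetic stand-in data (shapes are assumptions, not from the paper).
rng = np.random.default_rng(0)
n_s, d, k = 5_000, 256, 85
X = rng.standard_normal((n_s, d))   # image features
S = rng.standard_normal((n_s, k))   # label/attribute features

# Hyperparameters as reported in the Experiment Setup row.
W = train_last(
    X, S,
    lam=1.0,
    alpha=n_s / 10,
    beta=n_s / 10,
    gamma=0.01,
    delta=0.5,
    batch_size=1024,
)
print(W.shape)   # (256, 85)
```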