Zero Shot Learning via Low-rank Embedded Semantic AutoEncoder

Authors: Yang Liu, Quanxue Gao, Jin Li, Jungong Han, Ling Shao

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on six benchmark datasets demonstrate its superiority over several state-of-the-art algorithms.
Researcher Affiliation | Collaboration | Yang Liu¹, Quanxue Gao¹, Jin Li², Jungong Han³, Ling Shao⁴; ¹State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, China; ²Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China; ³School of Computing and Communications, Lancaster University, United Kingdom; ⁴Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Pseudocode | Yes | Algorithm 1. Input: training set {X, A}. Initialize W = I. Repeat: (1) update H by H = (WW^T + ηI)^(-1/2); (2) update W by solving Eq. (5) and Eq. (6); until convergence. Output: projection matrix W. (A hedged code sketch of this alternating loop is given after the table.)
Open Source Code | No | The paper links to a zero-shot learning benchmark, but provides no explicit statement of, or link to, the authors' own source code for the proposed method.
Open Datasets | Yes | We will validate our proposed method on five small-scale datasets (SUN, CUB, AWA1, AWA2 and aPY) and one large-scale dataset (ImageNet)... SUN Attribute (SUN) [Patterson et al., 2014]... CUB-200-2011 Birds (CUB) [Wah et al., 2011]... Animals with Attributes 1 (AWA1) [Lampert et al., 2014]... Animals with Attributes 2 (AWA2) [Xian et al., 2017]... A Pascal and Yahoo (aPY) [Farhadi et al., 2009]... ImageNet [Russakovsky et al., 2015]
Dataset Splits | Yes | A Pascal and Yahoo (aPY) [Farhadi et al., 2009] is a small-scale coarse-grained dataset with 64 attributes. Among the total number of 32 classes, 20 Pascal classes (we randomly select 5 for validation) and 12 Yahoo classes are used for training and testing, respectively.
Hardware Specification | No | The paper mentions using ResNet and GoogleNet features, but does not provide specific hardware details (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions that an implementation can be done in MATLAB, but does not provide specific version numbers for MATLAB or any other software dependencies.
Experiment Setup | No | The paper analyzes the effect of two parameters, alpha and beta, and provides their empirical ranges. However, it does not provide specific values for other crucial experimental setup details such as learning rate, batch size, optimizer, or number of epochs.
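
The Pseudocode row above reports only the high-level structure of Algorithm 1: alternate between a closed-form update of H and an update of W obtained from Eq. (5) and Eq. (6), until convergence. As an illustration, a minimal NumPy sketch of that alternating loop is given below. It is not the authors' implementation: Eq. (5) and Eq. (6) are not reproduced in this report, so the `solve_eq5_eq6` callable is a hypothetical placeholder, and the rectangular-identity initialization, the `eta` smoothing constant, and the Frobenius-norm stopping test are assumptions made for the sketch.

```python
import numpy as np

def lesae_alternating_loop(X, A, solve_eq5_eq6, eta=1e-4, max_iter=100, tol=1e-6):
    """Sketch of the alternating structure of Algorithm 1 (not the authors' code).

    X : (d, n) visual feature matrix, one column per training sample.
    A : (k, n) semantic/attribute matrix for the same samples.
    solve_eq5_eq6 : hypothetical callable (X, A, H) -> W standing in for the
        W-update obtained by solving Eq. (5) and Eq. (6) in the paper.
    """
    d, k = X.shape[0], A.shape[0]
    W = np.eye(k, d)  # "Initialize W = I" (rectangular identity assumed when k != d)

    for _ in range(max_iter):
        # Step 1: H = (W W^T + eta*I)^(-1/2), computed from the eigendecomposition
        # of the symmetric positive-definite matrix W W^T + eta*I.
        vals, vecs = np.linalg.eigh(W @ W.T + eta * np.eye(k))
        H = vecs @ np.diag(vals ** -0.5) @ vecs.T

        # Step 2: update W by solving Eq. (5) and Eq. (6) (placeholder call).
        W_new = solve_eq5_eq6(X, A, H)

        # Stop when W stabilizes (a common convergence proxy; the paper's exact
        # criterion is not reported in this summary).
        if np.linalg.norm(W_new - W, "fro") < tol * max(1.0, np.linalg.norm(W, "fro")):
            W = W_new
            break
        W = W_new

    return W  # output: projection matrix W
```

The sketch only fixes the loop skeleton described in the Pseudocode row; reproducing the method would additionally require the concrete W-update from Eq. (5) and Eq. (6) and the parameter settings noted as missing in the Experiment Setup row.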