Zero-shot Learning with Many Classes by High-rank Deep Embedding Networks

Authors: Yuchen Guo, Guiguang Ding, Jungong Han, Hang Shao, Xin Lou, Qionghai Dai

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on several datasets including ImageNet demonstrate GREEN significantly outperforms the state-of-the-art approaches. ... We summarize the comparison on SUN and ImageNet (including its subsets) in Table 1. In Figure 3, we show the results on ImageNet with different metrics."
Researcher Affiliation | Academia | "Yuchen Guo^1, Guiguang Ding^2, Jungong Han^3, Hang Shao^4, Xin Lou^5 and Qionghai Dai^1. ^1 Department of Automation, Tsinghua University, Beijing, China; ^2 School of Software, Tsinghua University, Beijing, China; ^3 WMG Data Science, University of Warwick, Coventry, UK; ^4 Zhejiang Future Technology Institute (Jiaxing), Zhejiang, China; ^5 Chinese PLA General Hospital, Beijing, China"
Pseudocode | No | The paper describes the method using equations and text, but no explicit pseudocode or algorithm block is present.
Open Source Code | No | The paper states: "We implement GREEN in TensorFlow (https://www.tensorflow.org/)." This link points to the TensorFlow library itself, not to the authors' implementation of GREEN; there is no statement or link providing access to their source code.
Open Datasets | Yes | "The first is SUN [Patterson and Hays, 2012] scene recognition dataset. ... The other dataset is ImageNet [Russakovsky et al., 2015] which is a really large-scale dataset with many classes."
Dataset Splits | No | The paper describes the training and test sets but does not specify a validation split or the split proportions (e.g., 80/10/10).
Hardware Specification | No | The paper does not report the hardware used for the experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper mentions an implementation in TensorFlow but gives no version number for it or for any other software dependency.
Experiment Setup | Yes | "For both versions, we set the number of branches as B = 16. ... In particular, we set d = 256. To minimize the loss function in Eq. (5), we use mini-batch based stochastic gradient descent algorithm. The batch size is 128 and we train the model for 100k iterations. The initial learning rate is 0.01 and then 0.001 at the 70k-th iteration. For GREEN-D, we use ResNet-101 as the backbone."
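The reported hyper-parameters can be collected into a minimal configuration sketch. This is an illustration of the stated schedule only, not the authors' code; all names (`learning_rate_at`, the constants) are our own, and the loss, model, and data pipeline are omitted.

```python
# Hyper-parameters as reported in the paper's experiment setup (illustrative names).
BATCH_SIZE = 128
TOTAL_ITERATIONS = 100_000
NUM_BRANCHES = 16      # B = 16
EMBEDDING_DIM = 256    # d = 256
BACKBONE = "ResNet-101"  # used for GREEN-D

def learning_rate_at(iteration: int) -> float:
    """Piecewise-constant SGD schedule: 0.01 initially, 0.001 from the 70k-th iteration."""
    return 0.01 if iteration < 70_000 else 0.001
```

In a TensorFlow implementation this schedule would typically be expressed with a piecewise-constant decay object rather than a hand-written function, but the plain function makes the reported values explicit.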