Dual-View Ranking with Hardness Assessment for Zero-Shot Learning

Authors: Yuchen Guo, Guiguang Ding, Jungong Han, Xiaohan Ding, Sicheng Zhao, Zheng Wang, Chenggang Yan, Qionghai Dai

AAAI 2019, pp. 8360-8367

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on benchmarks demonstrate that DARK outperforms the state-of-the-arts for (generalized) ZSL.
Researcher Affiliation | Academia | Department of Automation, School of Software, Tsinghua University, Beijing 100084, China; School of Computing and Communications, Lancaster University, Lancaster, LA1 4YW, UK; Department of EE and CS, UC Berkeley, USA; Department of CST, USTB, China; Institute of Information and Control, Hangzhou Dianzi University, China
Pseudocode | Yes | Algorithm 1: Training DARK
Open Source Code | No | The paper does not provide any links or statements indicating that the source code for the proposed method (DARK) is openly available.
Open Datasets | Yes | In the experiment, we utilize four widely used standard benchmark datasets for ZSL. The first dataset is Animals with Attributes 2 (AwA2) (Xian et al. 2017)... The second dataset is aPascal-aYahoo (aPY) (Farhadi et al. 2009)... The third dataset is SUN (Patterson and Hays 2012)... The last dataset is CUB (Wah et al. 2011)... For fair comparison, we use the seen-unseen split, train-test split, image feature, and label feature given by Xian et al. (2017).
Dataset Splits | Yes | For each dataset, some seen class images are used for model training, and the other seen class images together with all unseen class images are utilized as the test set. For fair comparison, we use the seen-unseen split, train-test split, image feature, and label feature given by Xian et al. (2017). The statistics are shown in Table 2.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., GPU models, CPU types, or cloud platforms with specifications).
Software Dependencies | No | The paper mentions that "ResNet-101 (He et al. 2016) pre-trained on ImageNet is employed as feature extractor" but does not specify any software environments, libraries, or their version numbers (e.g., Python, PyTorch, TensorFlow, CUDA).
Experiment Setup | Yes | When computing the density-adaptive margin by Eq. (3), we set m = 0.5. For the regularization term, we set λ = 0.01. For mini-batch based gradient descent, each batch contains b = 512 images. The margin ϵ and the weights are updated every 10 iterations since they change little in one iteration. The parameter r for U and V is set to 64 consistently. The iteration in Algorithm 1 is conducted 200 times. The learning rate (step size) τ is set to 0.01 initially and then to 0.001 at the 150th iteration.