Ordinal Zero-Shot Learning

Authors: Zengwei Huo, Xin Geng

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on two typical ordinal classification problems, i.e., head pose estimation and age estimation, show that SIDL performs significantly better than the compared regression methods. Furthermore, SIDL appears much more robust against the increase of unseen labels than other compared baselines.
Researcher Affiliation | Academia | MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link for the availability of its source code.
Open Datasets | Yes | For head pose estimation, the datasets used in this experiment are the Pointing 04 database [Gourier et al., 2004] and the BJUT-3D Chinese Face database [Baocai et al., 2009]. For age estimation, the dataset is the MORPH database [Ricanek and Tamirat, 2006].
Dataset Splits | Yes | On each dataset, we randomly choose 2/3 labels as seen labels in the training phase. Then, we test on the data whose labels are the remaining 1/3 labels. ... Each algorithm undergoes a five-fold cross validation on each unseen label pattern. (A hedged sketch of this seen/unseen label split appears below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'LIBSVM [Chang and Lin, 2011]' and 'MATLAB implementation (classregtree class in MATLAB)' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | The SID is generated according to Eq. (2) with Σ = diag(τ1², τ2²), where τ1 = τ2 = 30 for the Pointing 04 dataset, which is 2 times the minimum angle interval (15°). δ is set to (15, 15). These hyperparameters are obtained by cross validation. Similarly, we set τ1 = 20, τ2 = 30 on the BJUT-3D dataset and δ is (10, 15). ... On the Pointing 04 dataset, Kernel PLS uses the RBF kernel with a width of 3; Kernel SVR is implemented with LIBSVM [Chang and Lin, 2011] using the RBF kernel with a gamma of 0.01. On the BJUT-3D dataset, Kernel PLS uses the RBF kernel with a kernel width of 5; for Kernel SVR, we use the RBF kernel with a gamma of 0.1. ... The SID of each face image is initialized using Eq. (2) with its mean at the chronological age; Σ degenerates to a variance, which is set to 72. δ in this dataset is a 1-dimensional vector and is set to 1. ... For AAS, the error threshold in the appearance cluster training step is set to 3. For OHRank, the absolute cost function and the RBF kernel are used. Kernel PLS uses the RBF kernel with a width of 1. Kernel SVR uses the RBF kernel with a gamma of 15. CART is implemented as the regression type and set to the default values of the MATLAB implementation (classregtree class in MATLAB). (A hedged sketch of the SID construction appears below the table.)
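
The zero-shot split quoted under "Dataset Splits" partitions samples by label value rather than by sample. Below is a minimal sketch of that protocol, assuming a uniform random choice of seen labels; the paper is not quoted beyond the 2/3 vs. 1/3 ratio, and the five-fold cross validation over unseen-label patterns is only mentioned, so the helper name and the MORPH-like example labels are illustrative.

```python
import numpy as np

def make_zero_shot_split(labels, seen_fraction=2/3, rng=None):
    """Split samples by label value: a random 2/3 of the distinct labels
    are 'seen' (training); samples carrying the remaining 1/3 of the
    labels form the test set. The uniform random selection of seen
    labels is an assumption, not a detail stated in the report."""
    rng = np.random.default_rng(rng)
    unique = np.unique(labels)
    n_seen = int(round(len(unique) * seen_fraction))
    seen_labels = rng.choice(unique, size=n_seen, replace=False)
    seen_mask = np.isin(labels, seen_labels)
    train_idx = np.where(seen_mask)[0]   # samples with seen labels
    test_idx = np.where(~seen_mask)[0]   # samples with unseen labels
    return train_idx, test_idx, np.sort(seen_labels)

# Illustrative example with MORPH-like age labels.
ages = np.random.default_rng(0).integers(16, 71, size=1000)
train_idx, test_idx, seen = make_zero_shot_split(ages, rng=0)
```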
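For the "Experiment Setup" row, Eq. (2) itself is not reproduced in this report; the sketch below assumes it defines a discretized Gaussian label distribution (the SID) centred on the ground-truth pose, with the quoted diagonal covariance Σ = diag(τ1², τ2²) and τ1 = τ2 = 30 on Pointing 04. The 15°-spaced pan/tilt grid is illustrative and may not exactly match the Pointing 04 label set.

```python
import numpy as np

def sid_from_gaussian(mu, taus, pan_grid, tilt_grid):
    """Hypothetical reconstruction of the SID for one image: a bivariate
    Gaussian over the (pan, tilt) label grid, centred at the ground-truth
    pose `mu`, with diagonal covariance diag(tau1^2, tau2^2), renormalised
    to sum to 1. Eq. (2) of the paper may differ in normalisation details."""
    tau1, tau2 = taus
    pan, tilt = np.meshgrid(pan_grid, tilt_grid, indexing="ij")
    d = ((pan - mu[0]) ** 2) / tau1 ** 2 + ((tilt - mu[1]) ** 2) / tau2 ** 2
    density = np.exp(-0.5 * d)
    return density / density.sum()

# Illustrative 15-degree label grid for pan and tilt.
pan_grid = np.arange(-90, 91, 15)
tilt_grid = np.arange(-90, 91, 15)
sid = sid_from_gaussian(mu=(30, -15), taus=(30, 30),
                        pan_grid=pan_grid, tilt_grid=tilt_grid)
```

The kernel baselines quoted above (e.g., Kernel SVR with an RBF kernel and a gamma of 0.01) could be approximated with any LIBSVM front end, for instance sklearn.svm.SVR(kernel='rbf', gamma=0.01), though the paper used LIBSVM directly and the remaining SVR parameters are not reported.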