Towards Generalized and Efficient Metric Learning on Riemannian Manifold
Authors: Pengfei Zhu, Hao Cheng, Qinghua Hu, Qilong Wang, Changqing Zhang
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the proposed method on three tasks, including object recognition, video based face recognition and material classification. ... Table 1 shows accuracies of different methods on five datasets. ... Table 2: Comparison of training time (s) on five datasets |
| Researcher Affiliation | Academia | Pengfei Zhu, Hao Cheng, Qinghua Hu, Qilong Wang, Changqing Zhang School of Computer Science and Technology, Tianjin University, Tianjin 300350, China zhupengfei@tju.edu.cn, huqinghua@tju.edu.cn |
| Pseudocode | Yes | Algorithm 1 RMML-SPD and RMML-GM Algorithms |
| Open Source Code | No | The paper does not provide any statement about releasing their source code or a link to it. |
| Open Datasets | Yes | Datasets. We conduct experiments on five datasets, including ETH-80 [Leibe and Schiele, 2003], Flickr Material dataset [Sharan et al., 2009], UIUC material [Liao et al., 2013], YouTube Celebrities [Kim et al., 2008], and YouTube Face dataset [Wolf et al., 2011]. |
| Dataset Splits | Yes | Following the experimental settings in [Wang et al., 2012], we randomly choose 5 objects as gallery and the other 5 objects as probes in each category... Following the common setting in [Wang et al., 2012; Huang et al., 2015b], we randomly select 3 image sets per subject for gallery and 6 image sets for probes... 5000 video pairs are used to perform ten-fold cross validation tests. In each fold there are 500 pairs... We simply set λ to 0.1 on all datasets, and set t from {0.2, 0.4, 0.6, 0.8} by cross-validation on the training set. |
| Hardware Specification | Yes | The experiments are run on a PC equipped with a single Intel(R) Core(TM) i7-6700 (3.40GHz). |
| Software Dependencies | No | The paper mentions general software concepts (e.g., VGG-VD16 model, kLDA, kPLS) but does not provide specific version numbers for any programming languages, libraries, or frameworks used for implementation or experimentation. |
| Experiment Setup | Yes | Parameters setting. For metric learning on SPD manifold, we first compute mean vector µ and sample covariance S of a set of data to obtain a Gaussian descriptor... For LEML, η is tuned from 0.001 to 1000 and the value of ζ is tuned from 0.1 to 1. For GGDA, the graph parameter v is set from 1 to 10 and the size of projection matrix r is set to (c−1). Besides, its parameter β is tuned from 1e2 to 1e6. There are two parameters λ and t for RMML. We simply set λ to 0.1 on all datasets, and set t from {0.2, 0.4, 0.6, 0.8} by cross-validation on the training set. |
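The Experiment Setup row quotes the paper's first preprocessing step: computing the mean vector µ and sample covariance S of a set of feature vectors to obtain a Gaussian descriptor that lives on the SPD manifold. The paper does not reproduce the embedding formula here; the sketch below uses the standard Gaussian-to-SPD embedding [[S + µµᵀ, µ], [µᵀ, 1]] common in this literature (an assumption on our part, as is the small `eps` regularizer added to keep the matrix strictly positive definite):

```python
import numpy as np

def gaussian_to_spd(X, eps=1e-6):
    """Embed a set of feature vectors (rows of X) as one SPD matrix.

    Computes the mean vector mu and sample covariance S of X, then maps
    the Gaussian N(mu, S) to the (d+1) x (d+1) SPD matrix
        [[S + mu mu^T, mu],
         [mu^T,         1]]
    eps * I is added to S so the result is strictly positive definite.
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    d = mu.shape[0]
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = S + np.outer(mu, mu)
    P[:d, d] = mu
    P[d, :d] = mu
    P[d, d] = 1.0
    return P

# Usage: 100 samples of 5-dim features -> a 6x6 SPD descriptor.
rng = np.random.default_rng(0)
P = gaussian_to_spd(rng.normal(size=(100, 5)))
print(P.shape, np.linalg.eigvalsh(P).min() > 0)  # (6, 6) True
```

Positive definiteness follows because P is congruent to block-diag(S, 1) via the triangular factor [[I, µ], [0, 1]], so any SPD-manifold metric learning step (such as the paper's RMML-SPD) can operate on these descriptors directly.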