EMD Metric Learning
Authors: Zizhao Zhang, Yubo Zhang, Xibin Zhao, Yue Gao
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results have shown better performance of our proposed EMD metric learning method compared with the traditional EMD method and the state-of-the-art methods. It is noted that the proposed EMD metric learning method can be also used in other applications. |
| Researcher Affiliation | Academia | Key Laboratory for Information System Security, Ministry of Education; Tsinghua National Laboratory for Information Science and Technology; School of Software, Tsinghua University, China. {zz-zh14,zhangyb17}@mails.tsinghua.edu.cn; {zxb,gaoyue}@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 EMD Metric Learning |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We have applied our EMD metric learning method on two tasks, i.e., multi-view object classification and document classification, and experiments are conducted on two public benchmarks, including the National Taiwan University (NTU) 3D model dataset (Chen et al. 2003) and the Twitter Sentiment Corpus dataset (Sanders 2011). |
| Dataset Splits | No | The paper specifies training and testing splits ("We randomly select 20%, 30%, 40% and 50% of all data per each category as labeled training data and all the rest are used for testing."), but it does not mention a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions several techniques and models (e.g., CNN feature, word2vec, Mahalanobis distance, Hungarian algorithm) but does not provide specific version numbers for any software or libraries used. |
| Experiment Setup | Yes | We empirically set the parameter μ in Eq. (6) to 0.1 on both datasets. The parameter λ is set to 0.5 on the NTU dataset, and 0.2 on the TWITTER dataset. ... Here we fix kg as 3 and vary ki in the range of [2, 16] on the NTU dataset, and [1, 8] on the TWITTER dataset, with 30% training data. |
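The table above notes that the paper combines a learned Mahalanobis ground distance with EMD solved via the Hungarian algorithm. As a minimal sketch of that pairing (not the authors' released code, which does not exist; the function names and the uniform-weight assumption are ours), the special case of EMD between two equally sized sets with uniform weights reduces to a linear assignment problem:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def mahalanobis_ground_distance(X, Y, M):
    """Pairwise squared Mahalanobis distances D[i, j] = (x_i - y_j)^T M (x_i - y_j).

    X: (n, d) array of points in the first set (e.g., view features).
    Y: (m, d) array of points in the second set.
    M: (d, d) positive semi-definite metric matrix (learned in the paper).
    """
    diff = X[:, None, :] - Y[None, :, :]          # shape (n, m, d)
    return np.einsum('ijk,kl,ijl->ij', diff, M, diff)

def emd_uniform(X, Y, M):
    """EMD under uniform weights and equal set sizes.

    In this special case the optimal flow is a one-to-one matching,
    so EMD equals the mean cost of the optimal assignment.
    """
    D = mahalanobis_ground_distance(X, Y, M)
    rows, cols = linear_sum_assignment(D)
    return D[rows, cols].mean()
```

With `M` set to the identity this recovers the traditional (unlearned) EMD baseline; the paper's contribution is optimizing `M` so that same-class sets become closer under this distance.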