Persistence Fisher Kernel: A Riemannian Manifold Kernel for Persistence Diagrams
Authors: Tam Le, Makoto Yamada
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Throughout experiments with many different tasks on various benchmark datasets, we illustrate that the PF kernel compares favorably with other baseline kernels for PDs. |
| Researcher Affiliation | Academia | Tam Le RIKEN Center for Advanced Intelligence Project, Japan tam.le@riken.jp Makoto Yamada Kyoto University, Japan RIKEN Center for Advanced Intelligence Project, Japan makoto.yamada@riken.jp |
| Pseudocode | Yes | Algorithm 1: Compute d_FIM for persistence diagrams |
| Open Source Code | Yes | Source code for Algorithm 1 can be obtained at http://github.com/lttam/PersistenceFisher. |
| Open Datasets | Yes | It is a synthesized dataset proposed by [Adams et al., 2017] (§6.4.1) for linked twist map... We consider a 10-class subset of the MPEG7 object shape dataset [Latecki et al., 2000]. ... granular packing system [Francois et al., 2013] and SiO2 [Nakamura et al., 2015] datasets. |
| Dataset Splits | No | The paper describes training/test splits but no separate validation split. For example, it states "We randomly split 70%/30% for training and test, and repeated 100 times." |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments, such as GPU/CPU models or memory amounts. |
| Software Dependencies | No | The paper mentions "Libsvm (one-vs-one) [Chang and Lin, 2011]" and "the DIPHA toolbox" but does not specify their version numbers for reproducibility. |
| Experiment Setup | Yes | For hyper-parameters, we typically choose them through cross validation. For baseline kernels, we follow their corresponding authors to form sets of hyper-parameter candidates, and the bandwidth of the Gaussian kernel in (Prob + k_G) and (Tang + k_G) is chosen from {10^-3, 10^-2, ..., 10^3}. For the Persistence Fisher kernel, there are 2 hyper-parameters: t (Equation (4)) and σ for smoothing measures (Equation (1)). We choose 1/t from {q1, q2, q5, q10, q20, q50}, where qs is the s% quantile of a subset of Fisher information metric distances between PDs, observed on the training set, and σ from {10^-3, 10^-2, ..., 10^3}. For SVM, we use Libsvm (one-vs-one) [Chang and Lin, 2011] for multi-class classification, and choose the regularization parameter of SVM from {10^-2, 10^-1, ..., 10^2}. |
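To make the pipeline described above concrete, the following is a minimal NumPy sketch of the Persistence Fisher kernel: each diagram (augmented with the diagonal projection of the other, as in the paper) is smoothed into a discrete probability measure with Gaussian bandwidth σ, the Fisher information metric d_FIM(ρ1, ρ2) = arccos(Σ √(ρ1·ρ2)) is computed, and the kernel is exp(-t·d_FIM). The fixed evaluation grid and all variable names are illustrative assumptions, not the paper's implementation (the released code may use a faster approximation than this dense grid).

```python
import numpy as np

def smoothed_measure(points, grid, sigma):
    # Sum of isotropic Gaussians centered at the PD points,
    # normalized into a discrete probability distribution on the grid.
    sq_dists = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    rho = np.exp(-sq_dists / (2.0 * sigma ** 2)).sum(axis=1)
    return rho / rho.sum()

def fisher_information_metric(D1, D2, sigma, grid):
    # d_FIM(rho1, rho2) = arccos( sum_i sqrt(rho1_i * rho2_i) ),
    # with each diagram augmented by the diagonal projection of the
    # other diagram before smoothing, as in the paper.
    def diag_proj(D):
        mid = (D[:, 0] + D[:, 1]) / 2.0
        return np.column_stack([mid, mid])
    rho1 = smoothed_measure(np.vstack([D1, diag_proj(D2)]), grid, sigma)
    rho2 = smoothed_measure(np.vstack([D2, diag_proj(D1)]), grid, sigma)
    inner = np.sqrt(rho1 * rho2).sum()
    return np.arccos(np.clip(inner, -1.0, 1.0))  # clip guards rounding

def persistence_fisher_kernel(D1, D2, t, sigma, grid):
    # PF kernel: exp(-t * d_FIM), per Equation (4) of the paper.
    return np.exp(-t * fisher_information_metric(D1, D2, sigma, grid))
```

With this sketch, the hyper-parameter search quoted above amounts to a grid over σ ∈ {10^-3, ..., 10^3} and 1/t taken from quantiles of d_FIM values observed on the training set.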