Learning metrics for persistence-based summaries and applications for graph classification

Authors: Qi Zhao, Yusu Wang

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply the learned kernel to the challenging task of graph classification, and show that our WKPI-based classification framework obtains similar or (sometimes significantly) better results than the best results from a range of previous graph classification frameworks on benchmark datasets.
Researcher Affiliation | Academia | Qi Zhao (zhao.2017@osu.edu) and Yusu Wang (yusu@cse.ohio-state.edu), Computer Science and Engineering Department, The Ohio State University, Columbus, OH 43221
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing the authors' code, nor any link to a code repository for their method.
Open Datasets | Yes | We use a range of benchmark datasets: (1) several datasets on graphs derived from small chemical compounds or protein molecules: NCI1 and NCI109 [44], PTC [24], PROTEIN [6], DD [21] and MUTAG [19]; (2) two datasets on graphs representing the response relations between users in Reddit: REDDIT-5K (5 classes) and REDDIT-12K (11 classes) [48]; and (3) two datasets on IMDB networks of actors/actresses: IMDB-BINARY (2 classes), and IMDB-MULTI (3 classes). (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | The 10 × 10-fold nested cross-validation is applied to evaluate our algorithm: there are 10 folds in the outer loop for evaluation of the model with the selected hyperparameters and 10 folds in the inner loop for hyperparameter tuning. (A nested cross-validation sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions general programming environments like Python and Matlab, but does not provide specific version numbers for these or for any other ancillary software components or libraries.
Experiment Setup | Yes | Specifically, we search among m ∈ {3, 4, 5, 6, 7, 8} and σ ∈ {0.001, 0.01, 0.1, 1, 10, 100}. The 10 × 10-fold nested cross-validation is applied to evaluate our algorithm: there are 10 folds in the outer loop for evaluation of the model with the selected hyperparameters and 10 folds in the inner loop for hyperparameter tuning. ... Our optimization procedure terminates when the change of the cost function remains below 10^-4 or the iteration number exceeds 2000. In our implementation, we use the Armijo-Goldstein line-search scheme to update the parameters in each (stochastic) gradient descent step. (An optimization-loop sketch follows the table.)
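
The datasets listed under Open Datasets are the standard TU graph-classification benchmarks, so they can be fetched independently of the authors' code. Below is a minimal, hypothetical loading sketch assuming the TUDataset loader from PyTorch Geometric, which the paper itself neither uses nor mentions; the PTC variant is assumed to be PTC_MR.

    # Hypothetical loading sketch; the paper does not specify a loader.
    # The TU collection hosts the same benchmarks under these names.
    from torch_geometric.datasets import TUDataset

    NAMES = ["NCI1", "NCI109", "PTC_MR", "PROTEINS", "DD", "MUTAG",
             "REDDIT-MULTI-5K", "REDDIT-MULTI-12K",
             "IMDB-BINARY", "IMDB-MULTI"]  # PTC_MR assumed for "PTC"

    for name in NAMES:
        dataset = TUDataset(root="data", name=name)  # downloads on first use
        print(name, len(dataset), "graphs,", dataset.num_classes, "classes")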
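
For the Dataset Splits protocol, here is a minimal sketch of 10 × 10-fold nested cross-validation, assuming scikit-learn with a generic RBF-kernel SVM on synthetic data as a stand-in for the paper's WKPI kernel classifier; the parameter grid is hypothetical.

    # Illustrative 10x10 nested CV; classifier, data, and grid are
    # stand-ins, not the paper's WKPI kernel or benchmark graphs.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)

    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # tuning folds
    outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)  # evaluation folds
    grid = {"C": [0.1, 1, 10, 100]}  # hypothetical hyperparameter grid

    model = GridSearchCV(SVC(kernel="rbf"), grid, cv=inner)  # inner loop: tune
    scores = cross_val_score(model, X, y, cv=outer)          # outer loop: evaluate
    print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")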
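
The Experiment Setup row pins down the optimization loop well enough for a sketch: gradient descent with an Armijo-Goldstein backtracking line search, terminating once the cost change stays below 10^-4 or after 2000 iterations. The quadratic objective below is a toy stand-in, not the paper's WKPI metric-learning cost, and armijo_step implements only the standard sufficient-decrease (Armijo) backtracking condition.

    import numpy as np

    def cost(w):
        return float(np.sum((w - 1.0) ** 2))  # toy objective, not WKPI's

    def grad(w):
        return 2.0 * (w - 1.0)

    def armijo_step(w, g, c=1e-4, beta=0.5, t=1.0):
        # Backtrack until the sufficient-decrease condition holds:
        # cost(w + t*d) <= cost(w) + c * t * <grad, d>, with d = -g.
        f0, d = cost(w), -g
        while cost(w + t * d) > f0 + c * t * g.dot(d):
            t *= beta
        return t

    w = np.zeros(5)
    prev = cost(w)
    for _ in range(2000):                  # iteration cap from the paper
        g = grad(w)
        w = w - armijo_step(w, g) * g      # line-searched gradient step
        cur = cost(w)
        if abs(prev - cur) < 1e-4:         # cost-change tolerance from the paper
            break
        prev = cur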