Explainable Automated Graph Representation Learning with Hyperparameter Importance

Authors: Xin Wang, Shuyi Fan, Kun Kuang, Wenwu Zhu

ICML 2021

Reproducibility Variable Result LLM Response
Research Type Experimental "We conduct extensive experiments on several real-world datasets to demonstrate the superiority of e-AutoGR over various state-of-the-art methods in terms of both model explainability and performance."
Researcher Affiliation Academia 1Department of Computer Science and Technology, Tsinghua University, Beijing, China 2College of Computer Science and Technology, Zhejiang University, Hangzhou, China.
Pseudocode Yes Algorithm 1 Hyperparameter Decorrelation Weighting Regression (HyperDeco); Algorithm 2 Explainable Automated Graph Representation (e-AutoGR)
Open Source Code No The paper provides GitHub links in footnotes for the third-party graph representation algorithms (DeepWalk, AROPE, GCN) used for testing, but it does not provide an explicit statement or link for the source code of its own proposed methodology (e-AutoGR or HyperDeco).
Open Datasets Yes Our experiments are performed on three real-world datasets: i) BlogCatalog is a social network... ii) Wikipedia is a word co-occurrence network... iii) Pubmed is a citation network... Footnotes provide URLs: http://socialcomputing.asu.edu/datasets/BlogCatalog3, https://snap.stanford.edu/node2vec/, https://github.com/tkipf/gcn/tree/master/gcn/data
Dataset Splits No The paper mentions withholding data for testing, but does not explicitly specify a separate validation set or split.
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU models, CPU models, memory) used for running its experiments.
Software Dependencies No The paper does not specify version numbers for any software dependencies or libraries used in their implementation.
Experiment Setup Yes For DeepWalk, the number of random walks is chosen between 2 and 20, the length of each random walk between 2 and 80, and the window size between 2 and 20. For AROPE, the weights of the different-order proximities are chosen between 0.0001 and 0.1. For GCN, the number of training epochs is chosen between 2 and 300, the number of neurons in each layer between 2 and 64, the learning rate between 0.0001 and 0.1, the dropout rate between 0.1 and 0.9, and the norm-2 regularization between 0.00001 and 0.001.
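The search ranges above can be captured as a configuration sketch. This is a hypothetical illustration only (the parameter names, the `SEARCH_SPACES` dict, and the `sample_config` helper are ours, not the paper's code); scale-like parameters such as learning rate and regularization are sampled in log space, a common but assumed choice.

```python
import math
import random

# Hypothetical encoding of the per-model hyperparameter ranges quoted above.
# Each entry: name -> (kind, lower bound, upper bound).
SEARCH_SPACES = {
    "DeepWalk": {
        "num_walks": ("int", 2, 20),
        "walk_length": ("int", 2, 80),
        "window_size": ("int", 2, 20),
    },
    "AROPE": {
        "proximity_weight": ("log", 1e-4, 1e-1),
    },
    "GCN": {
        "epochs": ("int", 2, 300),
        "hidden_units": ("int", 2, 64),
        "learning_rate": ("log", 1e-4, 1e-1),
        "dropout": ("float", 0.1, 0.9),
        "weight_decay": ("log", 1e-5, 1e-3),  # norm-2 regularization
    },
}

def sample_config(model, rng=random):
    """Draw one random configuration from a model's search space."""
    config = {}
    for name, (kind, lo, hi) in SEARCH_SPACES[model].items():
        if kind == "int":
            config[name] = rng.randint(lo, hi)
        elif kind == "log":
            # uniform in log space, so each order of magnitude is equally likely
            config[name] = math.exp(rng.uniform(math.log(lo), math.log(hi)))
        else:
            config[name] = rng.uniform(lo, hi)
    return config
```

For example, `sample_config("GCN")` returns one candidate configuration whose values all lie inside the ranges listed in the setup description.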