UNR-Explainer: Counterfactual Explanations for Unsupervised Node Representation Learning Models

Authors: Hyunju Kang, Geonhee Han, Hogun Park

ICLR 2024

Reproducibility variables, assessed results, and LLM responses (evidence quoted from the paper):
Research Type: Experimental
LLM Response: "The proposed method demonstrates superior performance on diverse datasets for unsupervised GraphSAGE and DGI. Our codes are available at https://github.com/hjkng/unrexplainer." The paper's experiments section (5 Experiments) covers 5.1 Datasets, 5.2 Baseline Methods, 5.3 Evaluation Metrics, and 5.4 RQ1: Performance of UNR-Explainer and Other Baseline Models.
Researcher Affiliation: Academia
LLM Response: Hyunju Kang, Geonhee Han, Hogun Park; Department of Artificial Intelligence, Sungkyunkwan University, Suwon, Republic of Korea; {neutor,gunhee8178,hogunpark}@skku.edu
Pseudocode: Yes
LLM Response: Algorithm 1: UNR-Explainer with restart; Algorithm 2: Importance(f_unsup(·), G, G_s, v, k)
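To make the signature of Algorithm 2 concrete, below is a minimal sketch of an importance function, assuming (our reading of the method, not the paper's exact definition) that importance measures how much node v's top-k nearest neighbors in embedding space change once the candidate subgraph G_s is perturbed. All names here are hypothetical.

```python
import numpy as np

def importance(f_unsup, G, G_s, v, k):
    """Hypothetical sketch: fraction of v's top-k embedding-space neighbors
    displaced when the edges of subgraph G_s are removed from G.
    `f_unsup` is assumed to map a networkx graph to {node: np.ndarray}."""
    def topk_neighbors(graph):
        emb = f_unsup(graph)
        dists = {u: np.linalg.norm(emb[v] - emb[u])
                 for u in graph.nodes if u != v}
        return set(sorted(dists, key=dists.get)[:k])

    before = topk_neighbors(G)
    perturbed = G.copy()
    perturbed.remove_edges_from(G_s.edges)  # counterfactual perturbation
    after = topk_neighbors(perturbed)
    # Counterfactual effect: how many of the original top-k neighbors changed.
    return len(before - after) / k
```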
Open Source Code: Yes
LLM Response: "Our codes are available at https://github.com/hjkng/unrexplainer."
Open Datasets: Yes
LLM Response: "The experiments are conducted on three synthetic datasets (Ying et al., 2019; Luo et al., 2020) and three real-world datasets from PyTorch Geometric (Fey & Lenssen, 2019). The synthetic datasets (BA-Shapes, Tree-Cycles, and Tree-Grid (Ying et al., 2019)) are used... The real-world datasets (Cora, CiteSeer, and PubMed (Grover & Leskovec, 2016)) are citation networks... Additionally, we use the NIPS dataset from Kaggle for the case study of our method (Hamner, 2017)."
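For reference, the three citation datasets ship with PyTorch Geometric's Planetoid loader; a minimal loading sketch follows (the root path is a placeholder):

```python
from torch_geometric.datasets import Planetoid

# Cora, CiteSeer, and PubMed are bundled with PyTorch Geometric.
for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root="data/", name=name)  # "data/" is a placeholder path
    data = dataset[0]
    print(name, data.num_nodes, data.num_edges)
```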
Dataset Splits: No
LLM Response: "For the node classification task, we use the synthetic datasets and divide the embedding vectors into random train and test subsets with 80% and 20% of the data, respectively. For the real-world datasets, we perform the link prediction task, so we split the edges of the graph into random train and test subsets with 90% and 10% of the data."
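A minimal sketch of how such splits could be produced, assuming scikit-learn for the 80/20 embedding split and PyTorch Geometric's RandomLinkSplit for the 90/10 edge split; the exact procedure and seeds are not specified in the paper, and the embeddings and labels below are random stand-ins.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import RandomLinkSplit

# 80/20 random split of node embeddings for the classification task
# (random stand-ins for learned embeddings and motif labels).
embeddings = np.random.rand(700, 64)
labels = np.random.randint(0, 4, size=700)
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2)

# 90/10 random edge split for link prediction on the real-world datasets.
data = Planetoid(root="data/", name="Cora")[0]
transform = RandomLinkSplit(num_val=0.0, num_test=0.1)
train_data, val_data, test_data = transform(data)
```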
Hardware Specification: No
LLM Response: The paper does not explicitly describe the hardware used for the experiments; no specific GPU models, CPU models, or cloud computing instances are mentioned.
Software Dependencies: Yes
LLM Response: D.9 Packages Required for Implementations:
python == 3.9.7, pytorch == 1.13.1, pytorch-cluster == 1.6.0, pyg == 2.2.0, pytorch-scatter == 2.1.0, pytorch-sparse == 0.6.16, cuda == 11.7.1, numpy == 1.23.5, tensorboardx == 2.2, networkx == 3.0, scikit-learn == 1.1.3, scipy == 1.9.3, pandas == 1.5.2
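A small convenience sketch (not from the paper) for checking an installed environment against these pins; the PyPI distribution names are our assumed mapping of the package labels above.

```python
import importlib.metadata as md

# Pins taken from Appendix D.9; distribution names are an assumption.
pins = {
    "torch": "1.13.1", "torch-geometric": "2.2.0", "numpy": "1.23.5",
    "networkx": "3.0", "scikit-learn": "1.1.3", "scipy": "1.9.3",
    "pandas": "1.5.2",
}
for pkg, want in pins.items():
    try:
        have = md.version(pkg)
    except md.PackageNotFoundError:
        have = "not installed"
    print(f"{pkg}: {have} (pinned {want})", "OK" if have == want else "MISMATCH")
```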
Experiment Setup: Yes
LLM Response: "D.5 Experimental Setup ... We set the hyperparameters as follows: the batch size as 256, the number of hidden dimensions as 64, the number of hidden layers as 2, the dropout as 0.5, and the optimizer as Adam. On BA-Shapes, we set the number of epochs as 100 and the learning rate as 0.01. On other datasets, we set the number of epochs as 150 and the learning rate as 0.01." The appendix also includes D.7 Implementation of Unsupervised Node Representation Models.
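As a hedged sketch, these hyperparameters could be wired into a PyTorch Geometric GraphSAGE encoder as follows; the input dimension and the training loop are assumptions not stated in the quoted setup.

```python
import torch
from torch_geometric.nn import GraphSAGE

# Hyperparameters quoted from Appendix D.5; in_channels is a placeholder,
# since it depends on the dataset and is not quoted above.
model = GraphSAGE(
    in_channels=1433,    # e.g. Cora feature dimension (assumption)
    hidden_channels=64,  # "number of hidden dimensions as 64"
    num_layers=2,        # "number of hidden layers as 2"
    dropout=0.5,         # "dropout as 0.5"
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # "optimizer as Adam"
epochs = 150  # 100 on BA-Shapes, 150 on the other datasets
```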