Structural Fairness-aware Active Learning for Graph Neural Networks

Authors: Haoyu Han, Xiaorui Liu, Li Ma, MohamadAli Torkamani, Hui Liu, Jiliang Tang, Makoto Yamada

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that the proposed method not only improves the GNNs performance but also paves the way for more fair results."
Researcher Affiliation | Collaboration | 1 Michigan State University, 2 North Carolina State University, 3 Shanghai Jiaotong University, 4 Amazon, 5 Okinawa Institute of Science and Technology
Pseudocode | Yes | Algorithm 1: Algorithm of SCARCE-Structure.
Open Source Code | Yes | "The code is available at https://anonymous.4open.science/r/SCARCE-D804/."
Open Datasets | Yes | "We perform experiments utilizing six widely used real-world graph datasets, encompassing three citation datasets, i.e., Cora, Citeseer, and Pubmed (Sen et al., 2008), two co-purchase datasets from Amazon, i.e., Computers and Photo (Shchur et al., 2018), and one OGB dataset, i.e., ogbn-arxiv (Hu et al., 2020)."
Dataset Splits | No | "Due to the lack of validation set in the AL setup, we train the GNN model with fixed 300 epochs and evaluate over the full graph."
Hardware Specification | No | The paper does not specify the exact hardware used (e.g., specific GPU or CPU models).
Software Dependencies | No | The paper mentions GNN models like GCN, APPNP, GAT, GCNII, and activation functions like ReLU, but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Learning Rate: 0.01; Dropout Rate: 0.5; Weight Decay: 0.0001; Hidden Size: 16; Epochs: 300; Activation Function: ReLU
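The reported hyperparameters can be collected into a single configuration object for a reproduction attempt. This is a minimal sketch under stated assumptions: the dictionary keys, the `fixed_epoch_schedule` helper, and its name are illustrative, not taken from the authors' code, and the actual training script in the linked repository may organize these values differently.

```python
# Hypothetical configuration mirroring the hyperparameters reported in the
# paper's experiment setup; names are illustrative, not from the authors' code.
EXPERIMENT_SETUP = {
    "learning_rate": 0.01,
    "dropout_rate": 0.5,
    "weight_decay": 1e-4,   # reported as 0.0001
    "hidden_size": 16,
    "epochs": 300,          # fixed, since the AL setup has no validation set
    "activation": "relu",
}

def fixed_epoch_schedule(cfg):
    """Return the epoch indices for a fixed-length run.

    The paper trains for a fixed 300 epochs with no early stopping,
    because the active-learning setup provides no validation split.
    """
    return list(range(cfg["epochs"]))
```

A reproduction would feed these values into the optimizer (learning rate and weight decay), the model definition (hidden size, dropout, activation), and the training loop (epoch count); since there is no validation set, the final model is simply the one produced after the last epoch.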