Structural Re-weighting Improves Graph Domain Adaptation

Authors: Shikun Liu, Tianchun Li, Yongbin Feng, Nhan Tran, Han Zhao, Qiang Qiu, Pan Li

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A novel approach, called structural reweighting (StruRW), is proposed to address this issue and is tested on synthetic graphs, four benchmark datasets, and a new application in HEP (high energy physics). StruRW has shown significant performance improvements over the baselines in settings with large graph-structure shifts, and reasonable improvements when node-attribute shift dominates.
Researcher Affiliation | Collaboration | (1) Department of Electrical and Computer Engineering, Georgia Institute of Technology, Georgia, U.S.A.; (2) Department of Electrical and Computer Engineering, Purdue University, West Lafayette, U.S.A.; (3) Fermi National Accelerator Laboratory, Batavia, U.S.A.; (4) Department of Computer Science, University of Illinois Urbana-Champaign, Champaign, U.S.A.
Pseudocode | Yes | The pseudocode is presented in Algorithm 1. (A hedged sketch of the reweighting step appears after this table.)
Open Source Code | Yes | Our code is available at: https://github.com/Graph-COM/StruRW
Open Datasets | Yes | DBLP and ACM are two paper citation networks obtained from DBLP and ACM, respectively. Each node represents a paper, and each edge indicates a citation between two papers... The original networks are provided by ArnetMiner (Tang et al., 2008).
Dataset Splits | Yes | Specifically, we use 20 percent of node labels in the target domain for validation, and the remaining 80 percent are held out for testing. (See the split sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments. It only discusses model architectures and hyperparameters.
Software Dependencies | No | The paper mentions using GCN as a backbone and compares against other methods like DANN and Mixup, but it does not specify version numbers for any software libraries, frameworks, or dependencies used in its implementation.
Experiment Setup | Yes | Besides the usual hyperparameter tuning (learning rate, model architecture, and number of epochs), StruRW relies on three hyperparameters: the epoch m at which StruRW starts, the period t between weight updates, and λ, the degree to which the reweighted message is adopted. The specific values of these hyperparameters and of some baseline hyperparameters are reported in Appendix E. (A skeleton showing where m, t, and λ act follows the table.)
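
The report does not reproduce Algorithm 1 itself. As a rough illustration of the core reweighting idea only, the sketch below estimates class-pair edge-connection densities in the source and target graphs and takes their ratio as edge weights; all names are assumptions, and `tgt_pseudo_labels` stands in for the pseudo-labels the paper derives from model predictions on the target graph.

```python
import numpy as np

def strurw_edge_ratios(src_edges, src_labels, tgt_edges, tgt_pseudo_labels,
                       n_classes, eps=1e-8):
    """Minimal sketch: ratio of target to source class-pair edge densities.

    Function and argument names are assumptions, not the paper's code.
    `src_edges`/`tgt_edges` are iterables of (node_u, node_v) index pairs.
    """
    def block_density(edges, labels):
        # counts[i, j]: number of edges whose endpoints carry classes (i, j)
        counts = np.zeros((n_classes, n_classes))
        for u, v in edges:
            counts[labels[u], labels[v]] += 1
        sizes = np.bincount(labels, minlength=n_classes).astype(float)
        pairs = np.outer(sizes, sizes)  # candidate node pairs per class pair
        return counts / np.maximum(pairs, eps)

    B_src = block_density(src_edges, src_labels)
    B_tgt = block_density(tgt_edges, tgt_pseudo_labels)
    # gamma[i, j] rescales messages on source edges between classes i and j
    return B_tgt / np.maximum(B_src, eps)
```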
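As a concrete reading of the stated split protocol, here is a minimal sketch (all names assumed) of a random 20/80 validation/test split over target-domain nodes:

```python
import numpy as np

def target_split(num_target_nodes, val_frac=0.2, seed=0):
    """Randomly split target-domain node indices: val_frac for validation,
    the remainder held out for testing."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_target_nodes)
    cut = int(val_frac * num_target_nodes)
    return perm[:cut], perm[cut:]

val_idx, test_idx = target_split(num_target_nodes=1000)  # 200 val / 800 test
```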
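To make the roles of m, t, and λ concrete, below is a hypothetical training-loop skeleton; the schedule logic, the example values, and all names are assumptions rather than the paper's implementation.

```python
# Illustrative values only; the paper's tuned values are in its Appendix E.
start_epoch_m = 100    # epoch at which StruRW reweighting begins
update_period_t = 5    # re-estimate edge weights every t epochs
lam = 0.7              # degree to which the reweighted message is adopted

def mixed_weight(gamma_ij, lam):
    """Blend a reweighted message (scaled by gamma_ij) with the original one."""
    return lam * gamma_ij + (1.0 - lam) * 1.0

for epoch in range(200):
    if epoch >= start_epoch_m and (epoch - start_epoch_m) % update_period_t == 0:
        pass  # recompute the class-pair ratio matrix gamma here (first sketch)
    # during message passing, each source edge between classes (i, j) would be
    # scaled by mixed_weight(gamma[i, j], lam) once reweighting is active
```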