Open-Set Graph Domain Adaptation via Separate Domain Alignment
Authors: Yu Wang, Ronghang Zhu, Pengsheng Ji, Sheng Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real-world datasets show that our method outperforms existing methods on graph domain adaptation. |
| Researcher Affiliation | Collaboration | Yu Wang¹*, Ronghang Zhu², Pengsheng Ji³, Sheng Li⁴. ¹ LinkedIn Corporation; ² School of Computing, University of Georgia; ³ Department of Statistics, University of Georgia; ⁴ School of Data Science, University of Virginia |
| Pseudocode | Yes | Our method procedure is summarized in Algorithm 1. |
| Open Source Code | No | The paper does not provide any explicit link or statement about open-sourcing its code. |
| Open Datasets | Yes | Datasets: We leverage three commonly used paper citation networks, i.e., ACMv9 (between years 2000 and 2010), DBLPv7 (between years 2004 and 2008), and Citationv1 (before the year 2008), provided by ArnetMiner (Tang et al. 2008), and construct graphs based on these citation networks. |
| Dataset Splits | No | The paper describes its experimental protocols and label space settings but does not specify the train/validation/test dataset splits (e.g., percentages or exact counts) for reproducibility. |
| Hardware Specification | No | The paper does not specify any hardware details like GPU/CPU models, memory, or specific computing environments used for the experiments. |
| Software Dependencies | No | Our implementation is based on Pytorch (Paszke et al. 2019). The paper mentions PyTorch but does not provide a specific version number or other software dependencies with versioning. |
| Experiment Setup | Yes | Both GCNl and GCNg are two-layer structures; the hidden dimensions for the two layers in GCNl and GCNg are set as 128 and 16. The dropout rate is defined as 0.5. To ensure a fair comparison, identical dimensions are set for other baseline models. We optimize the model for 100 epochs by using the Adam optimizer with a learning rate of 0.005, momentum of 0.9, and weight decay of 5×10⁻⁴. The hyper-parameters γ, τ, and β are set as log(3)/2, 0.05, and 0.05, respectively. The number of steps k in the PPMI matrix for GCNg is defined as 3, which is the same as ASN. (A minimal sketch of this configuration appears below the table.) |
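
Since the paper does not release code, the following is a minimal PyTorch sketch of the reported configuration: two-layer GCN encoders with hidden dimensions 128 and 16, dropout 0.5, Adam with learning rate 0.005 and weight decay 5×10⁻⁴, and a k=3-step PPMI matrix for GCNg. The class names, the feature dimension, the PPMI construction (the standard one used by ASN-style models), and the reading of the quoted "momentum of 0.9" as Adam's β₁ are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def ppmi_matrix(adj: torch.Tensor, k: int = 3) -> torch.Tensor:
    """k-step PPMI matrix (standard construction; k = 3 per the paper)."""
    # Row-normalized transition matrix of the graph.
    trans = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    # Average the 1..k step transition probabilities.
    freq = torch.zeros_like(trans)
    step = torch.eye(adj.size(0))
    for _ in range(k):
        step = step @ trans
        freq = freq + step
    freq = freq / k
    # Positive PMI: log(p(i,j) / (p(i) * p(j))), clipped at zero.
    total = freq.sum()
    p_ij = freq / total
    p_i = freq.sum(dim=1, keepdim=True) / total
    p_j = freq.sum(dim=0, keepdim=True) / total
    pmi = torch.log(p_ij.clamp(min=1e-12) / (p_i @ p_j).clamp(min=1e-12))
    return pmi.clamp(min=0.0)


class GCNLayer(nn.Module):
    """One graph convolution: H' = S @ H @ W, where S is an adjacency-like matrix."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        return self.linear(s @ x)


class TwoLayerGCN(nn.Module):
    """Two-layer encoder with the reported dims (hidden 128, output 16), dropout 0.5."""

    def __init__(self, in_dim: int, hidden_dim: int = 128, out_dim: int = 16,
                 dropout: float = 0.5):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hidden_dim)
        self.gc2 = GCNLayer(hidden_dim, out_dim)
        self.dropout = dropout

    def forward(self, x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.gc1(x, s))
        h = F.dropout(h, p=self.dropout, training=self.training)
        return self.gc2(h, s)


# in_dim is a placeholder; the paper does not state the feature dimension here.
in_dim = 6775
gcn_l = TwoLayerGCN(in_dim)  # fed the normalized adjacency (local structure)
gcn_g = TwoLayerGCN(in_dim)  # fed the k-step PPMI matrix (global structure)

# Adam with the reported learning rate and weight decay; mapping the quoted
# "momentum of 0.9" to Adam's beta1 is an assumption.
optimizer = torch.optim.Adam(
    list(gcn_l.parameters()) + list(gcn_g.parameters()),
    lr=0.005, betas=(0.9, 0.999), weight_decay=5e-4,
)

# Training would run for 100 epochs; the loss terms (weighted by gamma = log(3)/2,
# tau = 0.05, beta = 0.05) are specific to the paper's method and omitted here.
```

In practice the adjacency passed to GCNl would be symmetrically normalized with self-loops (Â = D^(-1/2)(A + I)D^(-1/2)), the usual GCN preprocessing, while GCNg would receive `ppmi_matrix(adj, k=3)`.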