Joint Domain Adaptive Graph Convolutional Network
Authors: Niya Yang, Ye Wang, Zhizhi Yu, Dongxiao He, Xin Huang, Di Jin
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation on diverse real-world datasets substantiates the superiority of our proposed method, marking a significant advancement over existing state-of-the-art graph domain adaptation algorithms. |
| Researcher Affiliation | Academia | Niya Yang¹, Ye Wang², Zhizhi Yu¹, Dongxiao He¹, Xin Huang³, and Di Jin¹. ¹College of Intelligence and Computing, Tianjin University, Tianjin, China; ²Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China; ³Department of Computer Science, Hong Kong Baptist University, Hong Kong, China |
| Pseudocode | No | The paper describes methods through mathematical equations and textual explanations, but does not include a structured pseudocode block or algorithm figure. |
| Open Source Code | No | The paper does not provide any statement or link regarding the release of its source code. |
| Open Datasets | Yes | We adopt two-category graphs from Citation [Li et al., 2015] and Blog [Li et al., 2015] to evaluate the performance of our proposed JDA-GCN, as shown in Table 1. |
| Dataset Splits | No | The paper states 'We use all labeled source samples and all unlabeled target samples' but does not specify explicit training, validation, and test dataset splits (e.g., percentages, counts, or standard split names) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions 'implemented in Pytorch' but does not specify a version number or other software dependencies with specific versions. |
| Experiment Setup | Yes | For our proposed JDA-GCN, we set the hidden layers in both source and target networks from 128 to 16, the dropout for each GCN layer to 0.3, and a fixed learning rate to 1e-4. In addition, we set balance parameters γ1 = 1 and γ2 = 0.8 on Citation and Blog, respectively, and set balance parameter β = 0.9 on Citation and set β = 0.95 on Blog. |
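
The Experiment Setup row pins down the reported architecture and optimizer settings: hidden GCN layers of size 128 and then 16 in both the source and target networks, dropout of 0.3 per GCN layer, and a fixed learning rate of 1e-4. Below is a minimal PyTorch sketch of that configuration. Since no source code is released (see the Open Source Code row), the class names, feature and class dimensions, and the choice of separate source/target encoders are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the reported JDA-GCN hyperparameter setup.
# Hidden sizes (128 -> 16), dropout 0.3, and lr 1e-4 come from the paper;
# everything else (names, dimensions) is a placeholder assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """Plain GCN layer: H' = A_hat @ H @ W.

    A_hat is the (symmetrically) normalized adjacency matrix,
    assumed to be precomputed upstream.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        return a_hat @ self.weight(h)


class GCNEncoder(nn.Module):
    """Two hidden GCN layers (128 -> 16) with dropout 0.3, per the paper."""

    def __init__(self, in_dim, num_classes, hidden1=128, hidden2=16, dropout=0.3):
        super().__init__()
        self.gcn1 = GCNLayer(in_dim, hidden1)
        self.gcn2 = GCNLayer(hidden1, hidden2)
        self.classifier = nn.Linear(hidden2, num_classes)
        self.dropout = dropout

    def forward(self, a_hat, x):
        h = F.relu(self.gcn1(a_hat, x))
        h = F.dropout(h, p=self.dropout, training=self.training)
        h = F.relu(self.gcn2(a_hat, h))
        h = F.dropout(h, p=self.dropout, training=self.training)
        return self.classifier(h)


# Separate encoders for the source and target graphs, optimized jointly
# at the paper's fixed learning rate of 1e-4. Feature/class sizes are
# placeholders, not values taken from the datasets.
source_net = GCNEncoder(in_dim=6775, num_classes=5)
target_net = GCNEncoder(in_dim=6775, num_classes=5)
optimizer = torch.optim.Adam(
    list(source_net.parameters()) + list(target_net.parameters()), lr=1e-4
)
```

The balance parameters γ1, γ2, and β quoted in the table would weight additional loss terms (structural and domain-alignment objectives) against the classification loss; since the table does not spell out those terms, they are omitted from the sketch.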