Semi-supervised Domain Adaptation in Graph Transfer Learning

Authors: Ziyue Qiao, Xiao Luo, Meng Xiao, Hao Dong, Yuanchun Zhou, Hui Xiong

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on a range of publicly accessible datasets validate the effectiveness of our proposed SGDA in different experimental settings."
Researcher Affiliation | Academia | (1) Jiangmen Laboratory of Carbon Science and Technology, Jiangmen; (2) The Hong Kong University of Science and Technology (Guangzhou), Guangzhou; (3) Guangzhou HKUST Fok Ying Tung Research Institute, Guangzhou; (4) Computer Network Information Center, Chinese Academy of Sciences, Beijing; (5) University of Chinese Academy of Sciences, Beijing
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a specific repository link or an explicit statement about the release of its source code.
Open Datasets | Yes | "We conduct experiments on three real-world graphs: ACMv9 (A), Citationv1 (C), and DBLPv7 (D) from ArnetMiner [Tang et al., 2008]."
Dataset Splits | Yes | "We randomly select 5% of nodes in the source graph as labeled nodes and others as unlabeled nodes while the target graph is completely unlabeled. ... We evaluate the performance of different methods with the label rate of the source graph as 1%, 5%, 7%, 9%, and 10%, respectively." (A runnable sketch of this split follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., libraries, frameworks, or solvers).
Experiment Setup | Yes | "We set the loss weights λ1 always as 1 and λ2 ∈ [0, 1] as a dynamic value that is linearly increased with the training epoch, i.e., λ2 = m/M where m is the current epoch and M is the maximum epoch. We randomly initialize the ξ under the uniform distribution U(−ϵ, ϵ) and set the ϵ always as 0.5. We set the scale range α and β always as 0.8 and 1.2. We train SGDA for 200 epochs with the learning rate as 0.001, the weight decay as 0.001, and the dropout rate as 0.1 on all datasets." (A configuration sketch follows the table.)
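
The split protocol in the Dataset Splits row is concrete enough to sketch. Below is a minimal sketch assuming NumPy; the 5% default and the swept label rates come from the paper, while `split_source_nodes`, the seed, and the node count are hypothetical illustrations (SGDA's code is not released).

```python
import numpy as np

def split_source_nodes(num_nodes: int, label_rate: float = 0.05, seed: int = 0) -> np.ndarray:
    """Randomly mark `label_rate` of source-graph nodes as labeled.

    All remaining source nodes, and every target-graph node, stay unlabeled.
    """
    rng = np.random.default_rng(seed)
    labeled = np.zeros(num_nodes, dtype=bool)
    n_labeled = int(round(label_rate * num_nodes))
    labeled[rng.permutation(num_nodes)[:n_labeled]] = True
    return labeled

# The paper evaluates source label rates of 1%, 5%, 7%, 9%, and 10%.
for rate in (0.01, 0.05, 0.07, 0.09, 0.10):
    mask = split_source_nodes(num_nodes=10_000, label_rate=rate)  # node count is illustrative
    print(f"label rate {rate:.0%}: {mask.sum()} labeled / {mask.size} nodes")
```

Running the loop prints the labeled-node count for each of the five evaluated label rates.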
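The Experiment Setup row likewise maps directly onto a training configuration. The following PyTorch sketch encodes the linear λ2 schedule, the ξ initialization, and the reported optimizer settings; only the numeric values come from the paper, and every name here (`lambda2_schedule`, `init_xi`, the stand-in model) is an assumption.

```python
import torch

EPOCHS, LR, WEIGHT_DECAY, DROPOUT = 200, 1e-3, 1e-3, 0.1
LAMBDA1, EPSILON = 1.0, 0.5   # λ1 fixed at 1; ε for the uniform init U(-ε, ε)
ALPHA, BETA = 0.8, 1.2        # reported scale range

def lambda2_schedule(epoch: int, max_epochs: int = EPOCHS) -> float:
    """λ2 = m / M: rises linearly from 0 toward 1 over training."""
    return epoch / max_epochs

def init_xi(*shape: int) -> torch.Tensor:
    """Draw ξ from the uniform distribution U(-ε, ε) with ε = 0.5."""
    return torch.empty(*shape).uniform_(-EPSILON, EPSILON)

# Stand-in model so the snippet runs; the actual SGDA architecture is not public.
model = torch.nn.Sequential(torch.nn.Dropout(DROPOUT), torch.nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)

for epoch in range(EPOCHS):
    lam2 = lambda2_schedule(epoch)
    # the full objective would combine losses weighted as: L_main + LAMBDA1 * L_aux + lam2 * L_adapt
```

Note that Adam is itself an assumption: the paper specifies the learning rate and weight decay but, in the quoted passage, not the optimizer.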