Initializing Then Refining: A Simple Graph Attribute Imputation Network
Authors: Wenxuan Tu, Sihang Zhou, Xinwang Liu, Yue Liu, Zhiping Cai, En Zhu, Changwang Zhang, Jieren Cheng
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four benchmark datasets verify the superiority of ITR against state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Wenxuan Tu, Sihang Zhou, Xinwang Liu, Yue Liu, Zhiping Cai, En Zhu (National University of Defense Technology, Changsha, China); Changwang Zhang (Tencent Technology, Shenzhen, China); Jieren Cheng (Hainan University, Haikou, China) |
| Pseudocode | No | The paper includes mathematical equations and a framework diagram, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not contain any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We conduct experiments to evaluate the proposed ITR on four benchmark datasets, including Cora [McCallum et al., 2000], Citeseer [Sen et al., 2008], Amazon Computer (Amac), and Amazon Photo (Amap) [Shchur et al., 2018]. |
| Dataset Splits | Yes | Specifically, 1) in the profiling task, we randomly sample 40% of nodes with attributes as the training set, and manually mask all attributes of the remaining 10% and 50% of nodes (i.e., attribute-missing samples) to form the validation set and the test set, respectively. ... 2) in the node classification task, the restored attributes are randomly split into 80% and 20% for training and testing. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using the Adam optimization algorithm and a GCN-based framework but does not specify the versions of any software libraries or dependencies used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We employ a symmetric backbone framework consisting of 4-layer GCNs and optimize it with the Adam optimization algorithm. The learning rate, the latent dimension, the dropout rate, and the weight decay are set to 1e-3, 64, 0.5, and 5e-4, respectively. (See the configuration sketch after the table.) |
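
The split protocol quoted in the Dataset Splits row can be reproduced with a simple index partition. The sketch below is a minimal illustration, not the authors' released code; the random seeding, helper names, and the exact attribute-masking mechanics are assumptions for demonstration only.

```python
import numpy as np

def profiling_split(num_nodes, seed=0):
    """Partition node indices for the attribute-imputation (profiling) task.

    Following the quoted protocol: 40% of nodes keep their attributes and
    serve as the training set; the attributes of the remaining 10% / 50%
    of nodes are masked and used as the validation / test sets.
    (Seeding and masking mechanics are assumptions.)
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(0.4 * num_nodes)
    n_val = int(0.1 * num_nodes)
    train_idx = perm[:n_train]                    # attribute-observed nodes
    val_idx = perm[n_train:n_train + n_val]       # attribute-missing (validation)
    test_idx = perm[n_train + n_val:]             # attribute-missing (test, ~50%)
    return train_idx, val_idx, test_idx

def classification_split(num_nodes, seed=0):
    """80% / 20% train/test split for the downstream node-classification task."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(0.8 * num_nodes)
    return perm[:n_train], perm[n_train:]

# Example usage: Cora has 2,708 nodes.
train_idx, val_idx, test_idx = profiling_split(2708)
```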
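
Likewise, the training configuration quoted in the Experiment Setup row can be expressed as a short PyTorch Geometric sketch, assuming a plain 4-layer GCN encoder. The layer arrangement, activation choice, input dimension, and use of `GCNConv` are assumptions, since the paper does not release code; only the Adam optimizer and the hyperparameter values (lr 1e-3, latent dimension 64, dropout 0.5, weight decay 5e-4) come from the paper.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNBackbone(torch.nn.Module):
    """Hypothetical 4-layer GCN with latent dimension 64 and dropout 0.5,
    mirroring the quoted hyperparameters (architecture details assumed)."""
    def __init__(self, in_dim, latent_dim=64, dropout=0.5):
        super().__init__()
        dims = [in_dim, latent_dim, latent_dim, latent_dim, latent_dim]
        self.convs = torch.nn.ModuleList(
            [GCNConv(dims[i], dims[i + 1]) for i in range(4)]
        )
        self.dropout = dropout

    def forward(self, x, edge_index):
        # ReLU + dropout between layers is an assumption; the paper only
        # specifies the depth, latent dimension, and dropout rate.
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
            x = F.dropout(x, p=self.dropout, training=self.training)
        return self.convs[-1](x, edge_index)

# Optimizer settings reported in the paper: Adam, lr 1e-3, weight decay 5e-4.
model = GCNBackbone(in_dim=1433)  # 1433 = Cora attribute dimension
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
```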