Masked Graph Convolutional Network
Authors: Liang Yang, Fan Wu, Yingkui Wang, Junhua Gu, Yuanfang Guo
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on transductive and inductive node classification tasks have demonstrated the superiority of the proposed method. |
| Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Hebei University of Technology, China; (2) Hebei Province Key Laboratory of Big Data Calculation, Hebei University of Technology, China; (3) College of Intelligence and Computing, Tianjin University, China; (4) School of Computer Science and Engineering, Beihang University, China |
| Pseudocode | No | The paper provides mathematical formulations and descriptions of its model and algorithms, but it does not include any explicit pseudocode blocks or sections labeled 'Algorithm'. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | Yes | For the transductive learning task, the experiments are conducted on three commonly utilized citation networks, Cora, CiteSeer and PubMed, as shown in Table 2. Besides, another bipartite large network, NELL, is constructed from a knowledge graph as shown in Table 2. For the inductive learning task, the protein-protein interaction (PPI) dataset [Zitnik and Leskovec, 2017] is employed. |
| Dataset Splits | Yes | In each citation network, 20 nodes per class, 500 nodes and 1000 nodes are employed for training, validation and performance assessment, respectively. For the PPI dataset, algorithms are trained on 20 graphs, validated on 2 graphs and tested on 2 graphs, accordingly. |
| Hardware Specification | No | The paper does not specify any details regarding the hardware used for running the experiments, such as specific GPU models, CPU types, or memory configurations. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation of the described methods or experiments. |
| Experiment Setup | No | The paper describes the model formulation and training process in general terms but does not provide concrete details about hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings used in the experiments. |
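The citation-network splits quoted above (20 labeled nodes per class for training, 500 for validation, 1000 for testing) follow the standard Planetoid-style protocol. A minimal sketch of constructing such masks is shown below; the function name and random-sampling strategy are illustrative assumptions, not the paper's own implementation, and published benchmarks typically ship fixed split indices rather than resampling.

```python
import numpy as np

def planetoid_style_split(labels, num_classes, train_per_class=20,
                          num_val=500, num_test=1000, seed=0):
    """Build boolean train/val/test masks in the Planetoid style:
    `train_per_class` labeled nodes per class for training, then
    `num_val` validation and `num_test` test nodes from the rest.
    (Hypothetical helper for illustration only.)"""
    rng = np.random.default_rng(seed)
    n = len(labels)
    train_mask = np.zeros(n, dtype=bool)
    # Sample a fixed number of training nodes from each class.
    for c in range(num_classes):
        class_idx = np.flatnonzero(labels == c)
        chosen = rng.choice(class_idx, size=train_per_class, replace=False)
        train_mask[chosen] = True
    # Draw validation and test nodes from the remaining pool.
    rest = rng.permutation(np.flatnonzero(~train_mask))
    val_mask = np.zeros(n, dtype=bool)
    test_mask = np.zeros(n, dtype=bool)
    val_mask[rest[:num_val]] = True
    test_mask[rest[num_val:num_val + num_test]] = True
    return train_mask, val_mask, test_mask

# Toy example with Cora-like sizes (2708 nodes, 7 classes):
labels = np.random.default_rng(1).integers(0, 7, size=2708)
tr, va, te = planetoid_style_split(labels, num_classes=7)
print(tr.sum(), va.sum(), te.sum())  # 140 500 1000
```

Note that the three masks are mutually disjoint by construction, so no node appears in more than one split.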