DropMessage: Unifying Random Dropping for Graph Neural Networks

Authors: Taoran Fang, Zhiqing Xiao, Chunping Wang, Jiarong Xu, Xuan Yang, Yang Yang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate our proposed method, we conduct experiments aiming at multiple tasks on five public datasets and two industrial datasets with various backbone models. The experimental results show that DropMessage has the advantages of both effectiveness and generalization, and can significantly alleviate the problems mentioned above.
Researcher Affiliation | Collaboration | Zhejiang University; FinVolution Group; Fudan University
Pseudocode | No | The paper describes its algorithm in text and mathematical formulas but does not include structured pseudocode or a clearly labeled algorithm block (a hedged implementation sketch follows this table).
Open Source Code | No | The paper states 'A detailed version with full appendix can be found on arXiv: https://arxiv.org/abs/2204.10037', but this link points to the paper itself; there is no unambiguous statement that the authors are releasing the code for this work, nor a direct link to a source-code repository.
Open Datasets | Yes | We employ 7 graph datasets in our experiments: 5 public datasets (Cora, CiteSeer, PubMed, ogbn-arxiv, Flickr) and 2 industrial datasets (FinV, Telecom). Cora, CiteSeer, PubMed, and ogbn-arxiv are citation networks widely used as graph benchmarks (Sen et al. 2008; Hu et al. 2020). Flickr is provided by Flickr, the largest photo-sharing website (Zeng et al. 2020).
Dataset Splits | No | The paper mentions evaluating on datasets for node classification and link prediction tasks, but it does not provide specific dataset split information such as exact percentages, sample counts, or explicit references to predefined splits used for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | We fix Dropout, DropEdge, and DropNode on the initial input and fix DropMessage at the start point of the message propagation process... In (backbone)_nodewise settings, we set the dropping rate equal to its upper bound δ_i = 1 − 1/d_i for each node. In (backbone)_average settings, we set the dropping rate δ_i = 0.75 + ε_i, where ε_i ∼ Uniform(−0.15, 0.15).
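Since the paper includes no pseudocode and no released code (see the table rows above), the following is a minimal sketch of the message-level dropping the title describes: elements of the per-edge message matrix are dropped independently and the survivors rescaled, as in inverted dropout. The function name drop_message, the tensor layout, and the example graph are illustrative assumptions, not the authors' implementation.

```python
import torch

def drop_message(messages: torch.Tensor, drop_rate: float,
                 training: bool = True) -> torch.Tensor:
    """Element-wise random dropping on the message matrix.

    messages: (num_edges, feature_dim), one row per directed edge.
    Each element is kept with probability 1 - drop_rate and the survivors
    are rescaled by 1 / (1 - drop_rate) so the expected message is
    unchanged (inverted dropout applied to messages rather than to
    features, edges, or whole nodes).
    """
    if not training or drop_rate <= 0.0:
        return messages
    keep_prob = 1.0 - drop_rate
    mask = torch.bernoulli(torch.full_like(messages, keep_prob))
    return messages * mask / keep_prob

if __name__ == "__main__":
    # Tiny usage example: 3 nodes, 4 directed edges, sum aggregation.
    x = torch.randn(3, 8)
    edge_index = torch.tensor([[0, 1, 2, 2],   # source nodes
                               [1, 2, 0, 1]])  # destination nodes
    messages = x[edge_index[0]]                       # gather per-edge messages
    messages = drop_message(messages, drop_rate=0.5)  # drop before aggregation
    out = torch.zeros_like(x).index_add_(0, edge_index[1], messages)
```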
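The experiment-setup row above fixes per-node dropping rates in two ways. A hedged reading of those formulas follows; the helper names, the degree clamp for isolated nodes, and indexing each edge's rate by its source node are assumptions made here, not details confirmed by the quoted text.

```python
import torch

def nodewise_rates(degrees: torch.Tensor) -> torch.Tensor:
    """Nodewise setting: delta_i = 1 - 1/d_i (the quoted upper bound).

    Clamping degree-0 nodes to d_i = 1 (giving rate 0) is an assumption
    to keep the formula defined; such nodes emit no messages anyway.
    """
    return 1.0 - 1.0 / degrees.clamp(min=1).float()

def average_rates(num_nodes: int) -> torch.Tensor:
    """Average setting: delta_i = 0.75 + eps_i, eps_i ~ Uniform(-0.15, 0.15)."""
    eps = torch.empty(num_nodes).uniform_(-0.15, 0.15)
    return 0.75 + eps

def drop_message_per_node(messages: torch.Tensor, src: torch.Tensor,
                          rates: torch.Tensor) -> torch.Tensor:
    """Apply per-node rates: each edge uses its source node's rate."""
    keep = (1.0 - rates[src]).unsqueeze(1)            # (num_edges, 1)
    mask = torch.bernoulli(keep.expand_as(messages))  # element-wise keep mask
    return messages * mask / keep
```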