Generated Graph Detection

Authors: Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang

ICML 2023

Reproducibility assessment: each entry below gives the variable, the assessed result, and the LLM's supporting response.
Research Type: Experimental. Extensive experiments show that all the models are qualified for generated graph detection, with specific models having advantages in specific scenarios.
Researcher Affiliation: Collaboration. CISPA Helmholtz Center for Information Security; Stanford University; Salesforce Research; NetApp.
Pseudocode: No. The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. Our code is available at https://github.com/Yvonnemamama/GGD
Open Datasets: Yes. We use 7 benchmark datasets from TUDataset (Morris et al., 2020) to evaluate the performance: AIDS (Riesen & Bunke, 2008), Alchemy (Chen et al., 2019), Deezer ego nets (abbreviated as Deezer) (Rozemberczki et al., 2020), DBLP (DBL), GitHub StarGazer (abbreviated as GitHub) (Rozemberczki et al., 2020), COLLAB (Yanardag & Vishwanathan, 2015), and Twitch ego nets (abbreviated as Twitch) (Rozemberczki et al., 2020).
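
All seven benchmarks are distributed through the TUDataset collection, so they can be fetched programmatically. Below is a minimal loading sketch using PyTorch Geometric's TUDataset wrapper; the paper does not say how the GGD repository ingests the data, and the identifier strings are a best-guess mapping from the paper's names onto TUDataset's, so both should be checked against the repository.

```python
from torch_geometric.datasets import TUDataset

# Best-guess mapping from the paper's dataset names to TUDataset
# identifiers (an assumption; confirm against the GGD repository).
NAMES = ["AIDS", "alchemy_full", "deezer_ego_nets", "DBLP_v1",
         "github_stargazers", "COLLAB", "twitch_egos"]

# TUDataset downloads and caches each benchmark under `root` on first use.
datasets = {name: TUDataset(root="data/TUDataset", name=name) for name in NAMES}
for name, ds in datasets.items():
    print(f"{name}: {len(ds)} graphs")
```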
Dataset Splits: No. The ratio of the training set to the testing set is 8:2, and the contrastive learning-based model is trained following the implementation details in GraphCL (You et al., 2020). The paper specifies a train/test split but does not describe a separate validation set or how hyperparameters were validated.
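
For concreteness, the stated 8:2 ratio can be reproduced with a plain random split, sketched below on the AIDS benchmark. Since the paper reports neither a validation subset nor a split seed, the seed here is an arbitrary assumption made only so the sketch itself is deterministic.

```python
import torch
from torch.utils.data import random_split
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root="data/TUDataset", name="AIDS")

# 8:2 train/test split as stated in the paper; no validation subset is
# described, so none is carved out here. The seed is an assumption.
torch.manual_seed(0)
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
print(len(train_set), len(test_set))  # 1600 400 (AIDS has 2000 graphs)
```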
Hardware Specification: No. The paper does not provide any specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies: No. We use a GCN to embed the graphs in the end-to-end classifier and the metric learning-based model. The GCN is implemented in PyTorch (PyT). The optimizer we used is the Adam optimizer (Kingma & Ba, 2015). The contrastive learning-based model is trained following the implementation details in GraphCL (You et al., 2020). While these software components are named, no version numbers are given for PyTorch, GraphCL, or any other library.
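
The named components (a GCN embedder implemented in PyTorch) can be sketched as a small graph classifier; the depth, hidden width, and the use of PyTorch Geometric below are assumptions, since the row above names the pieces but not their configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GCNClassifier(torch.nn.Module):
    """Minimal GCN graph classifier in the spirit of the paper's end-to-end
    model. Depth (2 layers) and width (64) are illustrative assumptions,
    not values reported by the paper."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.lin = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # node embeddings -> one vector per graph
        return self.lin(x)              # logits for cross-entropy loss
```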
Experiment Setup: Yes. The optimizer we used is the Adam optimizer (Kingma & Ba, 2015). Each model is trained for 200 epochs. The learning rate is set to 0.001 and we adopt cross-entropy loss as the loss function. ... We conduct experiments to fine-tune the metric learning-based model and find that N_ps = 200,000 and N_k = 10 give the best performance.
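
Wiring the quoted hyperparameters (Adam, learning rate 0.001, 200 epochs, cross-entropy loss) into a training loop yields the sketch below. It reuses GCNClassifier, dataset, and train_set from the earlier sketches, and the batch size is an assumption, as none is quoted above.

```python
import torch
from torch_geometric.loader import DataLoader

# Reported hyperparameters: Adam, lr = 0.001, 200 epochs, cross-entropy.
# Batch size 32 is an assumption. The dataset's native graph labels stand
# in for the paper's real-vs-generated labels in this sketch.
model = GCNClassifier(in_dim=dataset.num_features)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
loader = DataLoader(train_set, batch_size=32, shuffle=True)

for epoch in range(200):
    for batch in loader:
        optimizer.zero_grad()
        logits = model(batch.x, batch.edge_index, batch.batch)
        loss = criterion(logits, batch.y)
        loss.backward()
        optimizer.step()
```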