Characterizing the Influence of Graph Elements
Authors: Zizhang Chen, Peizhao Li, Hongfu Liu, Pengyu Hong
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted three major experiments: (1) Validate the estimation accuracy of our influence functions on graph in Section 5.2; (2) Utilize the estimated edge influence to carry out adversarial attacks and graph rectification for increasing model performance in Section 5.3; and (3) Utilize the estimated node influence to carry out adversarial attacks on GCN (Kipf & Welling, 2017) in Section 5.4. |
| Researcher Affiliation | Academia | Zizhang Chen, Peizhao Li, Hongfu Liu, Pengyu Hong Brandeis University {zizhang2,peizhaoli,hongfuliu,hongpeng}@brandeis.edu |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is publicly available at https://github.com/Cyrus9721/Characterizing_graph_influence. |
| Open Datasets | Yes | We choose six real-world graph datasets: Cora, PubMed, CiteSeer (Sen et al., 2008), WikiCS (Mernyei & Cangea, 2020), Amazon Computers, and Amazon Photos (Shchur et al., 2018) in our experiments. |
| Dataset Splits | Yes | For the Cora, PubMed, and CiteSeer datasets, we used their public train/val/test splits. For the WikiCS dataset, we took a random single train/val/test split provided by Mernyei & Cangea (2020). For the Amazon datasets, we randomly selected 20 nodes from each class for training, 30 nodes from each class for validation, and used the remaining nodes as the test set. |
| Hardware Specification | No | Table 2: Grey-box attacks to GCN via edge removals. A lower performance indicates a more successful attack. The best attacks are in bold font. The number following the dataset name is the pre-attack performance. denotes an out-of-memory issue encountered on GPU with 24GB VRAM. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries, frameworks, or programming languages. |
| Experiment Setup | No | The paper mentions fine-tuning the model ("fine-tune it on the public split validation set") and describes general experimental procedures for attacks and rectification. However, it does not explicitly state specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed system-level training configurations in the main text. |
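The experiments in the Research Type row hinge on influence estimates: how much the test loss would change if a training element were removed. The paper's estimator is graph-specific, but the underlying influence-function idea can be illustrated with a generic sketch for L2-regularized logistic regression; the function name, the numerical setup, and the sign/scale convention here are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def influence_of_point(X, y, X_test, y_test, w, reg=1e-2):
    """Influence-function sketch (classic Koh & Liang style, NOT the
    paper's graph estimator): score of each training point's effect
    on the test loss, up to sign/scale convention."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    p = sigmoid(X @ w)
    # Hessian of the total training loss: X^T diag(p(1-p)) X + reg*I
    H = (X * (p * (1 - p))[:, None]).T @ X + reg * np.eye(X.shape[1])
    # Gradient of the test loss w.r.t. the parameters
    p_test = sigmoid(X_test @ w)
    g_test = X_test.T @ (p_test - y_test)
    # Per-training-point loss gradients, shape (n, d)
    g_train = X * (p - y)[:, None]
    # Influence score for point i: g_i^T H^{-1} g_test
    return g_train @ np.linalg.solve(H, g_test)
```

Ranking training elements (here points; in the paper, edges or nodes) by such scores is what drives both the adversarial-attack and graph-rectification experiments: remove the most harmful elements to attack, or the most loss-increasing ones to rectify.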