Value at Adversarial Risk: A Graph Defense Strategy against Cost-Aware Attacks
Authors: Junlong Liao, Wenda Fu, Cong Wang, Zhongyu Wei, Jiarong Xu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on four real-world datasets, we demonstrate that our method achieves superior performance surpassing state-of-the-art methods. |
| Researcher Affiliation | Academia | Junlong Liao¹*, Wenda Fu¹*, Cong Wang², Zhongyu Wei¹, Jiarong Xu¹ (¹Fudan University, ²Peking University) |
| Pseudocode | Yes | Algorithm 1: Robust training for learning costs |
| Open Source Code | Yes | Our codes are available at: https://github.com/songwdfu/RisKeeper. |
| Open Datasets | Yes | We use four commonly-used datasets to conduct our experiments, i.e., cora (Mccallum et al. 2000), citeseer (Sen et al. 2008), amazon computers and amazon photo (Shchur et al. 2018). |
| Dataset Splits | Yes | For cora and citeseer, we divide the training set, validation set, and test set according to the default setting (Sen et al. 2008). For amazon computers and amazon photo, the datasets are randomly split into training set (10%), validation set (10%), test set (80%). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. It only implies that models were trained. |
| Software Dependencies | No | The paper mentions using 'Adam algorithm (Kingma and Ba 2014)' but does not specify version numbers for any software dependencies, such as Python, PyTorch, or other libraries. |
| Experiment Setup | Yes | The number of hidden units is set to 32 for all hidden layers. A 1-layer MLP is attached to the end of the cost model. We employ the Adam algorithm (Kingma and Ba 2014) with an initial learning rate of 0.01 to optimize models. For Cost-Aware PGD, the dropout rate is set to 0.5. Cross-entropy loss is used for L. To balance the differences between L and the cost loss caused by varying numbers of attacked edges in different datasets, λ is set to 0.001\|E\|. Without loss of generality, the single-node cost budget is set to 1, and the total node cost budget is set to 0.995\|V\|. |
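The setup row ties two hyperparameters to the graph size: λ = 0.001·|E| and a total node cost budget of 0.995·|V|. A minimal sketch of deriving these quantities (plain Python; the helper name and the example Cora statistics are illustrative assumptions, not values quoted from the paper):

```python
def setup_hyperparams(num_nodes, num_edges):
    """Collect the reported hyperparameters, deriving the
    size-dependent ones from |V| and |E| as described."""
    return {
        "hidden_units": 32,            # all hidden layers
        "lr": 0.01,                    # Adam initial learning rate
        "dropout": 0.5,                # used for Cost-Aware PGD
        "lambda": 0.001 * num_edges,   # balances task loss L and cost loss
        "single_node_budget": 1,       # per-node cost budget
        "total_node_budget": 0.995 * num_nodes,
    }

# Example call with commonly reported Cora sizes (assumed, for illustration)
params = setup_hyperparams(num_nodes=2708, num_edges=5429)
```

Scaling λ with |E| keeps the cost-loss term comparable across datasets whose attack budgets (numbers of perturbable edges) differ by an order of magnitude.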