Adversarial Attacks on Node Embeddings via Graph Poisoning
Authors: Aleksandar Bojchevski, Stephan Günnemann
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4. Experimental Evaluation |
| Researcher Affiliation | Academia | Technical University of Munich, Germany. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and data available at https://www.kdd.in.tum.de/node_embedding_attack. |
| Open Datasets | Yes | We analyze three datasets: Cora (N = 2810, |E| = 15962, McCallum et al. (2000); Bojchevski & Günnemann (2018)) and Citeseer (N = 2110, |E| = 7336, Giles et al. (1998)) are citation networks commonly used to benchmark embedding approaches, and PolBlogs (N = 1222, |E| = 33428, Adamic & Glance (2005)) is a graph of political blogs. |
| Dataset Splits | No | The paper does not explicitly provide specific train/validation/test dataset splits or mention a validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | Yes | We set DeepWalk's hyperparameters to: T = 5, b = 5, K = 64 and use logistic regression for classification. (See the sketch below the table.) |
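
Below is a minimal sketch of the setup described in the Experiment Setup row, not the authors' code. It assumes DeepWalk can be approximated with gensim's skip-gram Word2Vec (gensim 4.x API), mapping T = 5 to the co-occurrence window, b = 5 to the number of negative samples, and K = 64 to the embedding dimension; walk length and walks per node are illustrative defaults, and Zachary's karate club graph stands in for Cora / Citeseer / PolBlogs.

```python
# Hedged reproduction sketch: DeepWalk-style embeddings + logistic regression.
# Assumptions: gensim >= 4.x, networkx, scikit-learn; toy graph instead of the
# paper's datasets; walk settings not taken from the paper.
import random

import networkx as nx
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def random_walks(graph, walks_per_node=10, walk_length=80, seed=0):
    """Uniform random walks over the graph, as lists of string node ids."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(graph.nodes())
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append([str(v) for v in walk])
    return walks


graph = nx.karate_club_graph()  # toy stand-in for Cora / Citeseer / PolBlogs
labels = [graph.nodes[v]["club"] for v in graph.nodes()]

walks = random_walks(graph)
model = Word2Vec(
    sentences=walks,
    vector_size=64,  # K = 64 embedding dimension
    window=5,        # T = 5 window size
    negative=5,      # b = 5 negative samples
    sg=1, hs=0,      # skip-gram with negative sampling, DeepWalk-style
    min_count=0,
    seed=0,
    workers=1,
)
embeddings = [model.wv[str(v)] for v in graph.nodes()]

# Node classification with logistic regression on top of the embeddings.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.5, random_state=0, stratify=labels
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The split above is arbitrary, which matches the Dataset Splits row: the paper does not specify train/validation/test splits, so any reproduction has to pick its own.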