Chasing All-Round Graph Representation Robustness: Model, Training, and Optimization

Authors: Chunhui Zhang, Yijun Tian, Mingxuan Ju, Zheyuan Liu, Yanfang Ye, Nitesh Chawla, Chuxu Zhang

ICLR 2023

Reproducibility checklist. Each entry lists the variable, the assessed result, and the LLM response (a supporting excerpt from the paper or an explanation):
Research Type: Experimental
    "In this section, we perform comprehensive experiments on the graph robustness benchmarks to demonstrate the effectiveness of our proposed GAME model against adversarial graphs with complex distributions."
Researcher Affiliation: Academia
    "1 Brandeis University, {chunhuizhang,zheyuanliu,chuxuzhang}@brandeis.edu; 2 University of Notre Dame, {yijun.tian,mju2,yye7,nchawla}@nd.edu"
Pseudocode: No
    The paper describes the method and training procedure in text and mathematical formulas but does not provide structured pseudocode or algorithm blocks.
Open Source Code: Yes
    "The code is provided at this anonymous link: https://tinyurl.com/game23code"
Open Datasets: Yes
    "We utilize the Graph Robust Benchmark (Zheng et al., 2021) dataset to evaluate our model's performance on graphs with varying scales, including grb-cora (small-scale), grb-citeseer (small-scale), grb-flickr (medium-scale), grb-reddit (large-scale), and grb-aminer (large-scale). To utilize grb-cora, grb-citeseer, grb-flickr, grb-reddit, and grb-aminer, we apply the tool provided by the Graph Robustness Benchmark: https://github.com/thudm/grb" (Loading these datasets with the GRB toolkit is sketched below.)
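For reference, loading one of these benchmarks with the GRB toolkit typically follows the pattern below. This is a minimal sketch based on the GRB repository's documented usage; the specific argument values (data_dir, mode, feat_norm) are illustrative assumptions, not settings reported in the paper.

```python
# Minimal sketch: loading a GRB benchmark graph with the grb package
# (https://github.com/thudm/grb). Argument values are assumptions for
# illustration, not the paper's reported configuration.
from grb.dataset import Dataset

dataset = Dataset(
    name="grb-cora",      # also: grb-citeseer, grb-flickr, grb-reddit, grb-aminer
    data_dir="./data",    # assumed download/cache directory
    mode="full",          # GRB also provides difficulty-based splits
    feat_norm="arctan",   # feature normalization used in GRB examples
)

adj = dataset.adj            # sparse adjacency matrix
features = dataset.features  # node feature matrix
labels = dataset.labels      # node labels
```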
Dataset Splits: Yes
    "We run 10 times for mean results/standard deviation and the train:val:test split is 0.6:0.1:0.3." (A split sketch follows below.)
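The reported 0.6:0.1:0.3 node split can be reproduced with a sketch like the one below; the random seeding and use of NumPy are assumptions, since the paper does not state how the splits were drawn.

```python
import numpy as np

def split_nodes(num_nodes: int, seed: int = 0):
    """Randomly split node indices into 60% train, 10% val, 30% test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(0.6 * num_nodes)
    n_val = int(0.1 * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# One split per run; the paper reports mean/std over 10 runs.
# The node count below is a placeholder; use the loaded graph's actual size.
splits = [split_nodes(num_nodes=2680, seed=s) for s in range(10)]
```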
Hardware Specification: No
    The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models or memory specifications.
Software Dependencies: No
    The paper mentions Adam as an optimizer but does not provide version numbers for programming languages, libraries, or frameworks (e.g., Python, PyTorch, CUDA).
Experiment Setup: Yes
    "The hyper-parameters of GAME are shown in Table 3. The hyper-parameters for adversarial training used in DECOG are included in Table 4."
    Table 3 gives the GAME hyper-parameters for the grb-cora, grb-citeseer, grb-flickr, grb-reddit, and grb-aminer datasets: n and k denote the numbers of total and activated experts in each layer, respectively (all experts are activated while generating adversarial samples), and the noisy rate controls the randomness with which the gate module activates a subset of experts when minimizing the loss.
    Table 4 gives the adversarial-training hyper-parameters of DECOG for the same datasets: here the noisy rate controls the randomness when the gate module maximizes the loss and activates a subset of experts, Nodes is the number of injected nodes, and Edges is the number of added edges. (A sketch of this kind of noisy top-k gate follows below.)
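The n/k expert counts and noisy rate describe a noisy top-k expert gate in the mixture-of-experts style. The sketch below shows a generic noisy top-k gate in PyTorch, assuming the standard formulation of Shazeer et al. (2017) rather than the paper's exact gate; in particular, how the noisy rate enters the logits is an assumption, and the class name and arguments are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyTopKGate(nn.Module):
    """Generic noisy top-k gate: routes each input to k of n experts.

    A sketch under the standard noisy top-k gating assumption, not the
    paper's exact gate; `noisy_rate` here simply scales the gating noise.
    """

    def __init__(self, d_in: int, n_experts: int, k: int, noisy_rate: float = 1.0):
        super().__init__()
        self.w_gate = nn.Linear(d_in, n_experts, bias=False)
        self.w_noise = nn.Linear(d_in, n_experts, bias=False)
        self.k = k
        self.noisy_rate = noisy_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.w_gate(x)
        if self.training and self.noisy_rate > 0:
            # Per-example learned noise scale, as in noisy top-k gating.
            noise_std = F.softplus(self.w_noise(x))
            logits = logits + torch.randn_like(logits) * noise_std * self.noisy_rate
        # Keep the top-k logits per example and mask the rest before softmax,
        # so non-selected experts receive exactly zero weight.
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        masked = torch.full_like(logits, float("-inf"))
        masked.scatter_(-1, topk_idx, topk_vals)
        return torch.softmax(masked, dim=-1)  # sparse mixture weights over experts

# Setting k = n_experts would activate all experts, matching the note that all
# experts are active while generating adversarial samples.
gate = NoisyTopKGate(d_in=64, n_experts=8, k=2, noisy_rate=1.0)
weights = gate(torch.randn(32, 64))  # shape (32, 8); 2 nonzero entries per row
```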