Improving the Robustness of Wasserstein Embedding by Adversarial PAC-Bayesian Learning

Authors: Daizong Ding, Mi Zhang, Xudong Pan, Min Yang, Xiangnan He

AAAI 2020, pp. 3791-3800 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For evaluations, we conduct extensive experiments to demonstrate the effectiveness and robustness of our proposed embedding model compared with the state-of-the-art methods. ... We conduct extensive experiments to validate the effectiveness and robustness of RAWEN, compared with the state-of-the-art node embedding methods. ... In this section, we compare our proposed RAWEN with the state-of-the-art node embedding methods in terms of effectiveness and robustness. In particular, our main research questions are: RQ1. Is RAWEN an effective node embedding method? RQ2. Is RAWEN a robust node embedding method? RQ3. What is the influence of different Q? Experiment Settings: We validate the effectiveness of our embedding on two benchmark tasks, i.e., multi-label node classification and link prediction. For each run of the experiment, we conduct a 10-fold cross-validation on each dataset and report the average results. (A link-prediction scoring sketch appears after this table.)
Researcher Affiliation | Academia | Daizong Ding (1), Mi Zhang (1), Xudong Pan (1), Min Yang (1), Xiangnan He (2); (1) School of Computer Science, Fudan University; (2) School of Information Science and Technology, University of Science and Technology of China; {17110240010, mi_zhang, 18110240010, m_yang}@fudan.edu.cn, xiangnanhe@gmail.com
Pseudocode | Yes | Algorithm 1 Adversarial Training Strategy
Require: input Ω, regularizing coefficients λ1, λ2, and learning rate
repeat
    Randomly choose y_ij
    Output the parameters of q(z|v_i), q(z|v_j) by Q_θ
    Sample z^(t) from the prior p(z)
    Sample z_i^(t), z_j^(t) from q(z|v_i), q(z|v_j) by SGVB
    Update φ, γ by maximizing λ2 L2(φ) + λ1 L1(θ, γ) with the Adam optimizer (Kingma and Ba 2015)
    Update θ by minimizing ℓ̂(θ) + λ1 L1(θ, γ) with the Adam optimizer
until convergence
(A code sketch of this training loop appears after this table.)
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | We validate the expressiveness of our node embedding framework on the following public graph datasets of various scales. For link prediction, we use the Wiki-Vote, Epinions, and Google datasets, which respectively contain 2846, 5488, and 44000 nodes and 184376, 279480, and 445618 edges. For node classification we use the Email and Wiki datasets, which respectively contain 1005 and 19933 nodes and 25571 and 1003686 edges.
Dataset Splits | Yes | For each run of the experiment, we conduct a 10-fold cross-validation on each dataset and report the average results. (A cross-validation sketch appears after this table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions optimizers and estimation techniques such as the "Adam optimizer (Kingma and Ba 2015)" and the "SGVB estimator with the reparameterization trick (Kipf and Welling 2016b)", but does not specify software or library versions (e.g., Python, TensorFlow, or PyTorch versions).
Experiment Setup | Yes | For each method, we set the embedding size as 20, 30, and 50 for Wiki-vote, Epinions, and Google, respectively, and set the batch size as 200 in each case. For our model, the learning rate is set to 0.002, the sample size T to 10, and the regularization coefficient to 0.1. (These values are collected into a config sketch after this table.)
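
The paper evaluates link prediction on held-out edges, but the excerpts above do not state the scoring function. Below is a minimal Python sketch that assumes the common convention of scoring a candidate edge by the inner product of its endpoint embeddings and reporting ROC-AUC; both choices are assumptions, not confirmed by the paper.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def link_prediction_auc(emb, pos_edges, neg_edges):
        # emb: (num_nodes, d) embedding matrix; *_edges: (m, 2) node-index arrays.
        # Score each pair by the inner product of its endpoint embeddings
        # (an assumed convention; the paper does not specify its scorer).
        score = lambda e: np.sum(emb[e[:, 0]] * emb[e[:, 1]], axis=1)
        y_true = np.concatenate([np.ones(len(pos_edges)), np.zeros(len(neg_edges))])
        y_score = np.concatenate([score(pos_edges), score(neg_edges)])
        return roc_auc_score(y_true, y_score)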
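As a concrete reading of Algorithm 1, the following PyTorch-style sketch alternates the two Adam updates. The interfaces are hypothetical stand-ins, not the authors' code: encoder plays Q_θ and returns the mean and log-variance of q(z|v); critic holds φ, γ and exposes the paper's L1 and L2 terms as critic.l1 / critic.l2; recon stands in for the embedding loss ℓ̂(θ).

    import torch

    def reparameterize(mu, logvar, T):
        # SGVB reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I);
        # draws T samples per distribution by broadcasting over the first axis.
        eps = torch.randn(T, mu.shape[-1])
        return mu + torch.exp(0.5 * logvar) * eps

    def train(encoder, critic, recon, batches, lam1=0.1, lam2=0.1, lr=0.002, T=10):
        # encoder: hypothetical Q_theta, returns (mu, logvar) of q(z|v)
        # critic:  hypothetical module with parameters phi, gamma and losses l1, l2
        # recon:   hypothetical reconstruction loss, standing in for l_hat(theta)
        opt_theta = torch.optim.Adam(encoder.parameters(), lr=lr)
        opt_adv = torch.optim.Adam(critic.parameters(), lr=lr)
        for vi, vj, yij in batches:  # "repeat ... until convergence" in the paper
            mu_i, logvar_i = encoder(vi)
            mu_j, logvar_j = encoder(vj)
            z_prior = torch.randn(T, mu_i.shape[-1])   # z^(t) ~ p(z)
            z_i = reparameterize(mu_i, logvar_i, T)    # z_i^(t) ~ q(z|v_i)
            z_j = reparameterize(mu_j, logvar_j, T)    # z_j^(t) ~ q(z|v_j)

            # Update phi, gamma by maximizing lam2*L2(phi) + lam1*L1(theta, gamma);
            # detach the samples so this step does not backprop into the encoder.
            adv = lam2 * critic.l2(z_prior, z_i.detach(), z_j.detach()) \
                + lam1 * critic.l1(z_i.detach(), z_j.detach(), yij)
            opt_adv.zero_grad()
            (-adv).backward()        # gradient ascent via the negated loss
            opt_adv.step()

            # Update theta by minimizing l_hat(theta) + lam1*L1(theta, gamma);
            # draw fresh SGVB samples for the embedding update.
            mu_i, logvar_i = encoder(vi)
            mu_j, logvar_j = encoder(vj)
            z_i = reparameterize(mu_i, logvar_i, T)
            z_j = reparameterize(mu_j, logvar_j, T)
            loss = recon(z_i, z_j, yij) + lam1 * critic.l1(z_i, z_j, yij)
            opt_theta.zero_grad()
            loss.backward()
            opt_theta.step()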
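The split protocol is described only at the level quoted in the Dataset Splits row. A minimal sketch of a 10-fold evaluation over precomputed embeddings follows, assuming a one-vs-rest logistic-regression downstream classifier and micro-F1 scoring; both are assumptions commonly used in node-embedding evaluations, not choices stated by the paper.

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    def ten_fold_micro_f1(X, y, seed=0):
        # X: (n, d) node embeddings; y: (n, k) binary label-indicator matrix
        # (one-vs-rest handles the paper's multi-label setting).
        scores = []
        kf = KFold(n_splits=10, shuffle=True, random_state=seed)
        for train_idx, test_idx in kf.split(X):
            clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
            clf.fit(X[train_idx], y[train_idx])
            scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]),
                                   average="micro"))
        return float(np.mean(scores))   # average over folds, as the paper reports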
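For reference, the hyperparameters reported in the Experiment Setup row collected in one place; the dictionary layout and key names below are illustrative, not the paper's.

    # Values as reported in the paper; structure and names are ours.
    CONFIG = {
        "embedding_size": {"Wiki-vote": 20, "Epinions": 30, "Google": 50},
        "batch_size": 200,
        "learning_rate": 0.002,
        "sample_size_T": 10,              # number of SGVB samples
        "regularization_coefficient": 0.1,
    }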