DAG-Aware Variational Autoencoder for Social Propagation Graph Generation

Authors: Dongpeng Hou, Chao Gao, Xuelong Li, Zhen Wang

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We provide a comprehensive evaluation of DAVA, with one focus being the effectiveness of the generated data in improving downstream-task performance. During the generation process, by modifying the generation rules we discover the Credibility Erosion Effect, revealing a social phenomenon in social network propagation. |
| Researcher Affiliation | Academia | (1) School of Mechanical Engineering, Northwestern Polytechnical University; (2) School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University; (3) School of Cybersecurity, Northwestern Polytechnical University |
| Pseudocode | Yes | Algorithm 1: BFS Permutation with Node Importance (a hedged sketch of this ordering appears after the table). |
| Open Source Code | Yes | The code is available at https://github.com/cgaocomp/DAVA. |
| Open Datasets | Yes | Three datasets collected from two real-world social media platforms, Weibo and Twitter, are used for graph generation: Weibo (Ma, Gao, and Wong 2017), Twitter15, and Twitter16 (Liu et al. 2015; Ma et al. 2016). |
| Dataset Splits | No | The paper does not provide explicit training/validation/test splits (percentages or sample counts) for the main model training; it only mentions training on 9/10 of the Twitter propagation data and testing on the remaining 1/10 for a specific downstream-task evaluation (sketched after the table). |
| Hardware Specification | No | The paper mentions "hardware limitations" in the introduction but does not specify the hardware used for its experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions models and techniques such as GRU, VAE, GCN, GAT, and Set2Set but does not give version numbers for any software dependencies (e.g., Python, TensorFlow, or PyTorch). |
| Experiment Setup | No | The paper describes its model architecture and loss function but does not provide training details such as learning rates, batch sizes, number of epochs, or optimizer configuration. |
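The Pseudocode row above names Algorithm 1, "BFS Permutation with Node Importance", but does not reproduce its body. The following is a minimal sketch of one plausible reading, assuming the algorithm performs a breadth-first traversal of the propagation DAG and visits each node's successors in descending order of an importance score (e.g., degree or follower count). The function name `bfs_permutation`, the `adj`/`importance` inputs, and the descending ordering rule are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def bfs_permutation(adj, importance, root=0):
    """Sketch of an importance-aware BFS node ordering (assumed
    reading of DAVA's Algorithm 1). `adj` maps node -> successor
    list; `importance` maps node -> score (e.g., degree)."""
    order, visited = [], {root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        order.append(u)
        # Enqueue more important successors first (assumption).
        for v in sorted(adj.get(u, []), key=lambda n: -importance[n]):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

# Example: a small propagation cascade rooted at node 0.
adj = {0: [1, 2], 1: [3], 2: [4, 5]}
importance = {0: 5.0, 1: 1.0, 2: 3.0, 3: 0.5, 4: 2.0, 5: 4.0}
print(bfs_permutation(adj, importance))  # -> [0, 2, 1, 5, 4, 3]
```

A fixed, importance-aware ordering of this kind gives each graph a canonical node sequence, which is the usual motivation for BFS permutations in sequential graph generators such as VAE decoders.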
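The only split the paper reports is 9/10 of the Twitter propagation data for training and 1/10 for testing in a downstream task. The sketch below reproduces that ratio under the assumption of a simple seeded random shuffle; the paper does not describe how the split was actually drawn, so the shuffle, seed, and helper name `split_cascades` are all hypothetical.

```python
import random

def split_cascades(cascades, train_frac=0.9, seed=0):
    """Sketch of the reported 9/10 train, 1/10 test split.
    The shuffle and seed are assumptions; the paper does not
    specify the splitting procedure."""
    rng = random.Random(seed)
    idx = list(range(len(cascades)))
    rng.shuffle(idx)
    cut = int(train_frac * len(idx))
    return ([cascades[i] for i in idx[:cut]],   # train: 9/10
            [cascades[i] for i in idx[cut:]])   # test: 1/10
```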