Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Social Recommendation via Graph-Level Counterfactual Augmentation

Authors: Yinxuan Huang, Ke Liang, Yanyi Huang, Xiang Zeng, Kai Chen, Bin Zhou

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate the promising capacity of our model from five aspects: superiority, effectiveness, transferability, complexity, and robustness. We conducted experiments using two widely-used real-world datasets: Ciao and Epinions. For evaluation, we employ Hit Ratio (HR@N) and Normalized Discounted Cumulative Gain (NDCG@N) as metrics.
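The two reported metrics can be sketched as follows for a single user's ranked recommendation list with one held-out relevant item; this is a minimal illustrative implementation, not the authors' evaluation code.

```python
import math

def hit_ratio_at_n(ranked_items, relevant_item, n):
    """HR@N: 1 if the relevant item appears in the top-N ranked list, else 0."""
    return 1.0 if relevant_item in ranked_items[:n] else 0.0

def ndcg_at_n(ranked_items, relevant_item, n):
    """NDCG@N with a single relevant item: the DCG contribution is
    1/log2(rank+1) at the hit position, and the ideal DCG is
    1/log2(2) = 1, so no separate normalization term is needed."""
    for rank, item in enumerate(ranked_items[:n], start=1):
        if item == relevant_item:
            return 1.0 / math.log2(rank + 1)
    return 0.0
```

Averaging these per-user values over all test users yields the dataset-level HR@N and NDCG@N reported in the paper.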
Researcher Affiliation Academia College of Computer Science and Technology, National University of Defense Technology, Changsha, Hunan, China. EMAIL, EMAIL
Pseudocode Yes Algorithm 1: GCA Algorithm
Open Source Code No The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository. It mentions an implementation in PyTorch but does not state code availability.
Open Datasets Yes We conducted experiments using two widely-used real-world datasets: Ciao and Epinions. Detailed statistics for these datasets are presented in Tab. 2.
Dataset Splits No The paper mentions dividing users into groups based on interaction levels for robustness evaluation, but it does not specify train/test/validation dataset splits (e.g., percentages, counts, or explicit methodology for partitioning the primary datasets for model training and evaluation).
Hardware Specification No The paper states the model was implemented in PyTorch and optimized with Adam, but it does not specify any hardware details such as GPU models, CPU types, or memory used for experiments.
Software Dependencies No The paper mentions that the model was implemented in PyTorch, but it does not provide a specific version number for PyTorch or any other software dependencies.
Experiment Setup Yes The learning rate was tuned within [5e-4, 1e-3, 5e-3] with a 0.96 decay factor per epoch. Batch sizes were selected from [1024, 2048, 4096, 8192], and hidden dimensions from [64, 128, 256, 512]. The Graph-Level Counterfactual Augmentation module employed a GNN encoder and a 3-layer MLP decoder with a 64-dimensional hidden layer and ELU activation. The parameter γ was set according to the γpct-percentile of node embedding distances for each dataset. The optimal number of GNN layers was chosen from [1, 2, 3, 4]. The regularization weight λ was selected from [1e-3, 1e-2, 1e-1, 1e0, 1e1].
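The reported search ranges can be organized as a grid; the sketch below enumerates candidate configurations under the assumption of an exhaustive grid search (the paper does not specify the search strategy, and the parameter names are illustrative, not the authors').

```python
from itertools import product

# Search space as reported in the paper; key names are illustrative.
grid = {
    "lr": [5e-4, 1e-3, 5e-3],            # decayed by 0.96 per epoch
    "batch_size": [1024, 2048, 4096, 8192],
    "hidden_dim": [64, 128, 256, 512],
    "gnn_layers": [1, 2, 3, 4],
    "lambda_reg": [1e-3, 1e-2, 1e-1, 1e0, 1e1],
}

def grid_configs(grid):
    """Yield every hyperparameter combination as a dict."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))
```

Enumerated exhaustively, this space contains 3 × 4 × 4 × 4 × 5 = 960 candidate configurations.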