On the Relationship Between Relevance and Conflict in Online Social Link Recommendations

Authors: Yanbang Wang, Jon Kleinberg

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The gap is measured on real-world data, based on instantiations of relevance defined by 13 link recommendation algorithms. We find that some, but not all, of the more accurate algorithms actually lead to better reduction of conflict. This section discusses how we can measure the two features' degree of alignment on real-world data.
Researcher Affiliation | Academia | Yanbang Wang, Department of Computer Science, Cornell University (ywangdr@cs.cornell.edu); Jon Kleinberg, Department of Computer Science, Cornell University (kleinberg@cornell.edu)
Pseudocode | No | Baselines. We test 13 different link recommendation methods, organized into three groups.
Open Source Code | Yes | Our code and data can be downloaded from https://github.com/Abel0828/NeurIPS23-Conflict-Relevance-in-FJ.
Open Datasets | Yes | We use two real-world datasets, Reddit and Twitter, collected by [39], one of the pioneering works that conduct empirical studies on the FJ model.
Dataset Splits | No | For each dataset, we randomly sample β = 100 positive links from the edge set, and β·η negative links from all disconnected node pairs; the negative sampling rate η ∈ [1, 10] is a hyperparameter. The positive links are then removed from the network and reserved for testing, together with the negative links. (A minimal sketch of this sampling procedure appears after the table.)
Hardware Specification | Yes | We run all experiments on an Intel Xeon Gold 6254 CPU @ 3.15 GHz with 1.6 TB of memory.
Software Dependencies | No | For node2vec, we use 64 as the node embedding dimension, context window = 10, p = 2, q = 0.5; for GCN and R-GCN, we use two layers with 32 hidden dimensions, followed by a 2-layer MLP with 16 hidden units; for SuperGAT, we use two layers with 64 hidden dimensions; for Graph Transformer, we use two layers with 32 hidden dimensions and 4 attention heads. (These values are collected into a configuration sketch after the table.)
Experiment Setup | Yes | [Same architecture hyperparameters as quoted in the Software Dependencies row above.] We use alpha = 0.5 for Katz; alpha = 0.85 for Personalized PageRank. (See the Katz / Personalized PageRank sketch after the table.)
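
As a concrete illustration of the split protocol quoted in the Dataset Splits row, the following minimal Python sketch reproduces the sampling logic. The helper name sample_links, the use of NetworkX, and the default value of eta are our assumptions, not the authors' published code; their repository linked above is authoritative.

import random
import networkx as nx

def sample_links(G: nx.Graph, beta: int = 100, eta: int = 5, seed: int = 0):
    """Hypothetical sketch: draw beta positive links and beta * eta negative
    links, then remove the positives from the network for testing."""
    rng = random.Random(seed)
    # Positive links: beta edges drawn uniformly at random from the edge set.
    positives = rng.sample(list(G.edges()), beta)
    # Negative links: beta * eta disconnected node pairs (non-edges).
    negatives = rng.sample(list(nx.non_edges(G)), beta * eta)
    # Remove the positives from the network; both sets are held out for testing.
    G.remove_edges_from(positives)
    return positives, negatives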
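
The architecture hyperparameters quoted under Software Dependencies and Experiment Setup can be collected in one place. The dictionary layout and key names below are ours; every value is taken verbatim from the quotes above.

# Hyperparameters as quoted in the report; the dictionary layout is ours.
HPARAMS = {
    "node2vec":              {"embedding_dim": 64, "context_window": 10, "p": 2, "q": 0.5},
    "gcn":                   {"num_layers": 2, "hidden_dim": 32, "mlp_layers": 2, "mlp_hidden": 16},
    "rgcn":                  {"num_layers": 2, "hidden_dim": 32, "mlp_layers": 2, "mlp_hidden": 16},
    "supergat":              {"num_layers": 2, "hidden_dim": 64},
    "graph_transformer":     {"num_layers": 2, "hidden_dim": 32, "attention_heads": 4},
    "katz":                  {"alpha": 0.5},
    "personalized_pagerank": {"alpha": 0.85},
}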
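
For the two classical link scorers, the quoted alphas map onto standard formulas. The NetworkX/NumPy sketch below shows one plausible way to compute them; it is our assumption, not the authors' implementation. Note that the Katz series sum_{k>=1} alpha^k A^k converges only when alpha < 1/lambda_max(A), so with the quoted alpha = 0.5 a truncated or rescaled variant may be intended.

import numpy as np
import networkx as nx

def katz_index(G: nx.Graph, alpha: float = 0.5) -> np.ndarray:
    # Katz similarity matrix S = sum_{k>=1} alpha^k A^k = (I - alpha*A)^{-1} - I.
    # Only well-defined when alpha < 1 / lambda_max(A); see the caveat above.
    A = nx.to_numpy_array(G)
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - alpha * A) - np.eye(n)

def personalized_pagerank(G: nx.Graph, source) -> dict:
    # Personalized PageRank restarted at `source`, with damping alpha = 0.85.
    return nx.pagerank(G, alpha=0.85, personalization={source: 1.0})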