DeBayes: a Bayesian Method for Debiasing Network Embeddings

Authors: Maarten Buyl, Tijl De Bie

ICML 2020

Each reproducibility variable is listed below with its result and the supporting LLM response.
Research Type: Experimental. "Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics such as demographic parity and equalized opportunity. ... Section 5 empirically confirms DeBayes' superiority when compared with state-of-the-art baselines. ... We ran our evaluation pipeline for 10 runs, with different random seeds and train/test splits."
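As a reference for the two fairness metrics named in that quote, here is a minimal sketch of how demographic parity and equalized opportunity gaps can be computed for binary link predictions. The function names and the binary group encoding are our assumptions; the paper does not release code.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    return abs(y_pred[groups == 0].mean() - y_pred[groups == 1].mean())

def equalized_opportunity_gap(y_true, y_pred, groups):
    """Absolute difference in true-positive rates between groups 0 and 1."""
    pos = y_true == 1
    tpr0 = y_pred[pos & (groups == 0)].mean()
    tpr1 = y_pred[pos & (groups == 1)].mean()
    return abs(tpr0 - tpr1)

# Toy usage on random data: labels, hard predictions, and a binary
# sensitive attribute per candidate edge.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)
groups = rng.integers(0, 2, size=1000)
print(demographic_parity_gap(y_pred, groups))
print(equalized_opportunity_gap(y_true, y_pred, groups))
```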
Researcher Affiliation: Academia. "Maarten Buyl (1), Tijl De Bie (1). (1) Department of Electronics and Information Systems (ELIS), IDLab, Ghent University, Ghent, Belgium."
Pseudocode: No. The paper does not contain pseudocode or a clearly labeled algorithm block.
Open Source Code: No. The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets: Yes. "Evaluation was done on two datasets, detailed in Section 5.1. ... DBLP: The DBLP co-authorship network (Tang et al., 2008) is constructed from DBLP, a computer science bibliography database. ... Movielens-100k: The Movielens-100k (ML-100k) dataset is a staple in recommender systems research due to its manageable size and rich data." In the paper, the dataset names link to the URLs where they can be downloaded.
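As background on the second dataset: ML-100k's main interaction file, u.data, is tab-separated with columns user id, item id, rating, and timestamp. Below is a minimal loading sketch, assuming the archive has been downloaded from GroupLens into a local ml-100k/ directory (the path is an assumption).

```python
import pandas as pd

# u.data: tab-separated user_id, item_id, rating, timestamp.
ratings = pd.read_csv(
    "ml-100k/u.data",
    sep="\t",
    names=["user_id", "item_id", "rating", "timestamp"],
)
# Treat each (user, item) interaction as an edge in a bipartite graph,
# as is common when casting recommendation as link prediction.
edges = list(zip(ratings["user_id"], ratings["item_id"]))
print(len(edges))  # 100,000 interactions
```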
Dataset Splits: Yes. "We ran our evaluation pipeline for 10 runs, with different random seeds and train/test splits. The training set always contained approximately 80% of the edges, with the test set containing the remaining 20% and an equal amount of non-edges. ... For that optimization, 20% of the training set was used as a validation set."
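A minimal sketch of that protocol, assuming an edge list and a pre-sampled pool of non-edges; the helper name and the non-edge sampling strategy are our assumptions, since the paper releases no code.

```python
import numpy as np

def split_edges(edges, non_edges, seed):
    """~80% of edges for training, 20% for testing together with an equal
    number of non-edges; 20% of the training edges held out for validation."""
    rng = np.random.default_rng(seed)
    edges = rng.permutation(np.asarray(edges))
    cut = int(0.8 * len(edges))
    train_all, test_pos = edges[:cut], edges[cut:]
    # Complete the test set with an equal amount of non-edges.
    neg_idx = rng.choice(len(non_edges), size=len(test_pos), replace=False)
    test_neg = np.asarray(non_edges)[neg_idx]
    # Reserve 20% of the training edges for hyperparameter validation.
    val_cut = int(0.8 * len(train_all))
    return train_all[:val_cut], train_all[val_cut:], test_pos, test_neg

# Toy data; the paper runs this 10 times with different seeds and splits.
edges = [(i, i + 1) for i in range(100)]
non_edges = [(i, i + 2) for i in range(100)]
train, val, test_pos, test_neg = split_edges(edges, non_edges, seed=0)
print(len(train), len(val), len(test_pos), len(test_neg))
```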
Hardware Specification: No. The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies: Yes. "In our experiments, we used the roc_auc_score implementation from the scikit-learn 0.22.1 library."
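A minimal usage sketch of that scikit-learn function in a link-prediction setting, with illustrative labels (1 for held-out edges, 0 for non-edges) and scores:

```python
from sklearn.metrics import roc_auc_score

# AUC over held-out edges (label 1) and sampled non-edges (label 0),
# scored with predicted link probabilities; the numbers are illustrative.
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.3, 0.2, 0.7]
print(roc_auc_score(y_true, y_score))  # 8 of 9 pos/neg pairs ranked correctly: ~0.889
```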
Experiment Setup: Yes. "For dimensionality, d = 8 was chosen out of d ∈ {8, 16}. The parameter σ2 was kept constant, and σ1 = 0.7 was chosen from [0.4, 0.9]. ... The strength of the adversarial term in the loss function is specified by the parameter λ. We trained models with λ ∈ {0, 5, 25, 100}. ... The learning rate parameter was evaluated over the range [0.0001, 0.01]. For DBLP, a learning rate of 0.01 was used; for ML-100k, it was left at its default value of 0.001."
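For reference, the quoted search space written out as a plain configuration sketch; the dictionary layout is our assumption, as the paper publishes no configuration file.

```python
# Search space as quoted above. Continuous entries are intervals that
# were searched over; discrete entries are the candidate values.
search_space = {
    "d": [8, 16],                     # embedding dimensionality; 8 selected
    "sigma1": (0.4, 0.9),             # interval; 0.7 selected (sigma2 held constant)
    "lambda": [0, 5, 25, 100],        # strength of the adversarial loss term
    "learning_rate": (0.0001, 0.01),  # interval; 0.01 for DBLP, default 0.001 for ML-100k
}
for name, values in search_space.items():
    print(f"{name}: {values}")
```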