FairWire: Fair Graph Generation

Authors: O. Deniz Kose, Yanning Shen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world networks validate that the proposed tools herein deliver effective structural bias mitigation for both real and synthetic graphs.
Researcher Affiliation | Academia | O. Deniz Kose, Department of Electrical Engineering and Computer Science, University of California Irvine, Irvine, CA, USA, okose@uci.edu; Yanning Shen, Department of Electrical Engineering and Computer Science, University of California Irvine, Irvine, CA, USA, yannings@uci.edu
Pseudocode | No | The paper describes algorithms and processes verbally and with mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Codes to reproduce all the results in Section 6 are provided in the supplementary material to this submission.
Open Datasets | Yes | In the experiments, four attributed networks are employed, namely Cora, Citeseer, Amazon Photo, and Amazon Computer for link prediction. Cora and Citeseer are widely utilized citation networks, where the articles are nodes and the network topology depicts the citation relationships between these articles (54). Amazon Photo and Amazon Computer are product co-purchase networks, where the nodes are the products and the links are created if two products are often bought together (55). In addition to link prediction, we also evaluate the synthetic graphs on node classification, where the German credit (56) and Pokec-n (9) graphs are employed. (A loading sketch is given below the table.)
Dataset Splits | Yes | For training, 80% of the edges are used, where the remaining edges are split equally into two for the validation and test sets. (A splitting sketch is given below the table.)
Hardware Specification | Yes | Experiments are carried out on 4 NVIDIA RTX A4000 GPUs.
Software Dependencies | No | The paper mentions using the "Adam optimizer (65)" and "Glorot initialization (64)", but it does not specify software dependencies like programming languages, libraries, or frameworks with their version numbers. (An illustrative rendering of these two components appears below the table.)
Experiment Setup | Yes | The learning rate, the dimension of hidden representations, and the dropout rate are selected via grid search for the proposed scheme and all baselines, where the value leading to the best validation set performance is selected. For the learning rate, the dimension of hidden representations, and the dropout rate, the corresponding hyperparameter spaces are {1e-1, 1e-2, 3e-3, 1e-3}, {32, 128, 512}, and {0.0, 0.1, 0.2}, respectively. (A grid-search sketch is given below the table.)
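
Loading sketch for the open datasets. The paper does not state which library was used to load the graphs; the snippet below is a minimal sketch assuming PyTorch Geometric, whose built-in Planetoid and Amazon loaders cover the four link-prediction networks. The German credit (56) and Pokec-n (9) graphs are not bundled with PyTorch Geometric and would need to be obtained from the releases accompanying those references.

    # Minimal sketch: loading the four link-prediction datasets with
    # PyTorch Geometric (an assumption; the paper does not name a library).
    from torch_geometric.datasets import Planetoid, Amazon

    cora = Planetoid(root="data/Planetoid", name="Cora")[0]
    citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")[0]
    photo = Amazon(root="data/Amazon", name="Photo")[0]
    computers = Amazon(root="data/Amazon", name="Computers")[0]

    print(cora)  # e.g. Data(x=[2708, 1433], edge_index=[2, 10556], y=[2708])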
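
Splitting sketch. The 80% train / 10% validation / 10% test edge split described in the Dataset Splits row can be reproduced with PyTorch Geometric's RandomLinkSplit transform; the authors' actual splitting code is in their supplementary material and may differ in details such as negative sampling.

    # Minimal sketch of the 80/10/10 edge split, assuming PyTorch
    # Geometric's RandomLinkSplit (not confirmed by the paper).
    import torch_geometric.transforms as T
    from torch_geometric.datasets import Planetoid

    data = Planetoid(root="data/Planetoid", name="Cora")[0]
    transform = T.RandomLinkSplit(
        num_val=0.1,        # 10% of edges held out for validation
        num_test=0.1,       # 10% for testing; the remaining 80% train
        is_undirected=True,
    )
    train_data, val_data, test_data = transform(data)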
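
Optimizer and initialization sketch. The paper names only the Adam optimizer (65) and Glorot initialization (64); the PyTorch rendering below uses a placeholder linear layer and learning rate rather than the authors' architecture or settings.

    # Illustrative Glorot (Xavier) initialization and Adam optimizer in
    # PyTorch. The layer sizes and learning rate are placeholders.
    import torch
    import torch.nn as nn

    layer = nn.Linear(1433, 128)           # hypothetical layer sizes
    nn.init.xavier_uniform_(layer.weight)  # Glorot initialization
    nn.init.zeros_(layer.bias)

    optimizer = torch.optim.Adam(layer.parameters(), lr=1e-3)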
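
Grid-search sketch. The hyperparameter selection described in the Experiment Setup row amounts to an exhaustive search over the three quoted spaces, keeping the configuration with the best validation performance. In the sketch below, train_and_validate is a hypothetical stand-in for one training run.

    # Minimal grid search over the hyperparameter spaces quoted above.
    from itertools import product

    def train_and_validate(lr, hidden_dim, dropout):
        # Hypothetical stand-in for one full training run; replace with
        # the authors' training loop from the supplementary material.
        return 0.0

    learning_rates = [1e-1, 1e-2, 3e-3, 1e-3]
    hidden_dims = [32, 128, 512]
    dropouts = [0.0, 0.1, 0.2]

    best_score, best_config = float("-inf"), None
    for lr, dim, p in product(learning_rates, hidden_dims, dropouts):
        score = train_and_validate(lr=lr, hidden_dim=dim, dropout=p)
        if score > best_score:
            best_score, best_config = score, (lr, dim, p)

    print("Selected hyperparameters:", best_config)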