Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning
Authors: Renqiang Luo, Huafei Huang, Ivan Lee, Chengpei Xu, Jianzhong Qi, Feng Xia
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. |
| Researcher Affiliation | Academia | ¹Dalian University of Technology, ²University of South Australia, ³The University of New South Wales, ⁴The University of Melbourne, ⁵RMIT University |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical formulas (e.g., equations 1-17) but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In our experiment, the task is node classification, tested on four datasets: Credit, Pokec-z, Pokec-n, and AMiner-L. Specifically, the datasets are labelled as Pokec-z-R and Pokec-n-R when living region is the sensitive feature, and Pokec-z-G and Pokec-n-G when gender is the sensitive feature. More details are shown in Appendix D. |
| Dataset Splits | No | The paper states that "Both SP and EO are evaluated on the test set" and refers to "Appendix A.1" for experimental settings, but the provided text does not explicitly detail the training/validation/test splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions that FairGT runs "OOM" (out of memory) on some datasets and provides a "Training Cost Comparison" in seconds, but it does not specify any hardware details such as GPU models, CPU types, or memory capacity used for the experiments. |
| Software Dependencies | No | The paper mentions 'NumPy in Python' for notation and 'METIS' for graph partitioning but does not provide specific version numbers for these or any other software dependencies used in the experiments. |
| Experiment Setup | Yes | For a fair comparison, we standardize key parameters across all methods, setting the number of hidden dimensions to 128, the number of layers to 1, and the number of heads to 1. In addition, all models are trained for 100 epochs, and runtimes are reported in seconds. |