Fair Distribution of Delivery Orders

Authors: Hadi Hosseini, Shivika Narang, Tomasz Wąs

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "7 Experiments: We now present our experimental results concerning the existence of EF1 and PO allocations and investigate the efficiency loss of fair solutions through their price of fairness. We also check the running time of our algorithm, which we report in the full version of the paper [Hosseini et al., 2023a]. In each experiment, we generated trees, uniformly at random, based on Prüfer sequences [Prüfer, 1918] using the NetworkX Python library [Hagberg et al., 2008]. For each experiment and graph size, we sampled 1,000 trees."
Researcher Affiliation | Academia | 1. Pennsylvania State University; 2. University of New South Wales; 3. University of Oxford
Pseudocode | Yes | Algorithm 1: Find Pareto Frontier(n, G, h)
Open Source Code | Yes | The code for our experiments is available at: https://doi.org/10.5281/zenodo.11149658
Open Datasets | No | The paper states, "In each experiment, we generated trees, uniformly at random, based on Prüfer sequences [Prüfer, 1918] using the NetworkX Python library [Hagberg et al., 2008]." This indicates the use of synthetically generated data rather than a pre-existing, publicly available dataset with concrete access information.
Dataset Splits | No | The paper describes generating trees for its experiments but gives no training/validation/test split details (e.g., percentages, sample counts, or a cross-validation setup).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory, or cloud resources) used to run the experiments.
Software Dependencies | No | The paper mentions Python and the NetworkX library but does not give version numbers for these dependencies (e.g., "NetworkX 2.5" or "Python 3.8").
Experiment Setup | No | The paper describes how trees were generated and sampled, and which parameters were varied (graph size, number of agents), but it does not report setup details such as hyperparameters, learning rates, batch sizes, optimizer settings, or model initialization.
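The quoted experimental setup draws each instance as a uniformly random labeled tree by decoding a random Prüfer sequence; the paper delegates this to NetworkX. A minimal pure-Python sketch of the same construction (the function names here are illustrative, not the paper's):

```python
import heapq
import random

def tree_from_prufer(seq):
    """Decode a Prüfer sequence into the edge list of the tree it encodes.

    A sequence of length n-2 over labels 0..n-1 corresponds to exactly one
    labeled tree on n nodes, so drawing the sequence uniformly at random
    yields a uniformly random labeled tree (Prüfer's bijection).
    """
    n = len(seq) + 2
    degree = [1] * n              # each node's degree: 1 as a leaf...
    for x in seq:
        degree[x] += 1            # ...plus one per occurrence in the sequence
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)         # always attach the smallest-labeled leaf
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:        # x has now become a leaf itself
            heapq.heappush(leaves, x)
    # exactly two nodes remain; they form the final edge
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def random_tree(n, rng=random):
    """Sample a labeled tree on n >= 2 nodes uniformly at random."""
    return tree_from_prufer([rng.randrange(n) for _ in range(n - 2)])
```

NetworkX exposes the same building blocks as `nx.from_prufer_sequence` and, in recent versions, `nx.random_labeled_tree`; the snippet above is only meant to make the sampling scheme described in the quoted passage concrete.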