Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Fair Division via the Cake-Cutting Share

Authors: Yannan Bai, Kamesh Munagala, Yiheng Shen, Ian Zhang

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We finally present empirical results showing that our definitions lead to more reasonable shares than the standard fair share notion of proportionality. ... We compute the approximations on simulated and real problem instances in Section 5 and show that the cake-cutting share (the strictest envy polyhedron) comes closest to having an approximation factor of one, meaning these shares are realizable via an allocation.
Researcher Affiliation Academia Yannan Bai^1, Kamesh Munagala^1, Yiheng Shen^1, Ian Zhang^1; ^1 Duke University, Durham NC 27708-0129. EMAIL
Pseudocode No The paper describes algorithms through mathematical formulations (e.g., Linear Programs in Eq 1, 3, 5) and logical steps, but it does not include explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured procedures in a code-like format.
Open Source Code Yes Code and dataset https://github.com/izhang05/cake-cutting-share
Open Datasets Yes Code and dataset https://github.com/izhang05/cake-cutting-share ... The real data uses Yahoo A1 dataset (Yahoo 2003) which contains bid information for advertisers (agents) on a set of ads (items). ... Yahoo. 2003. A1 Dataset. https://webscope.sandbox.yahoo.com/catalog.php?datatype=a. Accessed: 2021-05-25.
Dataset Splits No The paper describes the generation of instances for simulation and the selection of data from the Yahoo A1 dataset, but it does not specify explicit training, validation, or test splits in the context of machine learning. For example, it states 'We randomly generate 200 fair division instances' and 'We generate 200 instances by selecting a random set of m = 20 ads'.
Hardware Specification No The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or computational resources) used for running the experiments or simulations.
Software Dependencies No The paper mentions the use of Linear Programs (LPs) for computation but does not specify any particular software, libraries, or solvers, along with their version numbers, that were used to implement or solve these LPs.
Experiment Setup Yes In the simulated model, we set n = 25 and m = 3n = 75 for all instances. We randomly generate 200 fair division instances and compute the approximation guarantee given by each fairness notion using the LP in Eq (5). We use the setting in Caragiannis et al. (2019) and set v_i to be a uniformly random integer m-partition that sums to 1000. ... For computing EFS, we use the same setup as before. For each instance, we sample 20 sets W_i of size n - 1 to estimate the expectation in Definition 10.
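The setup row above describes drawing each valuation v_i as a uniformly random integer m-partition summing to 1000. A minimal sketch of one standard way to do this (stars and bars over the gap positions) is below; the helper name `random_partition` and this particular sampler are assumptions for illustration, not the authors' code, which may differ in detail.

```python
import random

def random_partition(total, parts, rng=random):
    """Uniformly random composition of `total` into `parts` nonnegative
    integers, via stars and bars: choose parts-1 cut points among the
    total+parts-1 gap positions, then take successive differences minus one."""
    cuts = sorted(rng.sample(range(1, total + parts), parts - 1))
    points = [0] + cuts + [total + parts]
    return [points[i + 1] - points[i] - 1 for i in range(parts)]

# Setting quoted from the paper: n = 25 agents, m = 3n = 75 items, and each
# agent's valuation is an integer m-partition of 1000.
n, m = 25, 75
valuations = [random_partition(1000, m) for _ in range(n)]
```

The paper generates 200 such instances and then computes the approximation guarantee for each fairness notion by solving the LP in Eq (5), which is not sketched here.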