Fair and Welfare-Efficient Constrained Multi-Matchings under Uncertainty

Authors: Elita Lobo, Justin Payan, Cyrus Cousins, Yair Zick

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We compare these optimization approaches empirically in Section 5 on reviewer assignment data from AAMAS 2015, 2016, and 2021." and, from Section 5 (Experiments), "We run experiments on three reviewer assignment datasets."
Researcher Affiliation | Academia | "Elita Lobo, Justin Payan, Cyrus Cousins, and Yair Zick, University of Massachusetts Amherst, {elobo, jpayan, cbcousins, yzick}@umass.edu"
Pseudocode | No | The paper describes various algorithms (e.g., Iterated QP, Projected SGA) in text, but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks. (A hedged sketch of a generic projected stochastic gradient ascent loop appears below the table.)
Open Source Code | Yes | "All code is available at https://github.com/justinpayan/RAU2."
Open Datasets | Yes | "We run experiments on three reviewer assignment datasets. The datasets contain bids from the International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2015, 2016, and 2021 [41, 42]."
Dataset Splits | No | The paper mentions setting aside a "test set" for binarized bids in Appendix E, and Section 5 states that all results are averaged over 5 subsampling runs of 20% of each dataset, but it does not specify a full train/validation/test split with explicit percentages or sample counts for the models trained. (A sketch of such a subsampling protocol appears below the table.)
Hardware Specification | Yes | "All experiments were run on Xeon E5-2680 v4 @ 2.40GHz machines with 128 GB RAM, with each experiment consuming at most 32 GB of memory."
Software Dependencies | Yes | "When the valuation uncertainty set is polyhedral, the problem in (3) simplifies further into a linear program (LP), which can be solved efficiently using standard LP solvers like Gurobi [28]." (A hedged LP sketch using gurobipy appears below the table.)
Experiment Setup | Yes | "We optimize and evaluate $\mathrm{CVaR}_{0.01}$; we take 4,000 samples from the distribution to optimize for CVaR using the sampling-based approach, and we take 10,000 samples to estimate CVaR for evaluation." and "For each paper $a \in N$, we set $\underline{\kappa}_a = \overline{\kappa}_a = 3$ for all $a$ in AAMAS 2015, and $\underline{\kappa}_a = \overline{\kappa}_a = 2$ for all $a$ in AAMAS 2016 and 2021. For each reviewer $i$, we set $\underline{\psi}_i = 0$ and $\overline{\psi}_i = 15$ for 2015 and 2016, and 4 for 2021." (A sketch of empirical CVaR estimation appears below the table.)
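For the Pseudocode row: since the paper describes Projected SGA only in text, the following is a minimal, hypothetical sketch of a generic projected stochastic gradient ascent loop. The function names, step size, and projection operator are assumptions for illustration and do not reproduce the authors' exact procedure.

```python
import numpy as np

def projected_sga(grad_sample, project, x0, step_size=0.1, iters=1000, seed=0):
    """Generic projected stochastic gradient ascent (illustrative only):
    take an ascent step on a sampled gradient, then project back onto the
    feasible set (e.g., an assignment polytope in a matching problem)."""
    rng = np.random.default_rng(seed)
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        g = grad_sample(x, rng)          # stochastic (sub)gradient of the objective at x
        x = project(x + step_size * g)   # ascent step followed by projection
    return x
```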
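For the Dataset Splits row: a minimal sketch of drawing five independent 20% subsamples, matching the quoted protocol. The helper `subsample_indices` is hypothetical and may not match the authors' exact procedure.

```python
import numpy as np

def subsample_indices(n_items, frac=0.2, n_runs=5, seed=0):
    """Draw n_runs independent subsamples, each covering frac of the items
    (hypothetical helper illustrating the quoted 5 x 20% protocol)."""
    rng = np.random.default_rng(seed)
    k = int(frac * n_items)
    return [rng.choice(n_items, size=k, replace=False) for _ in range(n_runs)]

# e.g. five 20% subsamples of a dataset with 10,000 bids
runs = subsample_indices(10_000)
```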
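For the Software Dependencies row: a minimal sketch of solving a fractional reviewer-paper assignment LP with gurobipy. The toy data, the plain (non-robust) objective, and the load bounds are assumptions for illustration; this is not the paper's robust formulation in (3).

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Toy data: valuation v[i, a] of reviewer i for paper a (assumed for illustration).
rng = np.random.default_rng(0)
n_rev, n_pap = 10, 6
v = rng.random((n_rev, n_pap))
paper_load, max_rev_load = 3, 2   # each paper needs 3 reviews; each reviewer takes at most 2

m = gp.Model("assignment_lp")
x = m.addVars(n_rev, n_pap, lb=0.0, ub=1.0, name="x")   # fractional assignment variables
m.setObjective(gp.quicksum(v[i, a] * x[i, a]
                           for i in range(n_rev) for a in range(n_pap)), GRB.MAXIMIZE)
m.addConstrs((x.sum("*", a) == paper_load for a in range(n_pap)), name="paper_load")
m.addConstrs((x.sum(i, "*") <= max_rev_load for i in range(n_rev)), name="reviewer_load")
m.optimize()
```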
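For the Experiment Setup row: a sketch of an empirical lower-tail CVaR estimate at level 0.01, i.e., the mean of the worst 1% of sampled welfare values. The welfare samples below are placeholders, and the paper's exact estimator may differ.

```python
import numpy as np

def empirical_cvar(samples, alpha=0.01):
    """Lower-tail CVaR at level alpha: mean of the worst alpha-fraction of samples."""
    samples = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * samples.size)))
    return samples[:k].mean()

# e.g. 10,000 placeholder welfare samples used for evaluation
welfare = np.random.default_rng(0).normal(loc=100.0, scale=5.0, size=10_000)
print(empirical_cvar(welfare, alpha=0.01))
```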