Worst-Case VCG Redistribution Mechanism Design Based on the Lottery Ticket Hypothesis

Authors: Mingyu Guo

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The results are summarized in Table 1. The quality of training is naturally the gap between the achieved worst-case allocative efficiency ratio and the theoretical upper bound. We use WCT to denote our worst-case training algorithm.
Researcher Affiliation | Academia | School of Computer and Mathematical Sciences, University of Adelaide, Australia; mingyu.guo@adelaide.edu.au
Pseudocode | Yes | Algorithm 1: Worst-Case Training Algorithm
Open Source Code | No | The paper does not provide any statement about releasing source code, nor a link to a code repository for the described methodology.
Open Datasets | No | The paper does not use pre-existing public datasets; instead, it generates 'random type profiles' and 'worst-case type profiles' as training samples. No links or citations to publicly available datasets are provided.
Dataset Splits | No | The paper describes how training batches are composed of generated type profiles but does not specify train/validation/test splits of a fixed dataset or reference a standard splitting methodology for reproducibility.
Hardware Specification | Yes | The hardware allocated to each job is 1 CPU core from an Intel Xeon Platinum 8360Y (for running MIPs) and 1 GPU core from an Nvidia A100 (for neural network training).
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., specific libraries, frameworks, or solvers).
Experiment Setup | Yes | Run Adam SGD on h with learning rate 0.0001 for 500 epochs. The training batch consists of: the 16 latest calculated worst-case type profiles (i.e., WCP[-16:]); 16 worst-case type profiles randomly sampled from earlier (i.e., from WCP[:-16]); 16 random type profiles; and n + 1 type profiles where the agents either report 1 n/2 or 0 (i.e., the type profiles for deriving the conjectured upper bound of Naroditskiy et al. 2012).
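As a reading aid, the training procedure quoted in the table above can be sketched in code. This is a hedged reconstruction, not the authors' implementation (the paper releases no code): the helper names `find_worst_case_profile`, `sample_random_profile`, and `boundary_profile` are hypothetical stand-ins for the MIP-based worst-case search, random profile generation, and the n + 1 upper-bound profiles, respectively.

```python
import random

def assemble_batch(wcp, n, sample_random_profile, boundary_profile):
    """Compose one training batch as described in the 'Experiment Setup' row.

    wcp                   -- chronological list of worst-case type profiles
    sample_random_profile -- hypothetical helper drawing one random profile
    boundary_profile      -- hypothetical helper producing the k-th of the
                             n + 1 profiles behind the conjectured upper
                             bound (Naroditskiy et al. 2012)
    """
    batch = list(wcp[-16:])                                 # 16 latest worst-case profiles
    earlier = wcp[:-16]
    batch += random.sample(earlier, min(16, len(earlier)))  # up to 16 earlier worst-case profiles
    batch += [sample_random_profile() for _ in range(16)]   # 16 random type profiles
    batch += [boundary_profile(k) for k in range(n + 1)]    # n + 1 upper-bound profiles
    return batch

def worst_case_training(h, n, find_worst_case_profile, train_on_batch,
                        sample_random_profile, boundary_profile, rounds):
    """Alternate worst-case search with gradient training of the network h.

    In the paper the search is a MIP (run on a CPU core) and the training
    step is Adam SGD with learning rate 0.0001 for 500 epochs (run on a
    GPU); both are passed in here as opaque callables, and the loop
    structure itself is an assumption based on the table's description.
    """
    wcp = []  # archive of worst-case type profiles found so far
    for _ in range(rounds):
        wcp.append(find_worst_case_profile(h))
        batch = assemble_batch(wcp, n, sample_random_profile, boundary_profile)
        train_on_batch(h, batch)
    return h, wcp
```

The alternation mirrors the paper's WCT idea: each round the hardest profile found so far is archived, and the network is retrained on a mix of fresh worst cases, older worst cases, random profiles, and the fixed upper-bound profiles.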