Train 'n Trade: Foundations of Parameter Markets

Authors: Tzu-Heng Huang, Harit Vishwakarma, Frederic Sala

NeurIPS 2023

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We validate it in both theoretical and empirical settings. Theoretically, in basic scenarios we show how agent training converges faster through purchasing parameters in the market. We offer bounds on the improvement gained via trading when training linear models. Empirically, we conduct experiments in a variety of practical scenarios to validate the framework's effectiveness.
Researcher Affiliation | Academia | Tzu-Heng Huang, Harit Vishwakarma, Frederic Sala, Department of Computer Science, University of Wisconsin-Madison, {thuang273, hvishwakarma}@wisc.edu, fredsala@cs.wisc.edu
Pseudocode | Yes | Algorithm 1: Single Round of Parameter Trading (a hedged sketch of such a round follows the table).
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | We use MNIST [33], CIFAR10 [34], and Tiny ImageNet [35] for training MLPs and ResNet20 [36].
Dataset Splits | No | The paper mentions a validation dataset for the broker and describes the agents' data endowments, but it does not give explicit training/validation/test splits (e.g., percentages or sample counts) for MNIST, CIFAR10, or Tiny ImageNet, which would be needed for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python, PyTorch, or other library versions) required to replicate the experiments.
Experiment Setup | Yes | Models are trained from different random initializations and batch orders over 60 epochs. Agents trade entire parameter sets and join the market after five epochs. (A sketch of such a loop follows the table.)
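
The paper's Algorithm 1 (a single round of parameter trading) is not reproduced in this report. As a rough illustration only, the sketch below imagines one round with linear least-squares agents, the setting of the paper's theoretical analysis: each agent trains on its own data endowment, and a broker values a candidate purchase on a held-out validation set. The names here (Agent, broker_value, trading_round) are hypothetical, and the paper's actual pricing and negotiation mechanism is elided.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth linear model; each agent sees a different data endowment.
d = 10
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

class Agent:
    """A linear-regression agent that trains by gradient descent."""
    def __init__(self, n_samples, lr=0.05):
        self.X, self.y = make_data(n_samples)
        self.w = np.zeros(d)
        self.lr = lr

    def train_step(self):
        grad = 2 * self.X.T @ (self.X @ self.w - self.y) / len(self.y)
        self.w -= self.lr * grad

# The broker holds a validation set used to value candidate parameters.
X_val, y_val = make_data(500)

def broker_value(w):
    """Negative validation loss: higher is better."""
    return -float(np.mean((X_val @ w - y_val) ** 2))

def trading_round(buyer, seller):
    """One hypothetical round: the buyer purchases the seller's parameter
    set if the broker values it above the buyer's current parameters."""
    gain = broker_value(seller.w) - broker_value(buyer.w)
    if gain > 0:  # price/negotiation logic elided in this sketch
        buyer.w = seller.w.copy()
    return gain

a, b = Agent(n_samples=50), Agent(n_samples=500)
for _ in range(20):
    a.train_step()
    b.train_step()
gain = trading_round(buyer=a, seller=b)
print(f"buyer's validation gain from the trade: {gain:.4f}")
```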
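
Similarly, the reported experiment setup (60 epochs, market entry after five epochs, trades of entire parameter sets) could be organized roughly as follows. Here train_one_epoch and validation_loss are toy stand-ins, not the authors' training or valuation code; in the paper the agents are MLPs and ResNet20 models on MNIST, CIFAR10, and Tiny ImageNet, and the best-parameters-win trading rule below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "model" is just a parameter vector stepping toward a noisy target.
d = 100
target = rng.normal(size=d)

def train_one_epoch(params, lr=0.1):
    # Stand-in for one epoch of local training on the agent's own data.
    return params + lr * (target - params) + 0.01 * rng.normal(size=d)

def validation_loss(params):
    # Stand-in for the broker's held-out validation loss.
    return float(np.mean((params - target) ** 2))

n_agents, n_epochs, join_epoch = 4, 60, 5
# Different random initializations per agent, as in the reported setup.
agents = [rng.normal(size=d) for _ in range(n_agents)]

for epoch in range(1, n_epochs + 1):
    agents = [train_one_epoch(p) for p in agents]
    if epoch > join_epoch:  # agents join the market after five epochs
        # Whole-parameter-set trade: each agent may buy the set the
        # broker's validation loss ranks best (pricing elided).
        best = min(agents, key=validation_loss)
        agents = [best.copy() if validation_loss(p) > validation_loss(best)
                  else p for p in agents]

print("final validation losses:",
      [round(validation_loss(p), 4) for p in agents])
```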