Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Trade-off between Payoff and Model Rewards in Shapley-Fair Collaborative Machine Learning
Authors: Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Sec. 5, we empirically demonstrate sensible characteristics of our proposed allocation scheme in several ML problems. This section empirically illustrates the proposed allocation scheme of both payoff and model rewards in collaborative ML. |
| Researcher Affiliation | Academia | Institute of Data Science, National University of Singapore, Republic of Singapore Dept. of Computer Science, National University of Singapore, Republic of Singapore Dept. of Electrical Engineering and Computer Science, MIT, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/qphong/model-payoff-allocation |
| Open Datasets | Yes | To easily interpret the results, we choose the MNIST dataset [10] which contains 70,000 images of handwritten digits. Additional experiments on the CIFAR-10 dataset [9] and the IMDB movie reviews dataset [11] are in App. J. |
| Dataset Splits | No | In the experiments (in Sec. 5), the model performance is measured by the prediction accuracy of models on a validation set which is common across different parties. (The paper mentions a validation set but does not provide specific details about its size or split percentage.) |
| Hardware Specification | No | The paper answers 'Yes' to the checklist question 'Did you include the total amount of compute and the type of resources used?' However, the provided text does not contain explicit details about the specific hardware (e.g., GPU/CPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers in the main text. |
| Experiment Setup | No | The paper describes the dataset partitioning and the use of a validation set but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, epochs) or optimizer settings in the main text. |