Collaborative Machine Learning with Incentive-Aware Model Rewards
Authors: Rachael Hwee Ling Sim, Yehong Zhang, Mun Choon Chan, Bryan Kian Hsiang Low
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section empirically evaluates the performance and properties of our reward scheme (Sec. 4.2) on Bayesian regression models with the (a) synthetic Friedman dataset with 6 input features (Friedman, 1991), (b) diabetes progression (DiaP) dataset on the diabetes progression of 442 patients with 9 input features (Efron et al., 2004), and (c) Californian housing (CaliH) dataset on the value of 20640 houses with 8 input features (Pace & Barry, 1997). |
| Researcher Affiliation | Academia | 1Department of Computer Science, National University of Singapore, Republic of Singapore. Correspondence to: Bryan Kian Hsiang Low <lowkh@comp.nus.edu.sg>. |
| Pseudocode | No | The paper describes its proposed methods and scheme in prose and mathematical formulations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | This section empirically evaluates the performance and properties of our reward scheme (Sec. 4.2) on Bayesian regression models with the (a) synthetic Friedman dataset with 6 input features (Friedman, 1991), (b) diabetes progression (DiaP) dataset on the diabetes progression of 442 patients with 9 input features (Efron et al., 2004), and (c) Californian housing (CaliH) dataset on the value of 20640 houses with 8 input features (Pace & Barry, 1997). |
| Dataset Splits | No | The paper details how the training data is partitioned among parties, but it does not specify a separate validation dataset or its split percentages/counts for model tuning or evaluation. The evaluation is done on a test dataset. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'Bayesian regression models' and specific kernels ('squared exponential kernel + exponential kernel'), but it does not list any specific software libraries or their version numbers (e.g., 'Python 3.x', 'PyTorch 1.x', 'scikit-learn 0.x'). |
| Experiment Setup | No | The paper describes aspects of the experimental setup, such as the type of models used (e.g., 'Bayesian regression models', 'GP regression'), how data was partitioned among parties, and how noise was injected. However, it does not provide the specific hyperparameters (e.g., learning rate, batch size, number of epochs) or optimizer settings typically reported in an experimental setup section. |
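The three datasets cited in the table are standard public benchmarks. The paper does not state how it obtained or loaded them, but all three have well-known loaders in scikit-learn, so a minimal sketch of reproducing the data side of the experiments (an assumption on our part, since the paper lists no software dependencies) could look like:

```python
from sklearn.datasets import make_friedman1, load_diabetes

# (a) Synthetic Friedman #1 dataset (Friedman, 1991). The sample count is not
# stated in the paper, so 1000 here is an arbitrary choice; n_features=6
# matches the "6 input features" quoted in the table (the first 5 are
# informative in Friedman #1, the rest are noise).
X_fr, y_fr = make_friedman1(n_samples=1000, n_features=6, random_state=0)

# (b) Diabetes progression of 442 patients (Efron et al., 2004). Note that
# scikit-learn's copy ships 10 input features, versus the 9 quoted above.
X_dia, y_dia = load_diabetes(return_X_y=True)

# (c) The Californian housing dataset (Pace & Barry, 1997; 20640 samples,
# 8 features) is available via sklearn.datasets.fetch_california_housing,
# which downloads the data on first use.
```

Any train/test partitioning among parties, noise injection, and Bayesian regression modelling would sit on top of these arrays; as the table notes, the paper does not specify those details precisely enough to reproduce them here.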