Strategic Data Sharing between Competitors

Authors: Nikita Tsoy, Nikola Konstantinov

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For multiple firms, we conduct simulation studies that reveal similar trends. We use the procedure described above to empirically test the conclusions of the previous sections.
Researcher Affiliation | Academia | Nikita Tsoy, INSAIT, Sofia University, Sofia, Bulgaria (nikita.tsoy@insait.ai); Nikola Konstantinov, INSAIT, Sofia University, Sofia, Bulgaria (nikola.konstantinov@insait.ai)
Pseudocode | No | The paper describes procedures verbally (e.g., 'standard backward induction procedure') but does not present any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing open-source code for the described methodology or a link to a code repository.
Open Datasets | No | The paper describes generating synthetic dataset sizes from a normal distribution ('We sample m dataset sizes, one for each firm, from a distribution P = N(µ, σ²) clipped at 1 from below.') rather than using or providing access to a pre-existing public dataset.
Dataset Splits | No | The paper mentions 'train' and 'test' in the context of general machine learning concepts (e.g., 'train a machine learning model') but does not specify any training/validation/test dataset splits used in its own simulations or analysis.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, or cloud instances) used for running its simulations or experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in its simulations.
Experiment Setup | Yes | We repeat the experiment 10000 times, for fixed values of m, γ, β, µ, σ, and compute the mean of the average coalition size over these runs. Our simulation solves each instance of the data-sharing game exactly and averages it over a large number of independent runs, which makes our results very precise. When varying one of these parameters, the default values for the others are γ = 0.8 and β = 0.9. (A hedged sketch of this setup follows the table.)
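To make the quoted setup concrete, here is a minimal Python sketch of the Monte Carlo harness it describes: dataset sizes are drawn from N(µ, σ²) clipped at 1 from below, each game instance is solved, and the average coalition size is averaged over 10000 independent runs. The paper's exact solver (described only as a 'standard backward induction procedure') is not specified on this page, so `solve_data_sharing_game` is a hypothetical placeholder that trivially returns singleton coalitions so the harness runs; the defaults µ = 10 and σ = 2 are likewise illustrative assumptions, while γ = 0.8, β = 0.9, and the 10000 repetitions come from the quoted text.

```python
import numpy as np

def sample_dataset_sizes(m, mu, sigma, rng):
    """Draw m dataset sizes from N(mu, sigma^2), clipped at 1 from below,
    as in the quoted data-generation step."""
    return np.clip(rng.normal(loc=mu, scale=sigma, size=m), 1.0, None)

def solve_data_sharing_game(sizes, gamma, beta):
    """Hypothetical placeholder for the paper's exact game solver.
    It should return the sizes of the coalitions the m firms form;
    this stand-in puts every firm in its own coalition so the
    harness runs end to end."""
    return np.ones(len(sizes))

def mean_average_coalition_size(m, gamma=0.8, beta=0.9, mu=10.0, sigma=2.0,
                                runs=10_000, seed=0):
    """Monte Carlo estimate mirroring the quoted setup: repeat the
    experiment `runs` times for fixed m, gamma, beta, mu, sigma and
    average the per-run average coalition size."""
    rng = np.random.default_rng(seed)
    per_run = np.empty(runs)
    for i in range(runs):
        sizes = sample_dataset_sizes(m, mu, sigma, rng)
        coalition_sizes = solve_data_sharing_game(sizes, gamma, beta)
        per_run[i] = coalition_sizes.mean()
    return per_run.mean()

if __name__ == "__main__":
    # With the placeholder solver this prints 1.0; substituting the
    # paper's backward-induction solver would reproduce its quantity
    # of interest.
    print(mean_average_coalition_size(m=5))
```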