Inferring Private Valuations from Behavioral Data in Bilateral Sequential Bargaining

Authors: Lvye Cui, Haoran Yu

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on both synthetic and real bargaining data show that our inference approach outperforms baselines.
Researcher Affiliation | Academia | School of Computer Science & Technology, Beijing Institute of Technology
Pseudocode | Yes | Algorithm 1: Homogeneous Behavior Learning Algorithm; Algorithm 2: K-Loss Clustering Algorithm
Open Source Code | Yes | The source code and data are available at: https://github.com/cuilvye/Bargaining-project.
Open Datasets | Yes | We also conduct experiments on a large dataset collected from eBay's Best Offer platform [Backus et al., 2020].
Dataset Splits | Yes | For both synthetic data and real data, we randomly select 80% of all threads for training, 10% for validation, and 10% for testing.
Hardware Specification | No | The paper mentions training models and using a GRU but does not specify any hardware details such as GPU/CPU models, memory, or the computing environment used for the experiments.
Software Dependencies | No | The paper mentions using a "gated recurrent unit (GRU)" and the "Adam optimizer" but does not specify software versions for these or for any other libraries/frameworks (e.g., TensorFlow, PyTorch, Python).
Experiment Setup | Yes | The Adam optimizer with a learning rate of 0.001 is applied for our network training. The epoch number T is set to 500 with a batch size of 64, and the weight factor α is set to 0.6.
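The reported setup (an 80/10/10 random split over bargaining threads, Adam with learning rate 0.001, 500 epochs, batch size 64, weight factor α = 0.6) can be sketched as follows. This is a minimal illustration, not the authors' released code: the function name, the seed, and the `HPARAMS` dictionary keys are our own choices; only the numeric values come from the paper.

```python
import random

def split_threads(thread_ids, seed=42):
    """Randomly split bargaining threads 80/10/10 into train/val/test,
    as described in the paper. The seed is illustrative, not from the paper."""
    ids = list(thread_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# Training hyperparameters reported in the paper (key names are ours):
HPARAMS = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "epochs": 500,      # epoch number T
    "batch_size": 64,
    "alpha": 0.6,       # weight factor
}

train, val, test = split_threads(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```

Splitting at the thread level (rather than at the level of individual offers) keeps all offers of one bargaining thread in the same partition, which matches the paper's description of selecting threads for each split.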