Learning Efficient Truthful Mechanisms for Trading Networks
Authors: Takayuki Osogami, Segev Wasserkrug, Elisheva S. Shamash
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that the proposed approach successfully finds the mechanisms with the relaxed properties for trading networks where achieving ex post properties is impossible. [...] Our contribution is an automated mechanism design (AMD; [Conitzer and Sandholm, 2002]) approach for computing or learning desirable mechanisms for trading networks (Section 4-5). This consists of a) formulating an LP whose solution gives a Groves mechanism that satisfies all of the four desirable properties for a trading network with a given distribution of types, b) integrating machine learning techniques into the LP to extend the cases where such mechanisms can be designed, and c) reducing the number of variables in LP or the dimension of the feature vectors in the corresponding learning approach based on special Groves mechanisms. Furthermore, we provide empirical analysis to support the effectiveness of our AMD approaches (Section 6). [...] To show the relevance of the proposed approaches, we conduct experiments on a large number of trading networks where it is impossible to achieve all of the four desirable properties ex post. |
| Researcher Affiliation | Collaboration | Takayuki Osogami (IBM Research), Segev Wasserkrug (IBM Research), Elisheva S. Shamash (Technion, Israel Institute of Technology) |
| Pseudocode | No | The paper describes algorithms and formulations (e.g., as LPs) but does not provide pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | For reproducibility, the source code is open-sourced at https://github.com/IBM/mechanism-learning. |
| Open Datasets | No | The paper states, 'we generate 1,642 non-trivial instances of trading networks uniformly at random' and 'take varying number of samples from the distribution of types'. It does not refer to a publicly available dataset with concrete access information or citations. |
| Dataset Splits | No | The paper states, 'we generate 1,642 non-trivial instances of trading networks uniformly at random'. It discusses taking 'varying number of samples from the distribution of types' for learning, but it does not specify train/validation/test splits, percentages, or absolute sample counts for data partitioning. |
| Hardware Specification | Yes | We run all experiments on a workstation having 64 GB memory and 4.0 GHz CPU. |
| Software Dependencies | Yes | Here, we solve the optimization problem in (13)-(15) via CPLEX 22.1. |
| Experiment Setup | No | The paper does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed system-level training settings for the learning approach beyond using a Gaussian process regressor and the augmented Lagrangian method. It refers to 'lower confidence bound' and 'upper confidence bound' but does not specify their values or how they are set. |
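The Software Dependencies row notes that the paper's mechanism-design LP, equations (13)-(15), is solved with CPLEX 22.1. That LP is not reproduced in this review, so the sketch below is only a toy stand-in: it solves a tiny 2-variable LP by enumerating the vertices of its feasible polygon, which is valid only for small dense problems but illustrates the kind of optimization step a real solver such as CPLEX performs. The example LP, its constraints, and the helper `solve_lp_by_vertices` are all hypothetical and unrelated to the paper's formulation.

```python
from itertools import combinations

# Toy LP (illustrative only, NOT the paper's LP (13)-(15)):
#   maximize  x + 2y
#   s.t.      x + y <= 4,  x <= 3,  x >= 0,  y >= 0
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [
    (1.0, 1.0, 4.0),    # x + y <= 4
    (1.0, 0.0, 3.0),    # x <= 3
    (-1.0, 0.0, 0.0),   # -x <= 0, i.e. x >= 0
    (0.0, -1.0, 0.0),   # -y <= 0, i.e. y >= 0
]

def objective(x, y):
    return x + 2.0 * y

def solve_lp_by_vertices(cons):
    """Brute-force LP solve: an optimum of a bounded feasible LP lies at a
    vertex, so intersect every pair of constraint boundaries and keep the
    feasible intersection with the best objective value."""
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no unique intersection
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        # Keep the vertex only if it satisfies every constraint.
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            val = objective(x, y)
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

opt, x, y = solve_lp_by_vertices(constraints)
print(opt, x, y)  # optimum 8.0 at (0, 4)
```

A production LP, like the one the paper hands to CPLEX, would instead use the simplex or barrier method, which scale to thousands of variables; the vertex enumeration above grows combinatorially and is shown purely for clarity.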