Decongestion by Representation: Learning to Improve Economic Welfare in Marketplaces
Authors: Omer Nahum, Gali Noti, David C. Parkes, Nir Rosenfeld
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We end with an extensive set of experiments that shed light on our proposed setting and learning approach. We first make use of synthetic data to empirically explore our setting and approach. We then use real data of user ratings to elicit user preferences across a set of diverse items. Coupling this with simulated user behavior, we demonstrate the susceptibility of naïve prediction-based methods to harmful congestion, and the ability of our congestion-aware representation learning framework to improve economic outcomes. |
| Researcher Affiliation | Academia | Omer Nahum (Technion, DDS); Gali Noti (Cornell, CS); David C. Parkes (Harvard, SEAS); Nir Rosenfeld (Technion, CS) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks with explicit labels like 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Code for all experiments can be found at: https://github.com/omer6nahum/Decongestion-by-Representation. |
| Open Datasets | Yes | We use the Movielens-100k dataset [11], which contains 100,000 movie ratings from 1,000 users on 1,700 movies, and is publicly available. |
| Dataset Splits | Yes | Given S, we use a 6-fold split to form different partitions into train/test sets. (A hedged sketch of such a split appears below the table.) |
| Hardware Specification | Yes | All experiments were run on a Linux machine with AMD EPYC 7713 64-Core processors. For speedup, runs were each parallelized across 4 CPUs. |
| Software Dependencies | Yes | All code is written in Python. All methods and baselines are implemented and trained with TensorFlow 2.11 using Keras. CE prices were computed using the convex programming package cvxpy. |
| Experiment Setup | Yes | For our method of decongestion by representation (DbR), we optimize Eq. (8) using Adam [16] with a learning rate of 0.01 for a fixed number of 300 epochs. For training the predictive model we used cross-entropy loss as the objective; for optimization we use Adam for 150 epochs, with a learning rate of 1e-3 and batch size of 20. Optimization was carried out using the Adam optimizer for 300 epochs (at which point learning converged for most cases) and with a learning rate of 1e-2. We set N = 20, and use temperatures τGumbel = 2 for the Gumbel softmax, τtop-k = 0.2 for the relaxed top-k, and τf = 0.01 for the softmax in the pre-trained predictive model f. (A hedged configuration sketch follows the table.) |
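
The 6-fold partitioning quoted under Dataset Splits is not tied to any specific library in the text. The following is a minimal sketch assuming scikit-learn's `KFold` (an assumption; the paper's repository may split differently), with a random array standing in for the sampled set `S`:

```python
# Hypothetical 6-fold train/test partitioning; sklearn is assumed, not named in the paper.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
S = rng.standard_normal((120, 16))  # placeholder for the sampled set S of market instances

kf = KFold(n_splits=6, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(S)):
    S_train, S_test = S[train_idx], S[test_idx]
    print(f"fold {fold}: {len(S_train)} train / {len(S_test)} test")
```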
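
The Experiment Setup row lists two optimizer configurations (one for the pre-trained predictor f trained with cross-entropy, one for the DbR objective of Eq. (8)). As a reading aid, here is a minimal, hypothetical sketch of how those reported hyperparameters could be wired up in the TensorFlow 2.11 / Keras stack named under Software Dependencies; the model architecture, loss wiring, and data are placeholders and are not taken from the paper or its repository.

```python
import tensorflow as tf

# Hyperparameter values quoted in the "Experiment Setup" row; everything else is a placeholder.
LR_PREDICTOR, EPOCHS_PREDICTOR, BATCH_SIZE = 1e-3, 150, 20   # pre-trained predictive model f
LR_DBR, EPOCHS_DBR = 1e-2, 300                               # decongestion-by-representation objective
N_GUMBEL, TAU_GUMBEL, TAU_TOPK, TAU_F = 20, 2.0, 0.2, 0.01   # sampling count and relaxation temperatures

# Stand-in predictor f trained with cross-entropy, as described in the row above.
predictor_f = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
predictor_f.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LR_PREDICTOR),
    loss="categorical_crossentropy",
)
# predictor_f.fit(X_train, Y_train, epochs=EPOCHS_PREDICTOR, batch_size=BATCH_SIZE)

# Separate Adam optimizer for the DbR representation objective (Eq. (8) in the paper).
dbr_optimizer = tf.keras.optimizers.Adam(learning_rate=LR_DBR)
```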