Optimal Mechanism in a Dynamic Stochastic Knapsack Environment

Authors: Jihyeok Jung, Chan-Oi Song, Deok-Joo Lee, Kiho Yoon

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Moreover, we propose two algorithms to approximate the proposed mathematical solution, implementing a Monte Carlo (MC) simulation-based regression method and reinforcement learning. To compare the effectiveness of the two approximation algorithms, we conduct numerical experiments under various scenarios, in particular varying the length of the study period.
Researcher Affiliation | Academia | Jihyeok Jung (1), Chan-Oi Song (2), Deok-Joo Lee (1*), Kiho Yoon (2); (1) Department of Industrial Engineering, Seoul National University; (2) Department of Economics, Korea University
Pseudocode | Yes | Algorithm 1: Monte Carlo Simulation and Algorithm 2: Deep Deterministic Policy Gradient method (an illustrative Monte Carlo sketch follows the table).
Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology or a link to a code repository.
Open Datasets | No | The paper uses simulated data drawn from the probability distributions specified in Table 1 (e.g., f(q) ~ Uniform(0, 2), f(v|q) ~ Exponential(q)) rather than external public datasets, so no access information for a publicly available dataset is provided (see the sampling sketch after the table).
Dataset Splits | No | The paper states that 'All the methods are trained by 10,000 episodes and the performances are compared by averaging 20 test episodes,' which describes training and testing episodes but does not provide specific train/validation/test dataset splits or percentages for the data within those episodes.
Hardware Specification | Yes | The simulations were performed on a computer with an Intel Core i7-6700 CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1060 GPU, using the Python NumPy and PyTorch packages with a fixed seed of 1.
Software Dependencies | No | The paper mentions 'Python numpy and Pytorch packages' but does not specify their version numbers.
Experiment Setup | Yes | For DDPG, the learning rates for the actor, critic, and soft update are set to 0.0001, 0.001, and 0.0001, respectively. A minibatch size of 64 is utilized, and the neural network structure encompasses 3 layers, each with 64 nodes. Also, the initial random choices are set to 10% of the total training episodes (see the DDPG configuration sketch after the table).
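
Sampling sketch. The simulated environment described in Table 1 draws each arriving item's quality from Uniform(0, 2) and, conditional on the quality q, its value from Exponential(q), read here as rate q (mean 1/q). A minimal NumPy illustration follows; the function name sample_arrival and the number of draws are illustrative assumptions, and note that NumPy parameterizes the exponential by its scale 1/q rather than the rate q.

    import numpy as np

    # Minimal sketch of the Table 1 distributions: q ~ Uniform(0, 2) and
    # v | q ~ Exponential(q) (rate q, i.e. mean 1/q). The paper reports a
    # fixed seed of 1 for its experiments.
    rng = np.random.default_rng(1)

    def sample_arrival(rng):
        q = rng.uniform(0.0, 2.0)
        while q == 0.0:  # guard against the measure-zero boundary draw
            q = rng.uniform(0.0, 2.0)
        v = rng.exponential(scale=1.0 / q)  # NumPy takes the scale, not the rate
        return q, v

    arrivals = [sample_arrival(rng) for _ in range(10_000)]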
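
Monte Carlo sketch. The paper's Algorithm 1 is identified only as a Monte Carlo simulation-based regression method, and its exact steps are not reproduced here. As a loose illustration of that general idea, the code below simulates episodes of a simple threshold policy over (remaining capacity, remaining periods) states, averages the simulated rewards, and fits a least-squares polynomial surface; the threshold policy, feature map, state grid, and simulation counts are all assumptions rather than the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_reward(capacity, periods, threshold, rng):
        """Simulate one episode of an assumed threshold policy: accept an
        arriving item when its value exceeds `threshold` and capacity remains."""
        total = 0.0
        for _ in range(periods):
            if capacity <= 0:
                break
            q = rng.uniform(0.0, 2.0)
            v = rng.exponential(scale=1.0 / q) if q > 0.0 else 0.0
            if v >= threshold:
                total += v
                capacity -= 1
        return total

    # Monte Carlo regression: average simulated rewards per state, then fit a
    # least-squares polynomial surface over (capacity, remaining periods).
    states, targets = [], []
    for capacity in range(1, 6):
        for periods in range(1, 11):
            sims = [simulate_reward(capacity, periods, threshold=0.5, rng=rng)
                    for _ in range(200)]
            states.append((capacity, periods))
            targets.append(np.mean(sims))

    X = np.array([[1.0, c, t, c * t, c ** 2, t ** 2] for c, t in states])
    coef, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)
    print("fitted continuation-value coefficients:", coef)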
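
DDPG configuration sketch. The reported hyperparameters (actor, critic, and soft-update rates of 0.0001, 0.001, and 0.0001; minibatch size 64; 3 layers of 64 nodes; initial random choices for 10% of the 10,000 training episodes) can be wired up in PyTorch roughly as below. The state and action dimensions, the reading of '3 layers' as three hidden layers, the use of Adam, and the Polyak-style soft update are assumptions; this is a configuration sketch, not the paper's Algorithm 2, and the replay buffer and environment loop are omitted.

    import torch
    import torch.nn as nn

    # Hyperparameters reported in the experiment setup.
    ACTOR_LR, CRITIC_LR, TAU = 1e-4, 1e-3, 1e-4
    BATCH_SIZE, HIDDEN = 64, 64
    N_TRAIN_EPISODES = 10_000
    EXPLORE_EPISODES = int(0.10 * N_TRAIN_EPISODES)  # initial random choices

    # Assumed problem dimensions (not stated in the excerpt): a small state
    # vector and a one-dimensional continuous action (e.g. a posted threshold).
    STATE_DIM, ACTION_DIM = 3, 1

    def mlp(in_dim, out_dim):
        """Three hidden layers of 64 nodes each (one reading of '3 layers, 64 nodes')."""
        return nn.Sequential(
            nn.Linear(in_dim, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, out_dim),
        )

    actor, actor_target = mlp(STATE_DIM, ACTION_DIM), mlp(STATE_DIM, ACTION_DIM)
    critic, critic_target = mlp(STATE_DIM + ACTION_DIM, 1), mlp(STATE_DIM + ACTION_DIM, 1)
    actor_target.load_state_dict(actor.state_dict())
    critic_target.load_state_dict(critic.state_dict())

    actor_opt = torch.optim.Adam(actor.parameters(), lr=ACTOR_LR)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=CRITIC_LR)

    def soft_update(target, source, tau=TAU):
        """Polyak averaging of target-network weights at the reported soft-update rate."""
        with torch.no_grad():
            for t, s in zip(target.parameters(), source.parameters()):
                t.mul_(1.0 - tau).add_(tau * s)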