Robust Online Matching with User Arrival Distribution Drift

Authors: Yu-Hang Zhou, Chen Liang, Nan Li, Cheng Yang, Shenghuo Zhu, Rong Jin

AAAI 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on a real-world dataset exhibit the superiority of our approach." |
| Researcher Affiliation | Industry | Yu-Hang Zhou, Chen Liang, Nan Li, Cheng Yang, Shenghuo Zhu, Rong Jin (Alibaba Group, Hangzhou, China; {zyh174606,liangchen.lc,nanli.ln,charis.yangc,shenghuo.zhu,jinrong.jr}@alibaba-inc.com) |
| Pseudocode | Yes | Algorithm 1 (One-Time Online Primal-Dual Algorithm) and Algorithm 2 (Robust Dynamic Learning Algorithm) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The dataset consists of 245 bidders and millions of search queries collected over 7 days. Due to trade-secret considerations, all reported information about the dataset has been masked. |
| Dataset Splits | No | The paper defines ϵ as the "fraction of users used for training dual variables" (e.g., fixed at 0.1), which implies a training set. It also notes "In practice, we could set this hyper-parameter via cross-validation" for δ, but does not explicitly describe a separate validation split or its percentage/counts for the reported experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as CPU/GPU models or memory specifications. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers for the experiments. |
| Experiment Setup | Yes | The parameter ϵ in RDLA, DLA-geometric, DLA-equal, and OT-PD is fixed at 0.1, and λ in RDLA is set to 10. To examine the influence of δ, the proposed RDLA method is evaluated under several δ settings: 1e-2, 1e-3, and 1e-4. |
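For readers unfamiliar with the ϵ hyper-parameter above, the general pattern it refers to in online primal-dual matching is: learn dual variables on the first ϵ fraction of arriving users, then allocate the remaining users greedily using dual-adjusted scores. The sketch below is a minimal, hypothetical illustration of that pattern only; it is not the paper's RDLA algorithm, and the bidder/user structures, the subgradient update, and all function names are assumptions.

```python
# Hypothetical sketch: train duals on the first eps fraction of users,
# then allocate the rest in one online pass. Not the paper's algorithm.
import random


def train_duals(train_users, bidders, budgets, lr=0.01, epochs=50):
    """Estimate one dual variable per bidder with simple subgradient
    steps on the budget constraints (a stand-in for the offline LP)."""
    duals = {b: 0.0 for b in bidders}
    for _ in range(epochs):
        spend = {b: 0.0 for b in bidders}
        for u in train_users:
            # assign each training user to the bidder with the best
            # dual-discounted bid
            best = max(bidders, key=lambda b: u['bids'][b] * (1 - duals[b]))
            spend[best] += u['bids'][best]
        for b in bidders:
            # raise the dual of over-spending bidders, lower it otherwise,
            # keeping it clipped to [0, 1]
            step = lr * (spend[b] - budgets[b]) / budgets[b]
            duals[b] = min(1.0, max(0.0, duals[b] + step))
    return duals


def allocate(users, bidders, budgets, eps=0.1):
    """One-time scheme: fit duals on the first eps fraction of arrivals,
    then score the remaining arrivals greedily under budget limits."""
    n_train = max(1, int(eps * len(users)))
    duals = train_duals(users[:n_train], bidders, budgets)
    remaining = dict(budgets)
    revenue = 0.0
    for u in users[n_train:]:
        feasible = [b for b in bidders if remaining[b] >= u['bids'][b]]
        if not feasible:
            continue
        best = max(feasible, key=lambda b: u['bids'][b] * (1 - duals[b]))
        remaining[best] -= u['bids'][best]
        revenue += u['bids'][best]
    return revenue, duals
```

The dynamic variants compared in the paper (DLA-geometric, DLA-equal, RDLA) re-learn the duals at multiple checkpoints rather than once, which is what makes them more robust to arrival-distribution drift.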