Sequence-to-Set Generative Models

Authors: Longtao Tang, Ying Zhou, Yu Yang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments using two e-commerce datasets, TMALL and HKTVMALL, and report the experimental results in Section 6. The experimental results clearly show the superiority of our models over the baselines and the effectiveness of our size-bias trick.
Researcher Affiliation | Academia | (1) School of Data Science, City University of Hong Kong, Hong Kong, China; (2) Department of Economics and Finance, City University of Hong Kong, Hong Kong, China; (3) Hong Kong Institute for Data Science, City University of Hong Kong, Hong Kong, China. longttang2-c@my.cityu.edu.hk, {ying.zhou, yuyang}@cityu.edu.hk
Pseudocode | No | The paper describes methods such as importance sampling and an EM perspective, but it does not include explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source code and data used in our experiments can be found at https://github.com/LongtaoTang/SetLearning.
Open Datasets | Yes | We did our empirical study on two real-world datasets of customer orders from online e-commerce platforms. The first dataset is TMALL (https://www.tmall.com)... The second dataset is HKTVMALL (https://www.hktvmall.com)... The datasets can be found in the source code.
Dataset Splits | No | The paper states "We split all the orders in a month to a training dataset S_train and a testing dataset S_test," but does not explicitly mention a validation set split or its size. (An illustrative split sketch appears below the table.)
Hardware Specification | Yes | All the experiments were run on a CPU with 10 cores.
Software Dependencies | No | The paper mentions "PyTorch" as the framework used for the optimizer but does not specify a version number.
Experiment Setup | Yes | We set the embedding dimension of all methods to 10. For all experiments, the MLP of Set NN has only one hidden layer, which contains 50 units. The optimizer is RMSProp with default parameters in PyTorch. (A hedged configuration sketch appears below the table.)
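
For the Dataset Splits row above, the quoted protocol is a simple two-way partition of one month of orders into S_train and S_test. Below is a minimal sketch of such a split; the random shuffle, the 80/20 ratio, and the function name split_orders are illustrative assumptions, since the quoted sentence gives neither a ratio nor a validation set.

```python
import random

def split_orders(orders, train_frac=0.8, seed=0):
    """Randomly partition a list of orders into S_train and S_test.

    The shuffle and the 80/20 default are assumptions for illustration;
    the paper specifies only a train/test split, with no validation set.
    """
    rng = random.Random(seed)
    shuffled = orders[:]  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Example with placeholder order IDs standing in for one month of orders.
orders = list(range(10))
s_train, s_test = split_orders(orders)
```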
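
For the Experiment Setup row, the reported configuration (embedding dimension 10, a Set NN MLP with one 50-unit hidden layer, and RMSProp with PyTorch defaults) can be written as a short PyTorch sketch. The vocabulary size NUM_ITEMS, the sum pooling, and the scalar output head are assumptions added for illustration; the paper's quoted setup fixes only the three hyperparameters above, not the full architecture.

```python
import torch
import torch.nn as nn

NUM_ITEMS = 1000   # hypothetical item-vocabulary size (not given in this row)
EMBED_DIM = 10     # embedding dimension of all methods, per the paper
HIDDEN_UNITS = 50  # one hidden layer with 50 units, per the paper

class SetNN(nn.Module):
    """Minimal stand-in: embed the items of a set, pool, then apply the MLP."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(NUM_ITEMS, EMBED_DIM)
        self.mlp = nn.Sequential(
            nn.Linear(EMBED_DIM, HIDDEN_UNITS),
            nn.ReLU(),
            nn.Linear(HIDDEN_UNITS, 1),
        )

    def forward(self, item_ids):
        # item_ids: (batch, set_size) integer tensor.
        # Sum pooling keeps the score invariant to item order (an assumption;
        # the authors' actual Set NN architecture may differ).
        pooled = self.embedding(item_ids).sum(dim=1)
        return self.mlp(pooled)

model = SetNN()
optimizer = torch.optim.RMSprop(model.parameters())  # PyTorch defaults, per the paper
```

Sum pooling is used here only because it is a common way to make a set-valued input permutation-invariant; nothing in the quoted setup commits the authors to this choice.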