TemPEST: Soft Template-Based Personalized EDM Subject Generation through Collaborative Summarization

Authors: Yu-Hsiu Chen, Pin-Yu Chen, Hong-Han Shuai, Wen-Chih Peng (pp. 7538-7545)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results indicate that TemPEST is able to generate personalized topics and also effectively perform rating reconstruction for recommendation.
Researcher Affiliation Academia Yu-Hsiu Chen, Pin-Yu Chen, Hong-Han Shuai, Wen-Chih Peng; National Chiao Tung University, Hsinchu, Taiwan; {yhchen.cm06g, pinyu.eed04, hhshuai}@nctu.edu.tw, wcpeng@g2.nctu.edu.tw
Pseudocode No The paper describes the proposed model architecture and processes in prose and with diagrams (Figure 1, Figure 2) but does not include any explicitly labeled "Pseudocode" or "Algorithm" block.
Open Source Code Yes More details of dataset and case studies are shown in the supplementary material. https://github.com/yhchen2/TemPEST
Open Datasets Yes Since there is no public dataset for personalized subject summarization, we collect a new one named KKday from the well-known travel experience e-commerce platform in Asia that sells tour packages. The raw data contains the paired tuple (article, subject) of tour package DMs, together with the ID list of users who click the subject. [...] More details of the dataset and case studies are shown in the supplementary material. https://github.com/yhchen2/TemPEST
Dataset Splits Yes For the summarization task, we randomly split the dataset into 14,095 products for training, 1,761 for testing, and 1,761 for validation.
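The reported split (14,095 / 1,761 / 1,761 products) can be reproduced with a simple random partition. The sketch below is an illustration, not the authors' code; the random seed and the use of a shuffled index list are assumptions, since the paper does not specify how the random split was performed.

```python
import random

def split_dataset(products, n_train=14095, n_test=1761, n_val=1761, seed=42):
    """Randomly partition products into train/test/validation sets.

    Split sizes are taken from the paper; the seed value is an assumption.
    """
    items = list(products)
    assert len(items) == n_train + n_test + n_val
    random.Random(seed).shuffle(items)
    train = items[:n_train]
    test = items[n_train:n_train + n_test]
    val = items[n_train + n_test:]
    return train, test, val

# Example with 17,617 dummy product IDs:
train, test, val = split_dataset(range(14095 + 1761 + 1761))
```

Because the three slices cover the shuffled list exactly, every product lands in exactly one split.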
Hardware Specification No The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies No The paper mentions 'The Word2Vec is implemented with gensim and pretrained on the latest Chinese Wikipedia' and 'We use two-layer BiLSTM for both TSE and USE network'. While 'gensim' is a library, no version number is provided for it or any other specific software component used in their implementation.
Experiment Setup Yes We use a two-layer BiLSTM for both the TSE and USE networks, with a hidden state size of 500. The learning rate and dropout rate are set to 0.001 and 0.3, respectively.
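The reported encoder settings (two-layer bidirectional LSTM, hidden size 500, dropout 0.3, learning rate 0.001) can be sketched in PyTorch. This is a hedged reconstruction, not the authors' implementation: the vocabulary size and embedding dimension are placeholders, and the choice of Adam as optimizer is an assumption the paper does not confirm.

```python
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """Two-layer BiLSTM encoder with hidden size 500 and dropout 0.3,
    matching the TSE/USE settings reported in the paper.
    vocab_size and emb_dim are illustrative assumptions."""
    def __init__(self, vocab_size=50000, emb_dim=300, hidden=500, dropout=0.3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True,
                            dropout=dropout)

    def forward(self, token_ids):
        # Returns (outputs, (h_n, c_n)); outputs concatenate both directions.
        return self.lstm(self.embed(token_ids))

encoder = SequenceEncoder()
# Optimizer with the reported learning rate (Adam is an assumed choice).
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.001)
```

With `bidirectional=True` and hidden size 500, each timestep's output is 1000-dimensional (forward and backward states concatenated).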