Open Domain Event Text Generation

Authors: Zihao Fu, Lidong Bing, Wai Lam

AAAI 2020, pp. 7748-7755

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply the proposed model on the Wiki Event dataset and compare it with a few baselines. The experimental results show that our carefully-designed architecture does help generate better event text, and extensive analysis further uncovers the characteristics of the proposed task.
Researcher Affiliation | Collaboration | Zihao Fu¹, Lidong Bing², Wai Lam¹. ¹Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong; ²DAMO Academy, Alibaba Group.
Pseudocode | No | The paper describes the components and their functions using text and mathematical equations, but it does not include a dedicated pseudocode block or algorithm listing.
Open Source Code | Yes | We build a new dataset called Wiki Event that provides 34K pairs of entity chain and its corresponding description sentences. Available at https://github.com/fuzihaofzh/Wiki Event
Open Datasets | Yes | We build a new dataset called Wiki Event that provides 34K pairs of entity chain and its corresponding description sentences. Available at https://github.com/fuzihaofzh/Wiki Event
Dataset Splits | Yes | We finally obtain a set of 34,000 (entity chain, event text) pairs, and split it into train, development (dev), and test sets. The statistics of the final dataset are shown in Table 1.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions using NLTK and Elasticsearch but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | We set both the embedding size and the hidden size to 500, the dropout rate to 0.3, the optimization algorithm to SGD with the initial learning rate of 1.0 and the decay rate of 0.5. The random drop ratio is set to 0.5 and the retrieved sentence number is set to 1 for each entity.
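
For readers attempting a re-implementation, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration object. The sketch below is a minimal illustration in Python; the variable and key names are assumptions chosen for readability and do not come from the authors' released code, only the numeric values are taken from the paper.

```python
# Minimal sketch of the reported training configuration.
# Key names are assumed; values are those quoted in the Experiment Setup row.
config = {
    "embedding_size": 500,            # word embedding dimension
    "hidden_size": 500,               # encoder/decoder hidden dimension
    "dropout": 0.3,                   # dropout rate
    "optimizer": "sgd",               # optimization algorithm
    "initial_learning_rate": 1.0,     # initial SGD learning rate
    "learning_rate_decay": 0.5,       # learning rate decay rate
    "random_drop_ratio": 0.5,         # random drop ratio
    "retrieved_sentences_per_entity": 1,  # retrieved sentences per entity
}

if __name__ == "__main__":
    # Print the settings so the configuration can be verified at a glance.
    for name, value in config.items():
        print(f"{name}: {value}")
```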