Document Summarization with VHTM: Variational Hierarchical Topic-Aware Mechanism

Authors: Xiyan Fu, Jun Wang, Jinghan Zhang, Jinmao Wei, Zhenglu Yang

AAAI 2020, pp. 7740-7747 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments validate the superior performance of VHTM compared with the baselines, accompanied by semantically consistent topics. The paper includes Experiments, Dataset, Experimental Settings, Evaluation, Baselines, Results, Quantitative Analysis, and Ablation Study sections.
Researcher Affiliation | Academia | Xiyan Fu (1), Jun Wang (2), Jinghan Zhang (1), Jinmao Wei (1), Zhenglu Yang (1); (1) College of Computer Science, Nankai University, China; (2) Ludong University, China
Pseudocode | No | The paper describes its model and methods in prose and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement regarding the release of source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | We choose the CNN/Daily Mail corpus as the benchmark dataset. The CNN/Daily Mail dataset comprises online news documents (761 tokens on average) paired with multi-sentence summaries (46 tokens on average). For training and testing efficiency, we use the script of (See, Liu, and Manning 2017), which contains 287,226 training pairs, 13,368 validation pairs, and 11,490 test pairs.
Dataset Splits | Yes | For training and testing efficiency, we use the script of (See, Liu, and Manning 2017), which contains 287,226 training pairs, 13,368 validation pairs, and 11,490 test pairs.
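For reference, the reported split sizes can be sanity-checked with a short sketch; the pair counts are taken directly from the paper, while the percentage breakdown is derived arithmetic rather than a figure stated by the authors.

```python
# Sketch: split sizes reported for the CNN/Daily Mail preprocessing script
# of See, Liu, and Manning (2017); percentages are derived here.
splits = {"train": 287_226, "validation": 13_368, "test": 11_490}

total = sum(splits.values())  # 312,084 document-summary pairs in total
for name, count in splits.items():
    print(f"{name:>10}: {count:>7,} pairs ({count / total:6.2%})")
```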
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using BERT but does not provide specific version numbers for BERT or any other software dependencies, such as programming languages or libraries like TensorFlow or PyTorch.
Experiment Setup | Yes | For the hierarchical topic-aware mechanism of VHTM, we set the topic number K to 50, the dimension of the topic representation to 200, and the scale of the topic vocabulary to 20,000. Both topic embedding and topic attention share the same topic-related parameters. The dimension of f( ), a three-layer feed-forward neural network, is set equal to the topic dimension. Meanwhile, we set the number of paragraphs to 3 for the topic attention mechanism of VHTM. Because the dimension of the word dense vector obtained from BERT is 768, we set the same dimension for the topic representation of each document for better fusion. In terms of the basic framework, we follow the settings in (See, Liu, and Manning 2017).
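As a compact reference, the reported settings could be collected into a single configuration sketch like the one below. The field names are illustrative assumptions, since the authors did not release code; only the values are taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class VHTMConfig:
    """Hyperparameters reported in the paper's experimental settings.

    Field names are illustrative; they do not come from the authors' code.
    """
    topic_num: int = 50             # number of topics K
    topic_dim: int = 200            # dimension of each topic representation
    topic_vocab_size: int = 20_000  # scale of the topic vocabulary
    num_paragraphs: int = 3         # paragraphs used by the topic attention mechanism
    doc_topic_dim: int = 768        # document-level topic dim, matched to BERT word vectors
    ff_layers: int = 3              # f(.) is a three-layer feed-forward network


config = VHTMConfig()
print(config)
```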