Towards Topic-Aware Slide Generation For Academic Papers With Unsupervised Mutual Learning

Authors: Da-Wei Li, Danqing Huang, Tingting Ma, Chin-Yew Lin (pp. 13243-13251)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation results on a labeled test set show that our model can extract more relevant sentences than baseline methods. Human evaluation also shows slides generated by our model can serve as a good basis for preparing the final presentations.
Researcher Affiliation | Collaboration | 1. School of Software and Microelectronics, Peking University; 2. Microsoft Research Asia; 3. Harbin Institute of Technology
Pseudocode | Yes | Algorithm 1: Training paradigm based on mutual learning
Open Source Code | Yes | Our annotation and code can be found at https://github.com/daviddwlee84/TopicAwarePaperSlideGeneration
Open Datasets | Yes | We use the ACL Anthology Reference Corpus (Bird et al. 2008) as the unlabeled corpus of papers for our unsupervised learning.
Dataset Splits | No | The paper describes the creation of a 'labeled test set' by annotating 100 papers, but does not specify train/validation/test splits for the model's training, nor does it explicitly mention a separate validation set.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper mentions software components and models like GRU, GloVe, Adam, and BERT-QA, but does not provide specific version numbers for any of these software dependencies.
Experiment Setup | Yes | The word embedding matrix was initialized using pre-trained 50-dimension GloVe vectors... We use Adam (Kingma and Ba 2015) as our optimizing algorithm. The learning rate for Adam optimizer α is set to 0.001. We use dropout (Srivastava et al. 2014) as regularization with probability p = 0.3 after the sentence level encoder and p = 0.2 after the document level encoder. The training process stops when the loss of two classifiers converges. Maximum training epochs are set to 20.
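
The Pseudocode and Experiment Setup rows together outline the training recipe: two classifiers trained with Adam (lr = 0.001), dropout of 0.3 after the sentence-level encoder and 0.2 after the document-level encoder, 50-dimension GloVe embeddings, and at most 20 epochs. The sketch below is a minimal, hypothetical illustration of one mutual-learning update under those settings, assuming a PyTorch implementation; the module and function names are illustrative, and the paper's Algorithm 1 may use different loss terms and a different stopping criterion.

```python
# Hypothetical sketch of one mutual-learning update between two sentence
# classifiers, assuming a PyTorch implementation. Names are illustrative.
# Hyperparameters follow the quoted setup: 50-d GloVe embeddings, Adam with
# lr = 0.001, dropout 0.3 / 0.2, and at most 20 training epochs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceClassifier(nn.Module):
    """Hierarchical GRU encoder that scores each sentence of a paper."""
    def __init__(self, embed_dim=50, hidden=128):
        super().__init__()
        self.sent_enc = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.sent_drop = nn.Dropout(p=0.3)   # after the sentence-level encoder
        self.doc_enc = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.doc_drop = nn.Dropout(p=0.2)    # after the document-level encoder
        self.scorer = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embeds):          # (num_sents, max_words, embed_dim)
        _, h = self.sent_enc(sent_embeds)                        # h: (2, num_sents, hidden)
        sents = self.sent_drop(torch.cat([h[0], h[1]], dim=-1))  # (num_sents, 2*hidden)
        doc, _ = self.doc_enc(sents.unsqueeze(0))                # sentences in document order
        return self.scorer(self.doc_drop(doc)).squeeze()         # one relevance logit per sentence

model_a, model_b = SentenceClassifier(), SentenceClassifier()
opt_a = torch.optim.Adam(model_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(model_b.parameters(), lr=1e-3)

def mutual_step(sent_embeds):
    """Each classifier fits the other's (detached) sentence-relevance predictions."""
    logits_a, logits_b = model_a(sent_embeds), model_b(sent_embeds)
    loss_a = F.binary_cross_entropy_with_logits(logits_a, torch.sigmoid(logits_b.detach()))
    loss_b = F.binary_cross_entropy_with_logits(logits_b, torch.sigmoid(logits_a.detach()))
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()

# Training-loop skeleton: iterate over the unlabeled papers for up to 20 epochs
# and stop early once the two classifiers' losses converge, as in the quoted setup.
```

In this sketch each model is supervised only by the other's soft predictions on unlabeled papers, which is the core exchange in mutual learning; the paper's two classifiers need not share this architecture (the Software Dependencies row mentions both GRU and BERT-QA components), and its actual objective may combine additional terms.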