Phrase-Based Presentation Slides Generation for Academic Papers
Authors: Sida Wang, Xiaojun Wan, Shikang Du
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluation results on a real dataset verify the efficacy of our proposed approach. |
| Researcher Affiliation | Academia | Institute of Computer Science and Technology, The MOE Key Laboratory of Computational Linguistics, Peking University, Beijing 100871, China |
| Pseudocode | Yes | The details of the algorithms are illustrated in Algorithm 1 and Algorithm 2. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | No | We randomly collected 175 pairs of paper and slides in the computer science field in the same way as in (Hu and Wan 2013). This describes their data collection method but does not provide access to their collected dataset or refer to a standard publicly available dataset with concrete access information. |
| Dataset Splits | Yes | In our experiments, 100 pairs of paper and slides are used for training, 25 for validation and 50 for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The scikit-learn toolkit is used and the probability of prediction is acquired through the API function predict_proba. While a software toolkit is mentioned, no specific version number is provided for scikit-learn or any other library. |
| Experiment Setup | No | The paper describes general aspects of its methods (e.g., random forest classifier, probability thresholds) but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, epochs for model training) or detailed training configurations. |
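The paper's stated pipeline (a random forest classifier whose class probabilities are obtained through scikit-learn's `predict_proba`) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the features, labels, and `n_estimators` value here are placeholder assumptions, since the paper does not report its hyperparameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the paper's sentence/phrase features
# and importance labels (the actual features are not reproduced here).
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = rng.integers(0, 2, size=100)

# Random forest classifier, as named in the paper; n_estimators is an
# assumed value, not one reported by the authors.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Probability of the positive class via predict_proba, the API function
# the paper says it uses to obtain prediction probabilities.
proba = clf.predict_proba(X)[:, 1]
```

Such probabilities would then be thresholded to select important content, which is why the unreported threshold values matter for reproducibility.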