Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization
Authors: Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on AMI and ICSI meeting datasets show that our full system can achieve SOTA performance |
| Researcher Affiliation | Academia | Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng; Harbin Institute of Technology, China |
| Pseudocode | No | The paper describes its model architecture using diagrams and mathematical equations, but it does not include a distinct 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our codes and outputs are available at: https://github.com/xcfcode/DDAMS/. |
| Open Datasets | Yes | We experiment on AMI [Carletta et al., 2005] and ICSI [Janin et al., 2003] datasets. |
| Dataset Splits | Yes | We preprocess the dataset into train, valid and test sets for AMI (97/20/20) and ICSI (53/25/6) separately following Shang et al. [2018]. |
| Hardware Specification | No | The paper discusses model architecture and training but does not specify any particular hardware (e.g., GPU models, CPU types) used for experiments. |
| Software Dependencies | No | The paper mentions using a 'SOTA dialogue discourse parser', 'BiLSTM', and the 'PyRouge package' but does not specify their version numbers or other software dependencies with versions. |
| Experiment Setup | No | The paper describes the model architecture and training objective but does not provide specific experimental setup details such as learning rates, batch sizes, or optimizer settings. |