Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization
Authors: Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on AMI and ICSI meeting datasets show that our full system can achieve SOTA performance |
| Researcher Affiliation | Academia | Xiachong Feng, Xiaocheng Feng, Bing Qin, Xinwei Geng, Harbin Institute of Technology, China {xiachongfeng, xcfeng, bqin, xwgeng}@ir.hit.edu.cn |
| Pseudocode | No | The paper describes its model architecture using diagrams and mathematical equations, but it does not include a distinct 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our codes and outputs are available at: https://github.com/xcfcode/DDAMS/. |
| Open Datasets | Yes | We experiment on AMI [Carletta et al., 2005] and ICSI [Janin et al., 2003] datasets. |
| Dataset Splits | Yes | We preprocess the dataset into train, valid and test sets for AMI (97/20/20) and ICSI (53/25/6) separately following Shang et al. [2018]. |
| Hardware Specification | No | The paper discusses model architecture and training but does not specify any particular hardware (e.g., GPU models, CPU types) used for experiments. |
| Software Dependencies | No | The paper mentions using a 'SOTA dialogue discourse parser', 'BiLSTM', and the 'PyRouge package' but does not specify version numbers for these or for any other software dependencies. |
| Experiment Setup | No | The paper describes the model architecture and training objective but does not provide specific experimental setup details such as learning rates, batch sizes, or optimizer settings. |
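The Dataset Splits row quotes fixed train/valid/test partitions of 97/20/20 meetings for AMI and 53/25/6 for ICSI, following Shang et al. [2018]. The sketch below is a minimal, hypothetical illustration of such a partitioning step; the meeting-ID lists, function name, and random shuffling are assumptions for illustration only, since the actual assignment is a fixed published split produced by the preprocessing scripts in the DDAMS repository.

```python
# Hypothetical sketch of partitioning meeting IDs into train/valid/test
# subsets with the sizes quoted in the Dataset Splits row
# (AMI 97/20/20, ICSI 53/25/6). Not the authors' preprocessing code.
import random

def split_meetings(meeting_ids, n_train, n_valid, n_test, seed=42):
    """Partition a list of meeting IDs into train/valid/test subsets."""
    assert len(meeting_ids) == n_train + n_valid + n_test
    ids = list(meeting_ids)
    # Placeholder shuffle: the paper follows a fixed split from Shang et al. [2018].
    random.Random(seed).shuffle(ids)
    return {
        "train": ids[:n_train],
        "valid": ids[n_train:n_train + n_valid],
        "test": ids[n_train + n_valid:],
    }

# Placeholder IDs; real AMI/ICSI meeting IDs (e.g. "ES2002a") come from the corpora.
ami_ids = [f"AMI_{i:03d}" for i in range(137)]   # 97 + 20 + 20
icsi_ids = [f"ICSI_{i:03d}" for i in range(84)]  # 53 + 25 + 6
ami_splits = split_meetings(ami_ids, 97, 20, 20)
icsi_splits = split_meetings(icsi_ids, 53, 25, 6)
print(len(ami_splits["train"]), len(icsi_splits["test"]))  # 97 6
```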