Enhancing Text Generation via Multi-Level Knowledge Aware Reasoning
Authors: Feiteng Mu, Wenjie Li
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our method on two widely used datasets, experimental results demonstrate the effectiveness of our framework to text generation. |
| Researcher Affiliation | Academia | Feiteng Mu, Wenjie Li; The Department of Computing, The Hong Kong Polytechnic University, Hong Kong; {csfmu,cswjli}@comp.polyu.edu.hk |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | Yes | The stories come from ROCStories [Mostafazadeh et al., 2016] corpus. Following [Yao et al., 2019], we randomly split the dataset into 8:1:1 for training, validating and testing. Abductive NLG (αNLG) is to generate an explanatory hypothesis given two observations: O1 as the cause and O2 as the consequence. We use the official data split. |
| Dataset Splits | Yes | Following [Yao et al., 2019], we randomly split the dataset into 8:1:1 for training, validating and testing. (A split sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the BART model and the Adam optimizer but does not specify software names or version numbers for reproducibility. |
| Experiment Setup | Yes | To train the model, we use the Adam optimizer with β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁶ and linearly decrease learning rate to zero with no warmup. We search for the best hyper-parameters according to BLEU-2 on the development set of each dataset. At the inference stage, we adopt beam search decoding with a beam size of 3 for our model and all the baselines we produce. (A configuration sketch appears below the table.) |
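
The 8:1:1 split quoted above is straightforward to reproduce. The sketch below is a minimal illustration assuming a plain Python list of examples; the shuffle seed is an assumption, since the paper does not report one.

```python
# Hypothetical sketch of an 8:1:1 random split (following Yao et al., 2019).
# The shuffle seed is an assumption; the paper does not report one.
import random

def split_8_1_1(examples, seed=0):
    """Shuffle `examples` and return (train, dev, test) in an 8:1:1 ratio."""
    rng = random.Random(seed)
    examples = list(examples)
    rng.shuffle(examples)
    n = len(examples)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    return (examples[:n_train],
            examples[n_train:n_train + n_dev],
            examples[n_train + n_dev:])

# Example: 100 dummy stories -> 80 train, 10 dev, 10 test.
train, dev, test = split_8_1_1(range(100))
print(len(train), len(dev), len(test))  # 80 10 10
```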
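Likewise, the reported training and inference configuration maps onto a standard PyTorch + Hugging Face `transformers` setup. The sketch below assumes that stack (the paper names BART and Adam but no libraries or versions); the checkpoint name, learning rate, and step count are placeholders, since the paper only states that hyper-parameters were tuned on the dev set.

```python
# A minimal sketch of the reported setup, assuming a PyTorch + Hugging Face
# `transformers` stack. Checkpoint, learning rate, and step count are
# placeholders, not values taken from the paper.
import torch
from transformers import (BartForConditionalGeneration, BartTokenizer,
                          get_linear_schedule_with_warmup)

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

# Adam with beta1 = 0.9, beta2 = 0.999, eps = 1e-6, as reported.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5,  # lr is an assumption
                             betas=(0.9, 0.999), eps=1e-6)

# Learning rate decays linearly to zero with no warmup, as reported.
total_steps = 10_000  # placeholder; not given in the paper
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps)

# Inference: beam search decoding with a beam size of 3.
inputs = tokenizer("Two observations to explain ...", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=3, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```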