Salience Estimation via Variational Auto-Encoders for Multi-Document Summarization
Authors: Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, Lidong Bing
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the benchmark datasets DUC and TAC show that our framework achieves better performance than the state-of-the-art models. |
| Researcher Affiliation | Collaboration | Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, Lidong Bing. Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong; University College London, London, UK; AI Platform Department, Tencent Inc., Shenzhen, China |
| Pseudocode | No | The paper describes its methodology through narrative text and mathematical equations, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper mentions using the lp_solve package and provides its URL (http://lpsolve.sourceforge.net/5.5/), but there is no statement or link indicating that the authors' own source code for the proposed framework is publicly available. |
| Open Datasets | Yes | The standard MDS datasets from DUC and TAC are used in our experiments. DUC 2006 and DUC 2007 contain 50 and 45 topics respectively... TAC 2011 is the latest standard summarization benchmark data set... |
| Dataset Splits | Yes | TAC 2010 is used as the parameter tuning data set of our TAC evaluation. |
| Hardware Specification | Yes | Our neural network based framework is implemented using Theano (Bastien et al. 2012) on a single GPU (Tesla K80: 1 Kepler GK210, 2496 CUDA cores, 12 GB GDDR5 memory). |
| Software Dependencies | No | The paper mentions 'Theano' and 'Adam' as software used, but does not provide specific version numbers for them. It mentions lp_solve version 5.5, but this is a third-party ILP solver used in the final summary generation step, not a core dependency of the neural network framework itself. |
| Experiment Setup | Yes | For the number of aspects, we let m = 5. For the neural network framework, we set the hidden size dh = 500 and the latent size K = 100. For the optimization objective, we let λz = 1, λh = 400, λx = 800, and λ = 1. Adam (Kingma and Ba 2014) is used for gradient based optimization with a learning rate 0.001. |
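
The Experiment Setup row lists all reported hyperparameters in running text. For quick reference, the sketch below collects them into a single configuration block. This is a hypothetical Python-style config, not the authors' code (which is not publicly released); the key names are illustrative, and only the values are taken from the paper.

```python
# Hyperparameters reported in the paper, gathered into one place.
# Key names are illustrative; values are quoted from the paper.
config = {
    "num_aspects": 5,        # m = 5
    "hidden_size": 500,      # d_h = 500
    "latent_size": 100,      # K = 100 (dimension of the VAE latent variable)
    "lambda_z": 1,           # weights in the optimization objective
    "lambda_h": 400,
    "lambda_x": 800,
    "lambda": 1,             # the standalone lambda term
    "optimizer": "Adam",     # Kingma and Ba 2014
    "learning_rate": 0.001,
}
```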