Weakly-Supervised Opinion Summarization by Leveraging External Information

Authors: Chao Zhao, Snigdha Chaturvedi (pp. 9644-9651)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate this method on both aspect identification and opinion summarization tasks. Our experiments show that ASPMEM outperforms the state-of-the-art methods even though, unlike the baselines, it does not rely on human supervision which is carefully handcrafted for the given tasks."
Researcher Affiliation | Academia | "Chao Zhao, Snigdha Chaturvedi, Department of Computer Science, University of North Carolina at Chapel Hill, zhaochaocs@gmail.com, snigdha@cs.unc.edu"
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link in footnote 4 ('Available on https://github.com/zhaochaocs/AspMem'), but this link is explicitly for 'the external data from six categories' (Table 1), not for the source code of the method presented in the paper. There is no explicit statement releasing the code.
Open Datasets | Yes | "We utilize OPOSUM, a review summarization dataset provided by Angelidis and Lapata (2018), to test the efficiency of the proposed method. This dataset contains about 350K reviews from the Amazon review dataset (He and McAuley 2016)..."
Dataset Splits | Yes | "The annotated dataset is split into two equal parts for validation and test."
Hardware Specification | No | The paper does not describe the hardware used to run its experiments; it only mentions training details such as 'pre-trained word embeddings' and 'batch size' without any hardware specifics.
Software Dependencies | No | The paper mentions software such as CoreNLP (Socher et al. 2013) and Gurobi (footnote 3: http://www.gurobi.com/), but it does not provide specific version numbers for these components. The year '2013' for CoreNLP refers to a citation, not a formal version number, and the Gurobi link points to the general website rather than a version-specific page.
Experiment Setup | Yes | "We use 200-dimensional word embeddings... We train the model with batch size of 300, and optimize the objective using Adam... with a fixed learning rate of 0.001 and an early stopping on the development set. The λ is set as 100. ... The similarity threshold δ is set as 0.3. The length of the summary is limited to 100 words or less..." (see the configuration sketch below the table)
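
The reported hyperparameters from the Experiment Setup row and the 50/50 validation/test split from the Dataset Splits row can be collected into a small reproduction sketch. The snippet below only restates those published values; the configuration key names, the split helper, and the seeded shuffling are assumptions introduced for illustration, since the authors did not release their training code.

```python
# Minimal reproduction sketch assembled from the quoted "Experiment Setup" and
# "Dataset Splits" rows above. The key names, the split helper, and the use of
# seeded shuffling are illustrative assumptions -- this is not the authors'
# released implementation.
import random

TRAINING_CONFIG = {
    "embedding_dim": 200,         # 200-dimensional word embeddings
    "batch_size": 300,            # batch size of 300
    "optimizer": "adam",          # objective optimized with Adam
    "learning_rate": 1e-3,        # fixed learning rate of 0.001
    "early_stopping": True,       # early stopping on the development set
    "lambda": 100,                # λ is set as 100
    "similarity_threshold": 0.3,  # similarity threshold δ is set as 0.3
    "max_summary_words": 100,     # summary limited to 100 words or less
}

def split_annotated_reviews(annotated_reviews, seed=0):
    """Split the annotated data into two equal parts for validation and test.

    The paper only states that the split is 50/50; the shuffling and the seed
    here are assumptions made so the example is runnable and deterministic.
    """
    reviews = list(annotated_reviews)
    random.Random(seed).shuffle(reviews)
    midpoint = len(reviews) // 2
    return reviews[:midpoint], reviews[midpoint:]  # (validation, test)
```

Usage would look like `val_set, test_set = split_annotated_reviews(annotated_reviews)`, where `annotated_reviews` stands for the annotated portion of OPOSUM; how that data is loaded is not specified in the paper.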