Hyperbolic Disentangled Representation for Fine-Grained Aspect Extraction
Authors: Chang-You Tai, Ming-Yao Li, Lun-Wei Ku
AAAI 2022, pp. 11358-11366 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared to previous baselines, HDAE achieves average F1 performance gains of 18.2% and 24.1% on the Amazon product review and restaurant review datasets, respectively. In addition, the embedding visualization experiments demonstrate that HDAE is a more effective approach to leveraging seed words. An ablation study and a case study further attest to the effectiveness of the proposed components. Datasets: We used Amazon product reviews from the OPOSUM dataset (Angelidis and Lapata 2018) and restaurant reviews from the SemEval-2016 Aspect-Based Sentiment Analysis task (Pontiki et al. 2016). |
| Researcher Affiliation | Academia | Chang-You Tai1, Ming-Yao Li1, Lun-Wei Ku1 1Academia Sinica, Taipei, Taiwan |
| Pseudocode | Yes | Algorithm 1: HDAE Learning (the algorithm itself is not reproduced here; a generic, hedged sketch of this family of training loops appears below the table) |
| Open Source Code | Yes | The code is at https://github.com/johnnyjana730/HDAE/ |
| Open Datasets | Yes | Datasets: We used Amazon product reviews from the OPOSUM dataset (Angelidis and Lapata 2018) and restaurant reviews from the SemEval-2016 Aspect-Based Sentiment Analysis task (Pontiki et al. 2016). |
| Dataset Splits | No | The paper states 'During training, seed words are provided but not segment aspect labels. Details are provided in the appendix.' and refers to standard datasets, but the main text does not give explicit training/validation/test split percentages, sample counts, or a splitting methodology (an illustrative split sketch appears below the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions parameters β, c, τ, and λ and the use of the Adam optimizer, but does not give their specific values (hyperparameters, learning rate, batch size, number of epochs, etc.) in the main text; it defers the parameter sensitivity study to an appendix in the arXiv version (a placeholder configuration sketch appears below the table). |
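The Pseudocode row refers to the paper's Algorithm 1 (HDAE Learning), which is not reproduced in this report. Purely for orientation, the following is a minimal sketch of the generic attention-based aspect-autoencoder components that seed-word-guided models in this line of work (ABAE-style models) build on, plus the standard Poincaré-ball exponential map (Ganea et al. 2018) that hyperbolic embedding methods typically use. This is not the authors' HDAE algorithm: the class `AspectAutoencoder`, both loss helpers, and every hyperparameter here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AspectAutoencoder(nn.Module):
    """ABAE-style aspect autoencoder: a simplified sketch of the model
    family Algorithm 1 builds on, NOT the paper's HDAE model."""

    def __init__(self, vocab_size, emb_dim=100, n_aspects=9):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Aspect embedding matrix: one row per aspect.
        self.aspect_emb = nn.Parameter(torch.randn(n_aspects, emb_dim))
        self.attn = nn.Linear(emb_dim, emb_dim, bias=False)

    def forward(self, token_ids):
        e = self.word_emb(token_ids)                     # (B, L, D)
        avg = e.mean(dim=1, keepdim=True)                # (B, 1, D)
        scores = torch.bmm(self.attn(e), avg.transpose(1, 2)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                # attention over tokens
        z = torch.bmm(alpha.unsqueeze(1), e).squeeze(1)  # segment vector
        p = F.softmax(z @ self.aspect_emb.t(), dim=-1)   # aspect distribution
        r = p @ self.aspect_emb                          # reconstruction
        return z, r, p

def reconstruction_loss(z, r):
    # Simplified: ABAE proper uses a max-margin loss with negative samples.
    return (1 - F.cosine_similarity(r, z, dim=-1)).mean()

def seed_regularizer(model, seed_ids):
    """Pull each aspect embedding toward the centroid of its seed words.
    seed_ids: one LongTensor of seed-word vocab ids per aspect."""
    loss = 0.0
    for a, ids in enumerate(seed_ids):
        centroid = model.word_emb(ids).mean(dim=0)
        loss = loss + (1 - F.cosine_similarity(
            model.aspect_emb[a], centroid, dim=0))
    return loss / len(seed_ids)

def expmap0(v, c=1.0, eps=1e-6):
    """Exponential map at the origin of a Poincaré ball with curvature -c
    (standard formula from Ganea et al. 2018); how HDAE applies its
    curvature hyperparameter c may differ."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)
```

A hypothetical usage would embed segments with `AspectAutoencoder`, map the resulting vectors into the Poincaré ball via `expmap0`, and minimize the reconstruction loss plus a weighted seed regularizer; the weighting is one plausible role for the paper's λ, but that is an assumption.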
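Because no splits are reported in the main text (Dataset Splits row), a reproducer has to choose their own. Below is a minimal sketch assuming a conventional 80/10/10 split with scikit-learn; the fractions, seed, and toy data are placeholders, not the authors' protocol.

```python
from sklearn.model_selection import train_test_split

# Toy review segments standing in for OPOSUM / SemEval-2016 data.
segments = ["the battery life is great", "screen is too dim", "fast shipping"]

# Hypothetical 80/10/10 split: hold out 20%, then halve it into val/test.
train, tmp = train_test_split(segments, test_size=0.2, random_state=42)
val, test = train_test_split(tmp, test_size=0.5, random_state=42)
```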
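For the Experiment Setup row: the paper names β, c, τ, and λ and the Adam optimizer without giving values in the main text. The configuration below is a placeholder sketch of how such settings are typically wired up in PyTorch; every number, and every guess at a parameter's role, is an assumption.

```python
import torch
import torch.nn as nn

# Every value below is a placeholder: the paper reports no concrete settings
# for these hyperparameters in the main text.
config = {
    "lr": 1e-3,        # Adam learning rate (assumed)
    "batch_size": 32,  # assumed
    "epochs": 15,      # assumed
    "beta": 1.0,       # role not specified in the main text
    "c": 1.0,          # likely the Poincaré-ball curvature (assumption)
    "tau": 0.1,        # likely a softmax temperature (assumption)
    "lambda": 0.5,     # likely a regularization weight (assumption)
}

model = nn.Linear(100, 9)  # stand-in module so the snippet runs on its own
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
```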