Multi-type Disentanglement without Adversarial Training
Authors: Lei Sha, Thomas Lukasiewicz | Pages: 9515-9523
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on two datasets (Yelp service reviews and Amazon product reviews) to evaluate the style-disentangling effect and the unsupervised style-transfer performance on two style types: sentiment and tense. The experimental results show the effectiveness of our model. |
| Researcher Affiliation | Academia | Lei Sha, Thomas Lukasiewicz Department of Computer Science, University of Oxford, UK {lei.sha, thomas.lukasiewicz} @cs.ox.ac.uk |
| Pseudocode | No | The paper describes the model and training steps in text, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper provides a link for 'tense label files' (data), but there is no explicit statement or link to the source code for the methodology itself. |
| Open Datasets | Yes | We tested our method on two datasets: Yelp Service Reviews (Shen et al. 2017) and Amazon Product Reviews (Fu et al. 2018). |
| Dataset Splits | No | The paper mentions external classifier performance on a 'validation set' but does not specify the training/validation/test splits used for its own main experiments. |
| Hardware Specification | No | The acknowledgments mention 'Oxford's Advanced Research Computing (ARC) facility', 'EPSRC-funded Tier 2 facility JADE', and 'GPU computing support by Scan Computers International Ltd.', but do not provide specific details like GPU/CPU models or memory. |
| Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library or solver names with version numbers) required to replicate the experiment. |
| Experiment Setup | No | The paper mentions that 'λa, λc, λs, and λm in Eq. 15 are predefined hyperparameters' but does not provide their specific values or other detailed experimental setup parameters like learning rate, batch size, or optimizer settings. |