A Deep Modular RNN Approach for Ethos Mining
Authors: Rory Duthie, Katarzyna Budzynska
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Annotation of ethotic statements is reliable and its extraction is robust (macro-F1 = 0.83), while annotation of polarity is perfect and its extraction is solid (macro-F1 = 0.84). Table 4 provides +/-ESE results using machine learning classifiers, TF-IDF vectorization and a combination of lexicons (an illustrative TF-IDF baseline sketch appears after this table). Table 5 shows the results of the ESE/n-ESE classification with a combination of standard machine learning algorithms and RNN and CNN models compared against a baseline classifier using the training set distributions and against the ESE/n-ESE classification from [Duthie et al., 2016]. |
| Researcher Affiliation | Academia | (1) Centre for Argument Technology, University of Dundee; (2) Centre for Argument Technology, Institute of Philosophy and Sociology, Polish Academy of Sciences |
| Pseudocode | No | The paper describes a pipeline and a neural network architecture but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using the keras deep learning library (https://github.com/fchollet/keras) but does not state that their own implementation code for the DMRNN or ethos mining pipeline is open-source or publicly available. |
| Open Datasets | Yes | We construct our publicly available corpus, Ethos Hansard1 (http://arg.tech/Ethos_Hansard1, see Table 1) |
| Dataset Splits | Yes | We split our data into 60 training transcripts, of which we use 10% as validation data, and 30 test transcripts to give a wide range of test cases. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions using the keras deep learning library, scikit-learn, and the Stanford parser, but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Table 3 lists the hyper-parameter values for the DMRNN: embedding input length 400, embedding output dimension 128; hidden layer dimension 10; POL hidden layer 30; dropout rate 0.20; LSTM layer 128; max pooling size 4; Adam with α: 0.001, β1: 0.9, β2: 0.999 (a hedged Keras sketch of these settings appears after this table). |
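
To make the Table 4 setup concrete, the following is a minimal scikit-learn sketch of a TF-IDF-plus-classifier baseline of the kind the paper describes. The classifier choice, n-gram range, and the example sentences are illustrative assumptions, and the lexicon features mentioned in the paper are omitted; this is not the authors' exact configuration.

```python
# Sketch of a TF-IDF + standard classifier baseline for +/-ESE classification.
# All data and settings below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical sentences labelled -ESE (0) / +ESE (1).
train_texts = [
    "The honourable gentleman has misled the House",
    "My right honourable friend speaks with great integrity",
]
train_labels = [0, 1]
test_texts = ["The minister has been entirely honest with members"]
test_labels = [1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # TF-IDF vectorization, as the paper describes
    LogisticRegression(max_iter=1000),      # one possible "standard" classifier
)
model.fit(train_texts, train_labels)
preds = model.predict(test_texts)
print("macro-F1:", f1_score(test_labels, preds, average="macro"))
```
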
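The Table 3 hyper-parameters can likewise be illustrated with a short Keras sketch. This is a single-branch stand-in assuming a generic vocabulary size; the authors' DMRNN is modular (including a separate POL module with a 30-unit hidden layer), and its exact wiring is not reproduced here.

```python
# Single-branch sketch wiring together the Table 3 values: sequences padded to
# length 400, embedding output 128, dropout 0.20, max pooling size 4, LSTM 128,
# a 10-unit hidden layer, and Adam with alpha=0.001, beta1=0.9, beta2=0.999.
# Vocabulary size and the output head are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # assumed; not stated in this section
SEQ_LEN = 400        # embedding input length: 400

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),  # embedding output: 128
    layers.Dropout(0.20),                                     # dropout rate 0.20
    layers.MaxPooling1D(pool_size=4),                         # max pooling size 4
    layers.LSTM(128),                                         # LSTM layer 128
    layers.Dense(10, activation="relu"),                      # hidden layer dimension 10
    layers.Dense(1, activation="sigmoid"),                    # e.g. ESE vs. n-ESE decision
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```
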