Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods
Authors: Hao Yuan, Yongjun Chen, Xia Hu, Shuiwang Ji | pp. 5717–5724
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate the effectiveness of our approach, we evaluate our methods both quantitatively and qualitatively. We first introduce two datasets we are using and the setup of the experiments in detail. Next, we report the interpretation results for several sentence examples. Finally, we present the quantitative evaluations of our methods. |
| Researcher Affiliation | Academia | Hao Yuan Washington State University hao.yuan@wsu.edu Yongjun Chen Washington State University yongjun.chen@wsu.edu Xia Hu Texas A&M University hu@cse.tamu.edu Shuiwang Ji Texas A&M University sji@tamu.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | MR Dataset: The MR dataset contains movie review data for sentiment analysis. Each sample in the dataset is a one-sentence movie review and labeled with positive or negative. AG's News Dataset: The AG's News dataset is constructed from AG's corpus of news articles. |
| Dataset Splits | No | The paper provides the number of training and test examples, but does not explicitly state validation splits or percentages for any dataset. |
| Hardware Specification | Yes | We implement our approach using TensorFlow and conduct our experiments on one Tesla K80 GPU. |
| Software Dependencies | No | The paper mentions "TensorFlow" but does not provide a version number; likewise, "word2vec" and the "Adam optimizer" are referenced without library versions. |
| Experiment Setup | Yes | For the MR dataset, the regularization parameters are set as λ1 = 0.004 and λ2 = 0.02. For the AG's News dataset, we set λ1 = 0.002 and λ2 = 0.01. The learning rate in the optimization procedure is set to 2e-4 and we apply the Adam optimizer (Kingma and Ba 2014) with momentum parameters β1 = 0.9 and β2 = 0.999. |
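The reported optimizer settings (learning rate 2e-4, β1 = 0.9, β2 = 0.999) can be sketched as a minimal NumPy implementation of an Adam update step. This is a hedged illustration only: the quadratic loss below is a hypothetical stand-in, and the paper's actual interpretation objective with the λ1/λ2 regularizers is not reproduced here.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=2e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using the hyperparameters reported in the paper."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy demonstration on f(x) = x^2 starting from x = 1 (hypothetical example).
x, m, v = np.array(1.0), 0.0, 0.0
for t in range(1, 2001):
    grad = 2 * x                                # df/dx
    x, m, v = adam_step(x, grad, m, v, t)
```

Note that β1 = 0.9 and β2 = 0.999 are also the defaults in common framework implementations (e.g. `tf.keras.optimizers.Adam`), so the paper's choice matches standard practice.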