Adaptive Region Embedding for Text Classification
Authors: Liuyu Xiang, Xiaoming Jin, Lan Yi, Guiguang Ding
AAAI 2019, pp. 7314–7321
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluate our method on 8 benchmark datasets for text classification. The experimental results prove that our method achieves state-of-the-art performances and effectively avoids word ambiguity. |
| Researcher Affiliation | Collaboration | Liuyu Xiang (1), Xiaoming Jin (1), Lan Yi (2), Guiguang Ding (1). (1) School of Software, Tsinghua University, Beijing, China; (2) Department of Dev Net, Cisco Systems. Emails: xiangly17@mails.tsinghua.edu.cn, xmjin@tsinghua.edu.cn, layi@cisco.com, dinggg@tsinghua.edu.cn |
| Pseudocode | No | The paper describes the method using equations and text, but no explicit pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We report results on 8 benchmark datasets for large-scale text classification. These datasets are from (Zhang, Zhao, and LeCun 2015) and the tasks involve topic classification, sentiment analysis, and ontology extraction. The details of the datasets can be found in Table 2. |
| Dataset Splits | No | Table 2 provides 'Train Size' and 'Test Size' for each dataset, but there is no explicit mention of a separate validation split or how validation was performed. |
| Hardware Specification | No | The paper does not mention any specific hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions the 'Stanford tokenizer', the 'Adam optimizer (Kingma and Ba 2014)', and a 'Batch normalization layer (Ioffe and Szegedy 2015)', but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | Hyperparameters: We tune the region size (2c + 1) to be 9 and the embedding size to be 256. ... We choose the batch size to be 16 and the learning rate to be 1×10⁻⁴ with the Adam optimizer (Kingma and Ba 2014); no regularization method is used. |
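Since the paper releases no code, a reproduction must reconstruct the training configuration from the reported hyperparameters alone. The sketch below collects them in one place; the variable names and the `config` dict are our own conventions, not the authors', and the half-window `c` is inferred from the stated region size 2c + 1 = 9.

```python
# Hyperparameters as reported in the "Experiment Setup" row above.
# This is a reproduction aid, not the authors' implementation.
c = 4                       # half-window; region size = 2c + 1 = 9
region_size = 2 * c + 1
embedding_size = 256
batch_size = 16
learning_rate = 1e-4        # Adam optimizer (Kingma and Ba 2014)

config = {
    "region_size": region_size,
    "embedding_size": embedding_size,
    "batch_size": batch_size,
    "learning_rate": learning_rate,
    "optimizer": "Adam",
    "regularization": None,  # the paper states no regularization is used
}
print(config)
```

Note that the paper reports no validation split, so anything beyond these values (e.g. number of epochs, early-stopping criterion) would have to be chosen by the reproducer.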