Recurrent Attentional Topic Model
Authors: Shuangyin Li, Yu Zhang, Rong Pan, Mingzhi Mao, Yang Yang
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on two corpora show that our model outperforms state-of-the-art methods on document modeling and classification. |
| Researcher Affiliation | Collaboration | Shuangyin Li (shuangyinli@cse.ust.hk) and Yu Zhang (zhangyu@cse.ust.hk), Department of Computer Science and Engineering, Hong Kong University of Science and Technology, China; Rong Pan (panr@sysu.edu.cn) and Mingzhi Mao (mcsmmz@mail.sysu.edu.cn), School of Data and Computer Science, Sun Yat-sen University, China; Yang Yang (yangyang@ipin.com), iPIN, Shenzhen, China |
| Pseudocode | No | The paper describes a 'generative process' for RABP and RATM, but it is presented in prose and numbered steps, not as structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | No | The paper uses 'a subset of the Wikipedia', a well-known public resource. For the New York Times (NYTimes) corpus, however, it states only that it obtained 'news articles from New York Times (NYTimes) from January 1st, 2016 to May 8th, 2016', without providing a specific link, DOI, repository name, or formal citation that would confirm the public availability of this particular collection. |
| Dataset Splits | Yes | In each corpus, 80% of the documents are used for training and the rest for testing: 20,000 training and 4,000 test documents for the Wikipedia corpus, and 22,000 training and 5,523 test documents for the NYTimes corpus. Classification results on the Wikipedia corpus are reported with 5-fold cross-validation. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'LIBSVM' but does not provide specific version numbers for it or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | No | The paper does not contain specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size) or detailed training configurations for the proposed model. |
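The dataset-split row above can be sketched in code. This is a hypothetical illustration of the reported 80/20 partition, not the authors' actual procedure: the `split_corpus` helper, the fixed seed, and the shuffle step are all assumptions, since the paper does not describe how documents were assigned to splits.

```python
import random

def split_corpus(doc_ids, n_train, seed=0):
    """Shuffle document ids deterministically and split into train/test lists.

    Hypothetical helper: the paper reports only the split sizes, not the
    selection method, so the shuffling and seed here are assumptions.
    """
    ids = list(doc_ids)
    random.Random(seed).shuffle(ids)
    return ids[:n_train], ids[n_train:]

# Wikipedia corpus: 24,000 documents -> 20,000 train / 4,000 test (as reported)
wiki_train, wiki_test = split_corpus(range(24000), n_train=20000)

# NYTimes corpus: 27,523 documents -> 22,000 train / 5,523 test (as reported)
nyt_train, nyt_test = split_corpus(range(27523), n_train=22000)
```

Splitting by explicit count rather than by a fraction reproduces the paper's reported sizes exactly and avoids rounding ambiguity.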