Towards Training Probabilistic Topic Models on Neuromorphic Multi-Chip Systems
Authors: Zihao Xiao, Jianfei Chen, Jun Zhu
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results show that our online SNN algorithms are comparable with existing GPC algorithms while they have the advantage of being suitable for NMS implementation. In the experiment, we assess (1) the generalization performance and (2) the discriminative power of the proposed online SNN algorithms. [...] The datasets are KOS, Enron, NIPS, 20NG and Pubmed. |
| Researcher Affiliation | Academia | Zihao Xiao, Jianfei Chen, Jun Zhu Dept. of Comp. Sci. & Tech., TNList Lab, State Key Lab for Intell. Tech. & Sys. Center for Bio-Inspired Computing Research, Tsinghua University, Beijing, 100084, China {xiaozh15, chenjian14}@mails.tsinghua.edu.cn, dcszj@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 (Sample the topic assignment ẑ for a token (w, d)); Algorithm 2 (Spike CGS, where τ1(x) = log(exp x − 1) and τ2(x) = log(exp x + 1)); Algorithm 3 (The online ed-Spike LDA algorithm) |
| Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in the paper was found. |
| Open Datasets | No | The datasets are KOS, Enron, NIPS, 20NG and Pubmed. The NIPS results and statistics of the datasets are summarized in the appendix. No specific links, DOIs, repositories, or formal citations for the datasets themselves are provided in the main paper or its visible references. |
| Dataset Splits | No | No specific dataset split information (e.g., exact percentages, sample counts, or detailed splitting methodology) needed to reproduce the data partitioning was provided. The paper mentions "to evaluate the generalization performance, we use fold-in method to calculating perplexity following (Asuncion et al. 2009)." |
| Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running the experiments were provided. The paper states "All algorithms are simulated on GPCs." |
| Software Dependencies | No | The paper mentions "We use the LIBLINEAR tool-kit (Fan et al. 2008)" but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | We set λ = 0.05 for CGS and λ = 1.05 for ed-Spike LDA. The latent representations of the training documents are used as features to build a binary/multi-class SVM classifier. As in (Zhu, Ahmed, and Xing 2012), the binary classification is to distinguish groups alt.atheism and talk.religion.misc. We use the LIBLINEAR tool-kit (Fan et al. 2008) and choose the L2-regularized L1-loss with C = 1 to build the SVM. All results are averaged from 3 different runs of the algorithms. |
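The classification protocol quoted in the Experiment Setup row (latent document representations fed to an L2-regularized, L1-loss SVM with C = 1 built with LIBLINEAR) can be sketched as below. This is a minimal illustration, not the authors' code: it uses scikit-learn's `LinearSVC`, which wraps LIBLINEAR, and random Dirichlet-distributed "topic proportions" as stand-ins for the paper's LDA features; the dimensions, seed, and label rule are invented for the demo.

```python
# Hedged sketch of the quoted SVM step: topic-proportion features,
# L2 penalty, L1 (hinge) loss, C = 1, via the LIBLINEAR-backed LinearSVC.
# The features and labels below are synthetic placeholders, NOT the
# paper's data or latent representations.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_docs, n_topics = 200, 50

# Fake per-document topic proportions (each row sums to 1).
theta = rng.dirichlet(np.ones(n_topics), size=n_docs)
# Arbitrary binary labels derived from the first topic's weight.
labels = (theta[:, 0] > np.median(theta[:, 0])).astype(int)

# loss="hinge" is LIBLINEAR's L1-loss; penalty="l2" and C=1.0 match
# the setup quoted above. dual=True is required for the hinge loss.
clf = LinearSVC(penalty="l2", loss="hinge", C=1.0, dual=True, max_iter=10000)
clf.fit(theta, labels)
train_acc = clf.score(theta, labels)
```

For multi-class 20NG experiments, `LinearSVC` applies a one-vs-rest scheme over the same solver, so the identical configuration covers both the binary and multi-class cases described in the row.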