Jointly Modeling Topics and Intents with Global Order Structure
Authors: Bei Chen, Jun Zhu, Nan Yang, Tian Tian, Ming Zhou, Bo Zhang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments in both unsupervised and supervised settings, and the results show the superiority of our model over several state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | Dept. of Comp. Sci. & Tech., State Key Lab of Intell. Tech. & Sys., Center for Bio-Inspired Computing Research, TNList, Tsinghua University, Beijing, 100084, China; Microsoft Research Asia, Beijing, 100080, China |
| Pseudocode | Yes | As shown in Fig. 3, we present an approximate three-step algorithm to obtain the canonical permutation π0. Step 1: We compute πd for each labeled document sd... Step 2: We introduce variables gij (i, j ∈ [K])... Step 3: We obtain π0 by calculating the topological sequence of G. (Steps 2 and 3 are sketched in runnable form below the table.) |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use two real datasets: 1) Chemical (Guo et al. 2010): It contains 965 abstracts of scientific papers...; and 2) Elements (Chen et al. 2009): It consists of 118 articles from the English Wikipedia... |
| Dataset Splits | No | For supervised classification, the paper states 'we randomly choose 20% documents; annotate their sentences with intent labels; and use them for training. Our goal is to learn the intent labels for the sentences in the remaining 80% documents.' However, it does not explicitly mention a separate validation set or split. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'SVMLight tools (Joachims 1998)' but does not provide specific version numbers for it or any other software dependencies. |
| Experiment Setup | Yes | For hyperparameters, we set θ0 = 0.1, λ0 = 0.1, α0 = 0.1, β0 = 0.1 and γ0 = 1... ν0 is set to be 0.1 times the number of documents in the corpus. For EGMM-LDA, we set the regularization parameter c to be 0.1. We set ρ0 = 2 for all the experiments except for Elements with K = 10, in which we set ρ0 = 1. (These values are collected in the config sketch below the table.) |
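The three-step algorithm quoted in the Pseudocode row reduces to a majority-vote precedence graph followed by a topological sort. Below is a minimal Python sketch of Steps 2 and 3, assuming Step 1 has already produced a full intent permutation πd (over K intents) for each labeled document; the function name, the use of Kahn's algorithm, and the cycle fallback are illustrative assumptions, not the paper's exact procedure.

```python
from collections import defaultdict, deque

def canonical_permutation(doc_perms, K):
    """Approximate the canonical permutation pi_0 (Steps 2-3 of the
    quoted algorithm). `doc_perms` holds one full permutation pi_d of
    range(K) per labeled document; Step 1 (inferring pi_d) is assumed done.
    """
    # Step 2: g[i][j] counts documents in which intent i precedes intent j.
    g = [[0] * K for _ in range(K)]
    for perm in doc_perms:
        pos = {intent: p for p, intent in enumerate(perm)}
        for i in range(K):
            for j in range(K):
                if i != j and pos[i] < pos[j]:
                    g[i][j] += 1

    # Build a directed graph G with an edge i -> j when i precedes j in
    # strictly more documents than the reverse (majority vote; ties drop out).
    succ = defaultdict(set)
    indeg = [0] * K
    for i in range(K):
        for j in range(K):
            if i != j and g[i][j] > g[j][i]:
                succ[i].add(j)
                indeg[j] += 1

    # Step 3: pi_0 is a topological ordering of G (Kahn's algorithm).
    queue = deque(k for k in range(K) if indeg[k] == 0)
    pi0 = []
    while queue:
        u = queue.popleft()
        pi0.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(pi0) != K:
        # Majority preferences can be cyclic; the paper does not say how
        # such ties are broken, so we simply signal the condition here.
        raise ValueError("precedence votes are cyclic; no topological order")
    return pi0

# Example: K = 4 intents, three per-document permutations -> [0, 1, 2, 3].
print(canonical_permutation([[0, 1, 2, 3], [0, 2, 1, 3], [0, 1, 3, 2]], K=4))
```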
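The 20%/80% protocol in the Dataset Splits row is a plain random split over documents. A minimal sketch, assuming the corpus is a Python list of documents; the function name and the fixed seed are hypothetical (the paper does not describe how the random selection was made reproducible).

```python
import random

def split_for_supervision(documents, labeled_frac=0.2, seed=0):
    """Randomly pick 20% of documents to annotate and train on, as in the
    paper's supervised setting; the remaining 80% form the evaluation set.
    No separate validation split is described in the paper.
    """
    rng = random.Random(seed)  # hypothetical seed for reproducibility
    docs = list(documents)
    rng.shuffle(docs)
    n_labeled = int(len(docs) * labeled_frac)
    return docs[:n_labeled], docs[n_labeled:]

# Example with the Elements corpus size reported in the paper (118 articles).
train, test = split_for_supervision(list(range(118)))
print(len(train), len(test))  # 23 95
```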
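The hyperparameter values in the Experiment Setup row can be gathered into one configuration helper. This is a sketch only: the function and key names are illustrative, while the numeric values are the ones the paper reports.

```python
def paper_hyperparams(n_docs, dataset, K):
    """Hyperparameters reported in the paper's experiment setup.
    `n_docs` is the corpus size (nu0 is 0.1 times the number of documents).
    """
    return {
        "theta0": 0.1,
        "lambda0": 0.1,
        "alpha0": 0.1,
        "beta0": 0.1,
        "gamma0": 1.0,
        "nu0": 0.1 * n_docs,
        "c": 0.1,  # regularization parameter for EGMM-LDA
        # rho0 = 2 everywhere except Elements with K = 10, where rho0 = 1.
        "rho0": 1 if (dataset == "Elements" and K == 10) else 2,
    }

# Example: Elements (118 articles) with K = 10.
print(paper_hyperparams(118, "Elements", K=10))
```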