Dynamic Rank Factor Model for Text Streams
Authors: Shaobo Han, Lin Du, Esther Salazar, Lawrence Carin
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "The modeling framework is illustrated on two real datasets: the US State of the Union Address and the JSTOR collection from Science." It also contains a dedicated "4 Experiments" section. |
| Researcher Affiliation | Academia | Duke University, Durham, NC 27708 {shaobo.han, lin.du, esther.salazar, lcarin}@duke.edu |
| Pseudocode | No | The paper describes the sampling steps for the Gibbs sampler and FFBS algorithm in text, but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its own source code or a link to a code repository for the methodology described. |
| Open Datasets | No | The paper mentions using 'the US State of the Union Address' and 'the JSTOR collection from Science' datasets but does not provide concrete access information (e.g., specific links, DOIs, or formal citations for the datasets themselves). |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper describes algorithms and statistical methods but does not specify any software libraries or dependencies with version numbers used for implementation. |
| Experiment Setup | Yes | Hyperparameters are reported: a = b = e = f = 0.5, d = P, h = 1, σ_s² = 1; for weakly informative priors, α = β = 0.01, µ₀ = 0.5, σ₀² = 10. The sampler handles about 2700 documents per iteration (subsampling rate 2%), and K = 25 topics are learned in one experiment and K = 50 in the other. |
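The paper describes its Gibbs sampler and FFBS (forward filtering, backward sampling) steps only in prose, with no pseudocode or released code. As a rough illustration of what an FFBS step looks like, the sketch below implements the standard FFBS recursion for a scalar linear-Gaussian state-space model. This is a generic textbook version, not the authors' model: the parameter names (`a`, `c`, `q`, `r`, `m0`, `P0`) and the scalar setting are assumptions for clarity.

```python
import numpy as np

def ffbs(y, a, c, q, r, m0=0.0, P0=1.0, rng=None):
    """One joint posterior draw of the state path x_{1:T} for the
    scalar linear-Gaussian state-space model
        x_t = a*x_{t-1} + N(0, q),    y_t = c*x_t + N(0, r).
    Generic FFBS sketch; not the paper's exact sampler."""
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    m = np.empty(T)  # filtered means  E[x_t | y_{1:t}]
    P = np.empty(T)  # filtered variances

    # --- forward Kalman filter ---
    for t in range(T):
        # one-step predictive moments
        mp = a * (m[t - 1] if t > 0 else m0)
        Pp = a * a * (P[t - 1] if t > 0 else P0) + q
        K = Pp * c / (c * c * Pp + r)        # Kalman gain
        m[t] = mp + K * (y[t] - c * mp)
        P[t] = (1.0 - K * c) * Pp

    # --- backward sampling ---
    x = np.empty(T)
    x[-1] = rng.normal(m[-1], np.sqrt(P[-1]))
    for t in range(T - 2, -1, -1):
        G = P[t] * a / (a * a * P[t] + q)    # backward (smoothing) gain
        mean = m[t] + G * (x[t + 1] - a * m[t])
        var = P[t] - G * a * P[t]
        x[t] = rng.normal(mean, np.sqrt(var))
    return x
```

Within a Gibbs sampler such as the paper's, a call like this would redraw the latent trajectory conditional on the current values of the other parameters; the forward pass filters once, and the backward pass samples the states jointly rather than one at a time.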