Neural Dynamic Focused Topic Model

Authors: Kostadin Cvejoski, Ramsés J. Sánchez, César Ojeda

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our model on three different datasets (the UN general debates, the collection of NEURIPS papers, and the ACL Anthology dataset) and show that it (i) outperforms state-of-the-art topic models in generalization tasks and (ii) performs comparably to them on prediction tasks, while employing roughly the same number of parameters, and converging about two times faster."
Researcher Affiliation | Collaboration | Kostadin Cvejoski (1, 2), Ramsés J. Sánchez (1, 4), César Ojeda (3). 1: Lamarr-Institute for Machine Learning and Artificial Intelligence; 2: Fraunhofer-Institute for Intelligent Analysis and Information Systems (IAIS); 3: University of Potsdam; 4: BIT, University of Bonn.
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper.
Open Source Code | Yes | Source code: https://github.com/cvejoski/Neural-Dynamic-Focused-Topic-Model
Open Datasets | Yes | "We evaluate our model on three datasets, namely the collection of UN speeches, NEURIPS papers and the ACL Anthology." The UN dataset (Baturo, Dasandi, and Mikhaylov 2017) is available at https://www.kaggle.com/unitednations/un-general-debates, the NEURIPS dataset at https://www.kaggle.com/benhamner/nips-papers, and the ACL Anthology dataset is from Bird et al. (2008). A loading sketch follows the table below.
Dataset Splits | No | The paper does not specify explicit train/validation/test split sizes or splitting procedures for the datasets.
Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or cloud instance specifications) used for running the experiments are provided in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) are stated in the paper. The paper refers readers to the supplementary material for setup details, which are not part of the main text.
Experiment Setup | Yes | "Specifically, K was chosen from the set {50, 100, 200}. We found 50 to be the best value for all models, i.e. including the baselines. Similarly, α0 was chosen from the set {0.1, 0.5, 1.0}." A sketch of this sweep follows the table below.
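
The datasets referenced above are public Kaggle dumps. Below is a minimal Python loading sketch for the UN General Debates corpus; the CSV file name ("un-general-debates.csv") and the column names ("year", "text") are assumptions about the Kaggle release, not details taken from the paper.

    # Minimal loading sketch for the UN General Debates corpus (Kaggle dump
    # cited in the paper). File and column names are assumptions about that
    # dump, not details confirmed by the paper itself.
    import pandas as pd

    df = pd.read_csv("un-general-debates.csv")  # hypothetical local copy of the Kaggle CSV
    docs_by_year = df.groupby("year")["text"].apply(list)  # one list of documents per year
    print(len(df), "speeches spanning", df["year"].min(), "-", df["year"].max())

Grouping documents by year matches the dynamic topic-model setting, where the corpus is organized into time slices.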
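The reported experiment setup amounts to a small grid search over K and α0. The sketch below reproduces that grid; train_and_evaluate is a hypothetical placeholder for the actual training loop, which the paper does not spell out in the main text.

    from itertools import product

    # Grid reported in the paper: K in {50, 100, 200}, alpha_0 in {0.1, 0.5, 1.0}.
    # The paper found K = 50 best for all models, including the baselines.
    K_GRID = [50, 100, 200]
    ALPHA0_GRID = [0.1, 0.5, 1.0]

    def train_and_evaluate(k, alpha_0):
        # Hypothetical stand-in for the real training loop; should return a
        # held-out score such as perplexity (lower is better). Returns a
        # dummy constant here so the sketch runs end to end.
        return 0.0

    best_k, best_alpha0 = min(product(K_GRID, ALPHA0_GRID),
                              key=lambda cfg: train_and_evaluate(*cfg))
    print("selected K:", best_k, "alpha_0:", best_alpha0)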