Time-evolving Text Classification with Deep Neural Networks

Authors: Yu He, Jianxin Li, Yangqiu Song, Mutian He, Hao Peng

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on real-world news data demonstrate that our approaches greatly and consistently outperform traditional neural network models in both accuracy and stability.
Researcher Affiliation | Academia | (1) Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; (2) State Key Laboratory of Software Development Environment, Beihang University, China; (3) Department of Computer Science and Engineering, HKUST, Hong Kong
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/RingBDStack/Time-evolving-Classification.
Open Datasets | Yes | NYTimes 1997.01-2006.12 627,915 629 26; RCV1 1996.09-1997.08 403,143 240 12
Dataset Splits | No | The paper mentions using the NYTimes and RCV1 datasets for evaluation but does not explicitly state the training, validation, and test splits (e.g., percentages or counts), nor does it cite predefined splits for these datasets.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, or memory) used to run the experiments.
Software Dependencies | No | The paper mentions using word2vec without specifying its version and describes general training parameters, but does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | For training neural network models, we use minibatch stochastic gradient descent (SGD) optimizer to minimize the corresponding objective functions, and the common parameters are empirically set, such as batch size as 128 and moving average as 0.999, etc.
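
The Experiment Setup row quotes the only optimizer details the paper gives: minibatch SGD, batch size 128, and a moving average of 0.999. Below is a minimal sketch of such a configuration, assuming PyTorch (the paper does not name its framework) and interpreting the moving average as an exponential moving average of the model weights; the toy model, data, and learning rate are illustrative placeholders, not taken from the authors' code.

# Hedged sketch of the reported training setup: minibatch SGD (batch size 128)
# with an exponential moving average (decay 0.999) kept over the model weights.
# Framework choice, architecture, learning rate, and data are assumptions.
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the text classifier; the real architecture is described in the paper, not here.
model = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 26))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# Shadow copy whose weights track an exponential moving average with decay 0.999.
ema_model = copy.deepcopy(model)
EMA_DECAY = 0.999

def update_ema(ema, live, decay):
    with torch.no_grad():
        for ema_p, p in zip(ema.parameters(), live.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1.0 - decay)

# Random placeholder features/labels standing in for the document representations.
features = torch.randn(1024, 300)
labels = torch.randint(0, 26, (1024,))
loader = DataLoader(TensorDataset(features, labels), batch_size=128, shuffle=True)

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        update_ema(ema_model, model, EMA_DECAY)  # evaluate with ema_model to use the averaged weights

The same effect could be obtained with torch.optim.swa_utils.AveragedModel; the explicit update is written out only to make the 0.999 decay visible.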