Modeling Scientific Influence for Research Trending Topic Prediction
Authors: Chengyao Chen, Zhitao Wang, Wenjie Li, Xu Sun
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments conducted on a scientific dataset including conferences in artificial intelligence and data mining show that our model consistently outperforms the other state-of-the-art methods. |
| Researcher Affiliation | Academia | ¹Department of Computing, The Hong Kong Polytechnic University, Hong Kong; ²MOE Key Laboratory of Computational Linguistics, Peking University, China; ³School of Electronics Engineering and Computer Science, Peking University, China |
| Pseudocode | No | The paper describes the model using mathematical equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We obtain the paper information of the above-mentioned conferences from a DBLP dataset published by (Tang et al. 2008) and updated in 2016. |
| Dataset Splits | Yes | The first 70% data is used for training, the following 15% data for the validation and the remaining 15% for testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU/CPU models or other system specifications. |
| Software Dependencies | No | The paper mentions the use of GRU, continuous bag-of-words architecture, and the Adam algorithm, but it does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | The word representations with the dimensionality of 50 are trained... We also set the dimension of hidden state as 50 for CONI, CONI I and CONI V. |
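The dataset-split row above describes a chronological 70%/15%/15% partition. A minimal sketch of such a split is below; the function name `split_chronological` is illustrative and not from the paper, which does not release code.

```python
# Hedged sketch of the chronological 70/15/15 split described in the table.
# The helper name and signature are assumptions for illustration only.

def split_chronological(records, train_frac=0.70, val_frac=0.15):
    """Split time-ordered records into train/validation/test sets.

    The paper uses the first 70% of the data for training, the next 15%
    for validation, and the remaining 15% for testing, so the split must
    preserve temporal order (no shuffling).
    """
    n = len(records)
    train_end = int(n * train_frac)
    val_end = train_end + int(n * val_frac)
    return records[:train_end], records[train_end:val_end], records[val_end:]

# Example: 20 time-ordered snapshots -> 14 train, 3 validation, 3 test
train, val, test = split_chronological(list(range(20)))
```

Keeping the split chronological (rather than random) matters for trend prediction, since the model must forecast future topics from strictly earlier data.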