Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Crowdfunding Dynamics Tracking: A Reinforcement Learning Approach
Authors: Jun Wang, Hefu Zhang, Qi Liu, Zhen Pan, Hanqing Tao
AAAI 2020, pp. 6210–6218 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments on the Indiegogo dataset not only demonstrate the effectiveness of our methods, but also validate our assumption that the entire pattern learned by TC3-Options is indeed the U-shaped one. |
| Researcher Affiliation | Academia | Jun Wang,¹ Hefu Zhang,¹ Qi Liu,¹ Zhen Pan,¹ Hanqing Tao¹ — ¹Anhui Province Key Laboratory of Big Data Analysis and Application, School of Computer Science and Technology, University of Science and Technology of China. EMAIL, EMAIL |
| Pseudocode | No | The paper describes the algorithms and models using mathematical formulations and descriptive text, but it does not include any explicit pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not provide any statement regarding the public availability of its source code, nor does it include links to a code repository or mention code in supplementary materials. |
| Open Datasets | No | The paper states, 'We collect a real-world dataset from Indiegogo,' but it does not provide a specific link, DOI, or formal citation to make this dataset publicly accessible. It describes the dataset's characteristics but offers no concrete access information. |
| Dataset Splits | No | The paper states, 'First we randomly select 10% of all campaigns in our dataset as the testing set.' However, it does not specify percentages or counts for a training/validation split; only the test set is detailed. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for experiments, such as CPU or GPU models, memory, or cloud computing resources. |
| Software Dependencies | No | The paper mentions techniques like GRU layer, word2vec embedding, and specific frameworks like actor-critic, but it does not list any specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x). |
| Experiment Setup | Yes | Parameter Setting. For the static features of campaigns, we adopt one-hot encoding for categorical features and word2vec embedding (Mikolov et al. 2013) for textual features (each with a 50-dimensional vector). Finally, all kinds of static features are concatenated into 182-dimensional vectors. The dynamic features are 19-dimensional vectors composed of textual comments (16-dimensional by word2vec embedding), day information (2-dimensional), and current funding progress. Specifically, all features are scaled by Min-Max normalization. Additionally, since the shortest funding duration of campaigns in our training and testing data is 15, we set the length of days to be predicted to 6, 7, 8, 9, and 10, respectively. For the coefficients of the regularization terms, we set λ1, λ2, and λ3 to 100, 1, and 1, respectively. |
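The preprocessing described in the Experiment Setup row (Min-Max normalization, concatenation of static feature groups, and the 10% random test split from the Dataset Splits row) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names (`min_max_scale`, `build_static_feature`, `split_test`) are hypothetical, and the word2vec embeddings are assumed to be precomputed elsewhere.

```python
import random

def min_max_scale(values):
    """Min-Max normalization of one feature column to [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against constant features
    return [(v - lo) / span for v in values]

def build_static_feature(onehot_categorical, word2vec_text_50d, other_static):
    """Concatenate static feature groups into a single vector.

    The paper reports a 182-dimensional result; the exact group sizes
    beyond the 50-d text embedding are not specified, so they are left
    as arguments here.
    """
    return list(onehot_categorical) + list(word2vec_text_50d) + list(other_static)

def split_test(campaign_ids, test_frac=0.10, seed=0):
    """Randomly hold out `test_frac` of campaigns as the testing set,
    mirroring 'randomly select 10% of all campaigns ... as the testing set'."""
    ids = list(campaign_ids)
    random.Random(seed).shuffle(ids)
    n_test = round(test_frac * len(ids))
    return ids[n_test:], ids[:n_test]  # remaining pool, test set
```

Note that the paper does not describe a validation split, so `split_test` only separates the test set; how the remaining 90% is used for training is left unspecified, consistent with the "No" verdict on Dataset Splits above.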