Contrastive Training for Models of Information Cascades
Authors: Shaobin Xu, David Smith
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then report experiments on the ICWSM Spinn3r dataset, where we can observe the true hyperlink structure for evaluation and compare the proposed method to MultiTree (Rodriguez and Schölkopf 2012) and InfoPath (Rodriguez et al. 2014) and some simple, but effective, baselines. |
| Researcher Affiliation | Academia | Shaobin Xu, David A. Smith; College of Computer and Information Science, Northeastern University, 440 Huntington Avenue, Boston, MA 02115; {shaobinx,dasmith}@ccs.neu.edu |
| Pseudocode | No | The paper provides mathematical derivations and descriptions of the model and its inference, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | Yes | In this paper, therefore, we train and test on documents from the ICWSM 2011 Spinn3r dataset (Burton, Kasch, and Soboroff 2011). |
| Dataset Splits | No | The paper mentions training and testing and performs 10-fold cross-validation for its supervised learning experiments. However, it does not explicitly describe a separate validation split (e.g., as percentages or counts) for hyperparameter tuning in enough detail to reproduce one. (A hedged sketch of such a cross-validation setup follows the table.) |
| Hardware Specification | No | The paper states that Apache Spark was used for parallelizing computations but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of 'Apache Spark' and 'L-BFGS' but does not specify version numbers for these or any other software dependencies, making the software environment difficult to reproduce exactly. |
| Experiment Setup | Yes | We choose batch gradient descent with a fixed learning rate 5 × 10⁻³ and report the result after 1,500 iterations. (A hedged sketch of this optimization setup follows the table.) |
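
The Dataset Splits row notes that the paper uses 10-fold cross-validation without spelling out a reproducible protocol. Below is a minimal sketch of what such a setup might look like, assuming scikit-learn's `KFold` and placeholder features, labels, and classifier; none of these specifics come from the paper.

```python
# Hypothetical 10-fold cross-validation setup. The feature matrix X, labels y,
# and the logistic-regression classifier are placeholders: the paper does not
# specify fold seeds, features, or a held-out validation split, which is the
# reproducibility gap noted in the table above.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 20))            # hypothetical document-pair features
y = rng.integers(0, 2, 1000)          # hypothetical link/no-link labels

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"10-fold accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```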
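
The Experiment Setup row quotes the paper's optimizer configuration: full-batch gradient descent with a fixed learning rate of 5 × 10⁻³, stopped after 1,500 iterations. Here is a minimal sketch of that loop using a generic logistic objective as a stand-in; the paper's actual contrastive cascade likelihood is not reproduced here.

```python
# Sketch of the reported optimization setup: full-batch gradient descent,
# fixed learning rate 5e-3, 1,500 iterations. The logistic loss below is a
# placeholder objective, not the paper's contrastive cascade likelihood.
import numpy as np

def loss_and_grad(w, X, y):
    """Logistic loss and gradient over the full batch (placeholder objective)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                   # hypothetical features
y = (X @ rng.normal(size=20) > 0).astype(float)   # hypothetical labels

w = np.zeros(20)
lr = 5e-3                      # fixed learning rate, as reported
for _ in range(1500):          # 1,500 iterations, as reported
    loss, grad = loss_and_grad(w, X, y)
    w -= lr * grad

print(f"final loss after 1,500 iterations: {loss:.4f}")
```

With a fixed learning rate and a fixed iteration budget rather than a convergence criterion, the run is deterministic given the data, which is presumably why the paper can report the result at exactly 1,500 iterations.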