Learning Temporal Dynamics of Behavior Propagation in Social Networks
Authors: Jun Zhang, Chaokun Wang, Jianmin Wang
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on real-world datasets to evaluate the performance of our proposed CIBP model and the whole behavior propagation model family. Results show that the CIBP outperforms both the state-of-the-art static and dynamic models, and can improve the performance of behavior prediction significantly. |
| Researcher Affiliation | Academia | Jun Zhang Department of Computer Science and Technology Tsinghua University Beijing 100084, P. R. China Chaokun Wang and Jianmin Wang School of Software Tsinghua University Beijing 100084, P. R. China |
| Pseudocode | No | The paper describes the Expectation-Maximization (EM) algorithm and its steps for model inference but does not present it in a pseudocode block or algorithm-like format. |
| Open Source Code | No | The paper does not contain any explicit statement about the availability of source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We construct 5 real-world datasets from real-world academic collaborative social networks, including sci-comp (Science Computation), comp-edu (Computer Education), simu (Simulation) and sec-priv (Security & Privacy) from Microsoft Academic Search, and comp-ling (Computational Linguistics) from the ACL Anthology Network. |
| Dataset Splits | No | The paper states, "For each dataset, we collected the data in 1981–2000 for model training, and the next 5 years were taken for testing." It specifies a temporal train/test split but does not mention a distinct validation set, its size, or any specific percentages for a validation split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models, memory specifications, or server types. |
| Software Dependencies | No | The paper refers to algorithms and methods (e.g., EM algorithm, linear basis function model, gradient descent, Newton methods) but does not list any specific software libraries, packages, or programming languages with their version numbers that were used for implementation or experimentation. |
| Experiment Setup | Yes | For each dataset, we collected the data in 1981–2000 for model training, and the next 5 years were taken for testing. We found our data quite sparse, and thus split the data by year for discrete models. To be fair, one year is taken as a basic time unit for continuous models. For each user at each year, only the items that had ever been adopted by her friends are considered for training and testing, because our focus is the direct behavior propagation. The items adopted by the user are positive instances and the others are negative. Here we only predict the occurrence of each behavior and don't consider the number of occurrences. We evaluate their prediction performance using MAP (Mean Average Precision) and AUC (Area Under the ROC Curve). In this study we consider 6 types of popular basis functions, including the linear, polynomial, quadratic, Gaussian, sigmoid and exponential functions. When J = 0, our CIBP degenerates to the static IBP and performs worst. Increasing J improves the performance at first and achieves the peak at J = 3. |
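The evaluation protocol above ranks candidate items per user and scores the rankings with MAP and AUC. As a minimal sketch of how such a reproduction could compute both metrics (the paper's own evaluation code is not available; these are the standard textbook definitions, not the authors' implementation):

```python
def average_precision(labels, scores):
    """AP for one ranked list: labels are 0/1 adoption indicators,
    scores are model-predicted adoption probabilities."""
    ranked = [l for _, l in sorted(zip(scores, labels), key=lambda p: -p[0])]
    hits, precisions = 0, []
    for rank, label in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)  # precision at each hit
    return sum(precisions) / len(precisions) if precisions else 0.0

def auc(labels, scores):
    """AUC via the Mann-Whitney rank-sum formulation: the fraction of
    positive/negative pairs the model orders correctly (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    if not pos or not neg:
        return 0.0
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

MAP is then the mean of `average_precision` over all test users, matching the per-user ranking setup described in the table.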
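The paper names six basis-function families (linear, polynomial, quadratic, Gaussian, sigmoid, exponential) for modeling the temporal decay of propagation influence, but this report does not reproduce their exact parameterizations. A hedged sketch of what such a basis expansion over elapsed time could look like, with all parameter values (polynomial degree, Gaussian center/width, decay rate) chosen purely for illustration:

```python
import math

# Illustrative basis functions phi_j(t) over elapsed time t (in years).
# The exact forms and parameters used in the paper are not stated in this
# report, so every constant below is an assumption.
BASIS = {
    "linear":      lambda t: t,
    "quadratic":   lambda t: t ** 2,
    "polynomial":  lambda t: t ** 3,  # degree 3 chosen as an example
    "gaussian":    lambda t: math.exp(-((t - 2.0) ** 2) / 2.0),
    "sigmoid":     lambda t: 1.0 / (1.0 + math.exp(-(t - 2.0))),
    "exponential": lambda t: math.exp(-0.5 * t),
}

def feature_vector(t, names=("linear", "gaussian", "exponential")):
    """Map elapsed time t to the basis features fed into the linear model;
    the model's prediction is a weighted sum of these features."""
    return [BASIS[n](t) for n in names]
```

With `J` features, the static `J = 0` case in the table corresponds to dropping the time-dependent terms entirely, which is why it coincides with the static IBP baseline.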