Event Recommendation in Event-Based Social Networks

Authors: Zhi Qiao, Peng Zhang, Chuan Zhou, Yanan Cao, Li Guo, Yanchuan Zhang

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results on several real-world datasets demonstrate the utility of our method." "Experiments carried out on several real data sets verify the effectiveness of our proposed model." |
| Researcher Affiliation | Academia | 1 Institute of Information Engineering, Chinese Academy of Sciences; 2 Institute of Computing Technology and the University of the Chinese Academy of Sciences; 3 Victoria University, Melbourne, Australia |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found. The paper describes its methods with mathematical formulas and textual descriptions rather than structured pseudocode. |
| Open Source Code | No | No explicit statement about releasing source code for the methodology, and no link to a code repository, was found. |
| Open Datasets | Yes | "We got the five data sets as in Table 1 for the five American cities in Meetup by extracting them from the data sets published by (Zhang et al. 2013)." |
| Dataset Splits | No | The paper states: "For all the data sets, we randomly split them with 80% into the training sets and 20% into the test sets." It does not mention a separate validation split (see the split sketch after the table). |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory) used to run the experiments are provided. |
| Software Dependencies | No | No software dependencies with version numbers (e.g., libraries, frameworks, or language versions) are mentioned. |
| Experiment Setup | No | The paper mentions stochastic gradient descent for parameter learning and a sampling strategy (for each positive, joined event, 10 events the user has not joined are sampled as negatives; see the sketch after the table), but it gives no specific hyperparameter values such as learning rate, batch size, or number of epochs. |
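For reference, here is a minimal sketch of the 80/20 random train/test split described in the Dataset Splits row, assuming the interaction records are simple (user, event) pairs; the data layout, the seed, and the helper name `split_interactions` are assumptions, not details from the paper, which also defines no validation split.

```python
import random

def split_interactions(interactions, train_ratio=0.8, seed=42):
    """Randomly split interaction records into training and test sets (no validation split)."""
    shuffled = list(interactions)
    random.Random(seed).shuffle(shuffled)   # fixed seed only for reproducibility of this sketch
    cut = int(len(shuffled) * train_ratio)  # 80% training, remaining 20% test
    return shuffled[:cut], shuffled[cut:]

# Toy usage with hypothetical (user, event) pairs.
records = [("u1", "e1"), ("u1", "e2"), ("u2", "e3"), ("u3", "e1"), ("u3", "e4")]
train_set, test_set = split_interactions(records)
```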
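Similarly, a hedged sketch of the sampling strategy noted in the Experiment Setup row: for each event a user has joined (a positive example), 10 events the user has not joined are sampled as negatives. The ratio of 10 negatives per positive follows the paper's description; the function name `sample_training_pairs`, the dictionary layout, and the toy data are assumptions, and the stochastic gradient descent update itself is not reproduced because the paper reports no hyperparameters.

```python
import random

def sample_training_pairs(joined, all_events, num_negatives=10, seed=0):
    """Yield (user, event, label) triples: each joined event plus sampled non-joined negatives."""
    rng = random.Random(seed)
    for user, pos_events in joined.items():
        # Candidate negatives: events this user has not joined.
        candidates = [e for e in all_events if e not in pos_events]
        for pos in pos_events:
            yield user, pos, 1  # positive: an event the user joined
            k = min(num_negatives, len(candidates))
            for neg in rng.sample(candidates, k):
                yield user, neg, 0  # negative: a sampled event the user did not join

# Toy usage: two users, five events, 2 negatives per positive for brevity.
joined = {"u1": {"e1", "e2"}, "u2": {"e3"}}
events = ["e1", "e2", "e3", "e4", "e5"]
pairs = list(sample_training_pairs(joined, events, num_negatives=2))
```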