Attentional Image Retweet Modeling via Multi-Faceted Ranking Network Learning

Authors: Zhou Zhao, Lingtao Meng, Jun Xiao, Min Yang, Fei Wu, Deng Cai, Xiaofei He, Yueting Zhuang

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'The extensive experiments on a large-scale dataset from Twitter site shows that our method achieves better performance than other state-of-the-art solutions to the problem.'
Researcher Affiliation | Academia | 1) College of Computer Science, Zhejiang University; 2) Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; 3) State Key Lab of CAD&CG, Zhejiang University
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states, 'The dataset will be released later for further study,' but does not provide any information or links regarding the release of open-source code for the described methodology.
Open Datasets | No | The paper states, 'We collect data from Twitter,' and 'The dataset will be released later for further study,' but does not provide concrete access information or a citation for a currently public dataset.
Dataset Splits | Yes | 'We sort users retweet behaviors based on their timestamp and use the first 60%, 70% and 80% of data as training set and the remaining for testing, so the training and testing data do not have overlap. The validation data is obtained separately from the training and testing data.' (A minimal split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models or processor types) used for running its experiments.
Software Dependencies | No | The paper mentions software components like VGGNet and LSTM networks but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | 'We set the learning rate to 0.01 for the gradient method. The weights of deep neural networks are randomly initialized by a Gaussian distribution with zero mean in our experiments. We vary the dimension of user preference representation from 100, 200, to 400.' (A configuration sketch follows the table.)
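For clarity, here is a minimal sketch of the chronological split quoted under Dataset Splits: sort retweet events by timestamp and take the earliest fraction as training data, so training and testing never overlap. The record layout (the `user_id`, `tweet_id`, and `timestamp` fields) is a hypothetical illustration; the paper only states that behaviors are sorted by timestamp and split 60/70/80%.

```python
from typing import List, Tuple

def chronological_split(events: List[dict], train_fraction: float) -> Tuple[List[dict], List[dict]]:
    """Split retweet events into train/test by time, with no overlap.

    Sorts events by their 'timestamp' field (an assumed record layout;
    the paper only says behaviors are sorted by timestamp) and uses the
    earliest `train_fraction` of events for training.
    """
    ordered = sorted(events, key=lambda e: e["timestamp"])
    cut = int(len(ordered) * train_fraction)
    return ordered[:cut], ordered[cut:]

# Hypothetical retweet records; only 'timestamp' is needed for the split.
events = [
    {"user_id": 1, "tweet_id": 10, "timestamp": 1},
    {"user_id": 2, "tweet_id": 11, "timestamp": 2},
    {"user_id": 1, "tweet_id": 12, "timestamp": 3},
    {"user_id": 3, "tweet_id": 13, "timestamp": 4},
    {"user_id": 2, "tweet_id": 14, "timestamp": 5},
]

# The paper evaluates with the first 60%, 70%, and 80% as training data.
for frac in (0.6, 0.7, 0.8):
    train, test = chronological_split(events, frac)
    print(f"{frac:.0%} training -> {len(train)} train / {len(test)} test events")
```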
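Likewise, a sketch of the training configuration quoted under Experiment Setup: learning rate 0.01, zero-mean Gaussian weight initialization, and user-preference dimensions varied over 100, 200, and 400. The Gaussian's standard deviation and the use of plain gradient descent are assumptions; the paper specifies only the learning rate and the zero mean.

```python
import numpy as np

LEARNING_RATE = 0.01               # 'learning rate to 0.01 for the gradient method'
PREFERENCE_DIMS = (100, 200, 400)  # user-preference dimensions varied in the paper

def init_weights(shape, rng, std=0.01):
    """Zero-mean Gaussian initialization, as reported in the paper.

    The standard deviation (std=0.01) is an assumption; the paper only
    states that weights are drawn from a zero-mean Gaussian.
    """
    return rng.normal(loc=0.0, scale=std, size=shape)

def gradient_step(weights, grad, lr=LEARNING_RATE):
    """One plain gradient-descent update; the paper does not specify the
    optimizer beyond the learning rate of its 'gradient method'."""
    return weights - lr * grad

rng = np.random.default_rng(0)
for dim in PREFERENCE_DIMS:
    W = init_weights((dim, dim), rng)
    # Dummy all-ones gradient, just to exercise the update rule.
    W = gradient_step(W, np.ones_like(W))
    print(f"dim={dim}: weight mean after one step = {W.mean():.4f}")
```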