Trust Prediction with Propagation and Similarity Regularization
Authors: Xiaoming Zheng, Yan Wang, Mehmet Orgun, Youliang Zhong, Guanfeng Liu
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on a real-world dataset illustrate significant improvement delivered by our approach in trust prediction accuracy over the state-of-the-art approaches. |
| Researcher Affiliation | Academia | 1Department of Computing, Macquarie University, Sydney, NSW 2109, Australia {xiaoming.zheng, yan.wang, mehmet.orgun, youliang.zhong}@mq.edu.au 2School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, China 215006 gfliu@suda.edu.cn |
| Pseudocode | No | The paper provides mathematical formulations for its model but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that its source code is publicly available. |
| Open Datasets | Yes | The dataset Advogato2 used in our experiments is obtained from a trust-based social network. ... 2http://www.trustlet.org/wiki/advogato dataset. |
| Dataset Splits | Yes | In total, we have conducted three groups of experiments with different percentages (80%, 60% and 40%) of the data for training. ... For model validation, we have conducted repeated random sub-sampling 10 times in each experiment. Finally, each model is experimented with 300 times (3 different percentages × 10 different initial matrices × 10 times cross validations). |
| Hardware Specification | No | The paper does not provide specific hardware details (like GPU/CPU models or memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the gradient descent method and a real-valued Genetic Algorithm but does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | In all of the three approaches, we use the same gradient descent method for the matrix factorization process and set λ1 = λ2 = 0.01, γ = 0.1, H = 2 and l = 10. |
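The validation protocol quoted above (repeated random sub-sampling at 80%, 60% and 40% training percentages, 10 repeats each) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, the pair count, and the seeding scheme are assumptions.

```python
import random

def random_subsample_splits(n_pairs, train_frac, n_repeats, seed=0):
    """Repeated random sub-sampling validation: each repeat draws a
    fresh random train/test partition of the observed trust pairs."""
    rng = random.Random(seed)
    indices = list(range(n_pairs))
    splits = []
    for _ in range(n_repeats):
        rng.shuffle(indices)
        cut = int(train_frac * n_pairs)
        splits.append((sorted(indices[:cut]), sorted(indices[cut:])))
    return splits

# The paper's totals: 3 training percentages x 10 initial matrices
# x 10 sub-sampling repeats = 300 runs per model.
all_splits = {frac: random_subsample_splits(1000, frac, n_repeats=10)
              for frac in (0.8, 0.6, 0.4)}
```

Unlike k-fold cross-validation, repeated random sub-sampling lets the test sets of different repeats overlap, which matches the paper's description of drawing independent random partitions per repeat.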
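For the experiment-setup row, a minimal sketch of the gradient-descent matrix factorization with the reported hyperparameters (λ1 = λ2 = 0.01 as regularization weights, γ = 0.1 as the learning rate, l = 10 latent factors) is given below. It deliberately omits the paper's trust-propagation (H = 2 hops) and similarity-regularization terms, so the objective here is a plain regularized factorization, not the authors' full model; all names and the iteration count are assumptions.

```python
import numpy as np

def mf_gradient_descent(T, mask, l=10, lam1=0.01, lam2=0.01,
                        gamma=0.1, iters=200, seed=0):
    """Factorize the observed trust matrix T ~ U @ V.T on entries where
    mask == 1, minimizing
        0.5*||mask*(T - U V^T)||_F^2 + 0.5*lam1*||U||^2 + 0.5*lam2*||V||^2
    by full-batch gradient descent with step size gamma."""
    rng = np.random.default_rng(seed)
    n, m = T.shape
    U = 0.1 * rng.standard_normal((n, l))
    V = 0.1 * rng.standard_normal((m, l))
    for _ in range(iters):
        E = mask * (U @ V.T - T)      # residual on observed entries only
        gU = E @ V + lam1 * U         # gradient w.r.t. U
        gV = E.T @ U + lam2 * V       # gradient w.r.t. V
        U -= gamma * gU
        V -= gamma * gV
    return U, V
```

In the paper's setting the same descent loop would carry extra gradient terms for the propagation and similarity regularizers; the fixed step γ = 0.1 is the value quoted in the setup row.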