COBRA: Context-Aware Bernoulli Neural Networks for Reputation Assessment

Authors: Leonit Zeynalvand, Tie Luo, Jie Zhang (pp. 7317-7324)

AAAI 2020

Reproducibility

| Variable | Result | LLM Response |
|----------|--------|--------------|
| Research Type | Experimental | "The performance of COBRA is validated by our experiments using a real dataset, and by our simulations, where we also show that COBRA outperforms other state-of-the-art TRM systems." |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanyang Technological University, Singapore; Department of Computer Science, Missouri University of Science and Technology, USA (leonit001@e.ntu.edu.sg, tluo@mst.edu, zhangj@ntu.edu.sg) |
| Pseudocode | Yes | Algorithm 1: Training data initialization; Algorithm 2: Update training data vertically; Algorithm 3: Update training data horizontally |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is open source or publicly available. |
| Open Datasets | Yes | "We use a public dataset obtained from (Zheng, Zhang, and Lyu 2014) which contains the response-time values of 4,532 web services invoked by 142 service users over 64 time slices. The dataset contains 30,287,611 records of data in total, which translates to a data sparsity of 26.5%." (See the sparsity check after the table.) |
| Dataset Splits | Yes | "We employ 10-fold cross validation and compare the performance of COBRA with the benchmark methods described in Section 5.1." (See the cross-validation sketch after the table.) |
| Hardware Specification | No | "All measurements are conducted using the same Linux workstation with 12 CPU cores and 32GB of RAM." The paper does not specify the exact CPU model or whether a GPU was used. |
| Software Dependencies | No | The functional API of Keras is used to implement the neural network architectures on top of a TensorFlow backend, while scikit-learn is used for the Gaussian process, decision tree, and Gaussian naive Bayes models. However, no version numbers are given for these software components. (See the baseline sketch after the table.) |
| Experiment Setup | No | The paper describes the network topology (N = 3 layers, width calculation), activation functions (sigmoid, ReLU), and loss function (cross-entropy), and states that weights are computed using gradient-descent backpropagation. However, it does not report hyperparameters such as learning rate, batch size, or number of epochs. (See the architecture sketch after the table.) |
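The 26.5% sparsity figure in the Open Datasets row can be sanity-checked with a few lines of arithmetic, assuming sparsity here means the fraction of missing entries in the dense (user, service, time-slice) matrix implied by the quoted dimensions:

```python
# Sanity check of the reported 26.5% data sparsity, under the
# assumption that a fully dense dataset would contain one record per
# (user, service, time slice) combination.
users, services, time_slices = 142, 4532, 64
records = 30_287_611

dense_size = users * services * time_slices   # 41,186,816 possible entries
sparsity = 1 - records / dense_size           # fraction of missing entries
print(f"sparsity = {sparsity:.1%}")           # -> sparsity = 26.5%
```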
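For the Dataset Splits row, a minimal sketch of the 10-fold cross-validation protocol the paper reports; `X`, `y`, and `model_fn` are hypothetical placeholders, since the page quotes the protocol but not the evaluation code:

```python
# A minimal 10-fold cross-validation loop; X, y, and model_fn are
# assumed placeholders, not artifacts from the paper.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(model_fn, X, y, n_splits=10, seed=0):
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True,
                                     random_state=seed).split(X):
        model = model_fn()                      # fresh model per fold
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))               # mean score across folds
```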
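The scikit-learn baselines named in the Software Dependencies row can be instantiated as below. Treating them as classifiers is an assumption (Gaussian naive Bayes is classification-only, but the page does not say how the other two models are framed), and all hyperparameters are library defaults:

```python
# Benchmark models named in the paper, with scikit-learn defaults;
# the classifier variants are an assumption, not confirmed by the page.
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

baselines = {
    "gaussian_process": lambda: GaussianProcessClassifier(),
    "decision_tree": lambda: DecisionTreeClassifier(),
    "gaussian_naive_bayes": lambda: GaussianNB(),
}

# Example: plug a baseline into the cross-validation sketch above.
# score = cross_validate(baselines["decision_tree"], X, y)
```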
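The Experiment Setup row pins down the architecture but not the training hyperparameters. Below is a sketch of what the description implies, using the Keras functional API the paper mentions; the input width, hidden width, learning rate, and the choice of plain SGD are all assumptions, since the paper derives layer width from its own formula and omits the rest:

```python
# Sketch of the described network: three hidden layers, ReLU hidden
# activations, a sigmoid (Bernoulli) output, cross-entropy loss, and
# gradient-descent backpropagation. All numeric settings here are
# assumed placeholders -- the paper does not report them.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_dim=64, hidden_width=128):
    inputs = keras.Input(shape=(input_dim,))
    x = inputs
    for _ in range(3):                      # N = 3 hidden layers
        x = layers.Dense(hidden_width, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
                  loss="binary_crossentropy")
    return model
```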