PI-Bully: Personalized Cyberbullying Detection with Peer Influence
Authors: Lu Cheng, Jundong Li, Yasin Silva, Deborah Hall, Huan Liu
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental evaluations on real-world datasets corroborate the effectiveness of the proposed framework. In this section, we present experimental results to evaluate the effectiveness of the proposed PI-Bully model. |
| Researcher Affiliation | Academia | ¹Computer Science and Engineering, Arizona State University; ²Mathematical and Natural Sciences, Arizona State University; ³Social and Behavioral Sciences, Arizona State University; {lcheng35,jundongl,ysilva,d.hall,huanliu}@asu.edu |
| Pseudocode | No | The paper includes a workflow diagram (Figure 1) and mentions optimization algorithms (ADMM, FISTA) but does not provide structured pseudocode or algorithm blocks. (A generic FISTA sketch is given below the table for illustration.) |
| Open Source Code | No | The paper provides a link for the dataset ('http://www.public.asu.edu/~lcheng35/'), but there is no explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Our dataset can be downloaded from http://www.public.asu.edu/~lcheng35/. |
| Dataset Splits | Yes | In the experiments, we use 80% of the datasets for training and the rest for testing; the averaged classification results based on ten runs are shown in Tables 2-3. We select the hyperparameters based on cross-validation on the training data. (A hedged sketch of this split-and-evaluate protocol appears below the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using Linguistic Inquiry and Word Count (LIWC) for psychometric analysis and refers to standard machine learning models, but it does not provide specific version numbers for any software dependencies or libraries used for the model implementation. |
| Experiment Setup | Yes | In the experiments, we use 80% of the datasets for training and the rest for testing; the averaged classification results based on ten runs are shown in Tables 2-3. We select the hyperparameters based on cross-validation on the training data. To investigate the effect of these two parameters, we fix one parameter at a time (λ1=1e-7 and λ2=1e-7, respectively) and vary the other to evaluate how it affects the classification performance. We vary the values of λ1 and λ2 among {1e-7, 1e-5, 1e-3, 0.1, 10} and show the AUC scores in Fig. 3. (A sketch of this sensitivity sweep appears below the table.) |
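
The paper names FISTA as one of its optimizers but provides no algorithm block. As a purely illustrative stand-in, here is a minimal, generic FISTA sketch for an ℓ1-regularized least-squares objective; it is not the authors' implementation, and `A`, `b`, and `lam` are hypothetical placeholders.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrink each entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """FISTA for min_w 0.5*||Aw - b||^2 + lam*||w||_1.

    Generic accelerated proximal gradient with step size 1/L, where
    L = ||A||_2^2 is the Lipschitz constant of the smooth part's gradient.
    """
    L = np.linalg.norm(A, 2) ** 2              # squared spectral norm
    w = np.zeros(A.shape[1])
    y, t = w.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # gradient of the smooth part
        w_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = w_next + ((t - 1.0) / t_next) * (w_next - w)  # momentum step
        w, t = w_next, t_next
    return w
```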
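
To make the reported evaluation protocol concrete (80/20 split, ten runs, hyperparameters chosen by cross-validation on the training data only), here is a hedged sketch. `LogisticRegression` is a stand-in classifier and the synthetic `X`, `y` are placeholders; the PI-Bully model itself is not publicly released.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic placeholders for the feature matrix and cyberbullying labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = rng.integers(0, 2, size=1000)

aucs = []
for run in range(10):                          # ten independent runs, as reported
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=run)  # 80% train / 20% test
    # Hyperparameters selected by cross-validation on the training split only.
    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          {"C": [0.01, 0.1, 1, 10]}, cv=5, scoring="roc_auc")
    search.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, search.predict_proba(X_te)[:, 1]))

print(f"AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```

Averaging over ten differently seeded splits, as the paper does, reduces the variance of the reported scores.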
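
The sensitivity analysis fixes one of λ1, λ2 at 1e-7 and varies the other over {1e-7, 1e-5, 1e-3, 0.1, 10}, reporting AUC (Fig. 3). The sketch below mirrors that sweep; `auc_for` is a hypothetical proxy that folds both penalties into a single ℓ2 term, since PI-Bully's actual objective and code are unavailable.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic placeholders; see the previous sketch for the split protocol.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = rng.integers(0, 2, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

lam_grid = [1e-7, 1e-5, 1e-3, 0.1, 10]

def auc_for(lam1, lam2):
    """Hypothetical proxy: combine both penalties into one l2 strength
    and report test AUC; PI-Bully uses two separate regularizers."""
    clf = LogisticRegression(C=1.0 / (lam1 + lam2), max_iter=1000)
    clf.fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Fix one parameter at 1e-7 and vary the other, mirroring the Fig. 3 protocol.
for lam1 in lam_grid:
    print(f"lam1={lam1:g}, lam2=1e-07 -> AUC {auc_for(lam1, 1e-7):.3f}")
for lam2 in lam_grid:
    print(f"lam1=1e-07, lam2={lam2:g} -> AUC {auc_for(1e-7, lam2):.3f}")
```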