ROAR: Robust Label Ranking for Social Emotion Mining

Authors: Jason (Jiasheng) Zhang, Dongwon Lee

AAAI 2018

Reproducibility Variable Result LLM Response
Research Type | Experimental | Through comprehensive empirical validation using 4 real datasets, 16 benchmark semi-synthetic label ranking datasets, and a case study, we demonstrate the superiority of our proposals over 2 popular label ranking measures and 6 competing label ranking algorithms.
Researcher Affiliation | Academia | Jason (Jiasheng) Zhang, Dongwon Lee, College of Information Sciences and Technology, The Pennsylvania State University, USA ({jpz5181,dlee}@ist.psu.edu)
Pseudocode | No | The paper describes ROAR's learning and prediction process in descriptive text within the 'Robust Label Ranking Model: ROAR' section, but it does not present the process as a formally structured pseudocode block or algorithm figure.
Open Source Code | Yes | The datasets and implementations used in the empirical validation are available for access: http://pike.psu.edu/download/aaai18/
Open Datasets | Yes | We use four Facebook post datasets... and 16 benchmark semi-synthetic datasets obtained by converting benchmark multi-class classification data (using Naive Bayes) and regression data (using the feature-to-label technique) from the UCI and Statlog repositories into label ranking (Cheng, Hühn, and Hüllermeier 2009). These datasets are widely used as benchmarks in label ranking work. The datasets and implementations used in the empirical validation are available for access: http://pike.psu.edu/download/aaai18/
Dataset Splits | Yes | All results are obtained with 5-fold cross validation.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. It focuses on the algorithmic and data aspects.
Software Dependencies | No | The paper mentions the 'Alchemy Language API (by IBM Watson Lab)' and 'Python' as tools used. However, it does not specify version numbers for these or any other software dependencies, which would be necessary for reproducibility.
Experiment Setup | Yes | For ACC@k, k is set to 3 to mimic the behavior of Facebook, where only the top-3 emoticons of posts are shown by default. The decision tree in this work is a binary tree. The threshold and the feature for each split are selected by exhaustive search so that the sizes of the neighborhoods in the target space, estimated from the training data in the resultant child nodes, become the smallest. The stopping criterion is straightforward: the partitioning stops when no further partitioning is possible, that is, when there is no partitioning whose criterion is smaller than Gini(T) for the current node T. There is a hyperparameter k (Grbovic, Djuric, and Vucetic 2013), which is set to the default value 100 for the Facebook data, and slightly smaller than the number of all possible rankings for each semi-synthetic dataset. The two KNN-based models use the default K = 20.
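The reported evaluation protocol (5-fold cross validation over each dataset) can be sketched without external dependencies. The split logic below is a generic illustration only; the paper does not specify its fold-assignment details (shuffling, stratification), and the function name is an assumption.

```python
import random

def five_fold_splits(n_samples, seed=0):
    """Yield (train_indices, test_indices) pairs for 5-fold cross validation.

    Generic illustration of the evaluation protocol; the paper's own
    fold assignment (shuffling, stratification) is not specified.
    """
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)          # deterministic shuffle
    folds = [indices[i::5] for i in range(5)]     # round-robin into 5 folds
    for i in range(5):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# Each sample lands in exactly one test fold across the 5 iterations.
for train, test in five_fold_splits(20):
    assert not set(train) & set(test)
```

Results on each dataset would then be averaged over the five test folds.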
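Since results are reported with ACC@k (k = 3, mirroring Facebook's top-3 emoticon display), a minimal sketch of a top-k agreement metric is given below. The overlap-based formulation and the function name `acc_at_k` are assumptions for illustration; the paper's exact ACC@k definition may differ.

```python
def acc_at_k(pred_ranking, true_ranking, k=3):
    """Top-k agreement between a predicted and a true label ranking.

    Both arguments are label sequences ordered from most to least
    preferred. Returns the fraction of the true top-k labels that also
    appear in the predicted top-k (an assumed, overlap-based
    formulation; the paper's exact ACC@k definition may differ).
    """
    pred_top = set(pred_ranking[:k])
    true_top = set(true_ranking[:k])
    return len(pred_top & true_top) / k

# Example with Facebook-style emotion labels, k = 3 as in the paper.
pred = ["love", "haha", "wow", "sad", "angry"]
true = ["love", "wow", "sad", "haha", "angry"]
print(acc_at_k(pred, true, k=3))  # 2 of the 3 top labels overlap -> 0.666...
```

A perfect prediction of the top-k set scores 1.0 regardless of the order within the top k, which matches the motivation of only the top-3 emoticons being displayed.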