A Stratified Strategy for Efficient Kernel-Based Learning

Authors: Simone Filice, Danilo Croce, Roberto Basili

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The proposed strategy has been integrated within two well-known algorithms: Support Vector Machines and the Passive-Aggressive online classifier. A significant cost reduction (up to 90%), with a negligible performance drop, is observed on two Natural Language Processing tasks, i.e. Question Classification and Sentiment Analysis in Twitter." From the Experimental Evaluations section: "the stratified strategy is evaluated w.r.t. the Question Classification and Sentiment Analysis tasks."
Researcher Affiliation | Academia | Simone Filice, Danilo Croce, Roberto Basili; Dept. of Civil Engineering and Computer Science Engineering and Dept. of Enterprise Engineering, University of Roma Tor Vergata, Italy. {filice,croce,basili}@info.uniroma2.it
Pseudocode | Yes | Algorithm 1: c-level Stratified Classifier; Algorithm 2: c-level Stratified SVM learning; Algorithm 3: c-level Stratified PA Classifier
Open Source Code | No | The paper does not provide explicit statements or links indicating the release of open-source code for the described methodology.
Open Datasets | Yes | "We used the UIUC dataset (Li and Roth 2006)." "The evaluation of the stratified approach is carried out on the SemEval-2013 Task 2 corpus (Wilson et al. 2013)."
Dataset Splits | Yes | "Classifier parameters are tuned with a Repeated Random Sub-sampling Validation, consisting of a 10-fold validation strategy." The UIUC dataset comprises a training set of 5,452 questions and a test set of 500 questions, organized into 6 coarse-grained classes. The SemEval training dataset is composed of 10,205 annotated tweets, while the test dataset contains 3,813 tweets.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions algorithms such as Support Vector Machines and the Passive-Aggressive algorithm, but does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "Classifier parameters are tuned with a Repeated Random Sub-sampling Validation, consisting of a 10-fold validation strategy." The SVD reduction is applied with a dimensionality cut of d = 250. The ambiguity margin is varied from 0.1 to 1; the reported setting uses ambiguity margins m1 = m2 = 1. A co-occurrence Word-Space with a window of size 3 is acquired.
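To make the pseudocode row concrete, the following is a minimal sketch of the idea behind the paper's Algorithm 1 (c-level Stratified Classifier): cheap classifiers decide "easy" examples outright, and only examples whose score falls inside an ambiguity margin are escalated to costlier (e.g. kernel-based) levels. The function name and the toy scorers below are illustrative assumptions, not the authors' implementation.

```python
def stratified_predict(x, levels, margins):
    """Cascade prediction over a stratified classifier.

    levels:  list of scoring functions f(x) -> real-valued score,
             ordered from cheapest to most expensive.
    margins: ambiguity margin m_i for each non-final level; a score
             with |score| <= m_i is considered ambiguous and deferred.
    Returns (predicted_label, index_of_level_that_decided).
    """
    for i, score_fn in enumerate(levels):
        s = score_fn(x)
        is_last = (i == len(levels) - 1)
        # Decide if confident (outside the ambiguity margin) or out of levels.
        if is_last or abs(s) > margins[i]:
            return (1 if s >= 0 else -1), i


# Toy two-level cascade: a "cheap" scorer backed by a "costlier" one
# (both are stand-ins for, e.g., a linear and a kernel-based model).
cheap = lambda x: x[0]
costly = lambda x: x[0] + 0.5 * x[1]

# |0.05| <= 0.1, so this example is ambiguous at level 0 and escalates.
label, used_level = stratified_predict((0.05, 1.0), [cheap, costly], margins=[0.1])
```

Confident examples never reach the expensive levels, which is the source of the cost reduction the paper reports: only the ambiguous fraction of the test set pays the higher classification cost.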