Efficient Online Learning for Mapping Kernels on Linguistic Structures

Authors: Giovanni Da San Martino, Alessandro Sperduti, Fabio Aiolli, Alessandro Moschitti | pp. 3421-3428

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Additionally, we derive a reliable empirical evidence on semantic role labeling task, which is a natural language classification task, highly dependent on syntactic trees. The results show that our faster approach can clearly improve on standard kernel-based SVMs, which cannot run on very large datasets."
Researcher Affiliation | Collaboration | Giovanni Da San Martino (Qatar Computing Research Institute, Hamad Bin Khalifa University, Doha, Qatar; gmartino@hbku.edu.qa); Alessandro Sperduti and Fabio Aiolli (Department of Mathematics, University of Padova, via Trieste 63, Padova, Italy; {sperduti, aiolli}@math.unipd.it); Alessandro Moschitti (Amazon, Manhattan Beach, CA, USA; amosch@amazon.com)
Pseudocode | No | The paper describes algorithmic procedures and equations, such as the kernelized perceptron and the score computation, but does not present them within clearly labeled pseudocode or algorithm blocks.
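For context on the kind of procedure the paper describes but does not lay out as pseudocode, here is a minimal, hypothetical sketch of a kernelized perceptron in dual form. The function names, the update rule details, and the `kernel` interface are assumptions for illustration, not the paper's implementation:

```python
# Hypothetical kernel perceptron sketch (dual form). On a mistake, the
# current example's dual coefficient is incremented; prediction scores are
# sums of kernel evaluations against past mistakes.
def kernel_perceptron_train(X, y, kernel, epochs=1):
    """Return one dual coefficient per training example (0 = never a mistake)."""
    alphas = [0.0] * len(X)
    for _ in range(epochs):
        for i, (x_i, y_i) in enumerate(zip(X, y)):
            # score(x_i) = sum_j alpha_j * y_j * K(x_j, x_i) over past mistakes
            score = sum(alphas[j] * y[j] * kernel(X[j], x_i)
                        for j in range(len(X)) if alphas[j] != 0)
            if y_i * score <= 0:      # mistake: add this example to the model
                alphas[i] += 1.0
    return alphas

def predict(x, X, y, alphas, kernel):
    score = sum(alphas[j] * y[j] * kernel(X[j], x)
                for j in range(len(X)) if alphas[j] != 0)
    return 1 if score >= 0 else -1
```

With a tree kernel plugged in as `kernel`, the same loop operates directly on parse trees; the paper's contribution concerns making such kernel evaluations efficient online, which this sketch does not attempt.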
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "As a referring dataset, we used PropBank along with Penn Treebank 2 (Marcus, Santorini, and Marcinkiewicz 1993)."
Dataset Splits | Yes | Table 1 (statistics of syntactic trees in the boundary detection dataset): Training 4,079,510 trees; Validation 234,416 trees; Test 149,140 trees.
Hardware Specification | No | The paper discusses computational time and memory usage but does not provide specific details about the hardware (e.g., CPU, GPU models, or memory specifications) used for the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation of the experiments.
Experiment Setup | Yes | "In the case of the TKs and combination, we used the following parameters: λ ∈ {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} (TKs decay factor) and γ ∈ {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} (weighting the linear combination between tree and polynomial kernels)."
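The quoted setup implies a grid search over the decay factor λ and the mixing weight γ, with the combined kernel formed as a γ-weighted linear combination of a tree kernel and a polynomial kernel. A minimal sketch of that grid and combination follows; the combination form K = γ·K_tree + (1-γ)·K_poly, the polynomial degree, and all function names are assumptions inferred from the quoted text, not taken from the paper:

```python
import itertools

# Assumed inhomogeneous polynomial kernel on feature vectors.
def poly_kernel(x, z, degree=3):
    return (sum(a * b for a, b in zip(x, z)) + 1) ** degree

# Assumed linear combination of a (precomputed) tree-kernel value and a
# polynomial kernel, weighted by gamma as described in the quote.
def combined_kernel(k_tree_value, x, z, gamma):
    return gamma * k_tree_value + (1 - gamma) * poly_kernel(x, z)

# The parameter grid from the quoted setup: 9 lambda values x 8 gamma values.
lambdas = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
gammas = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
grid = list(itertools.product(lambdas, gammas))  # 72 (lambda, gamma) pairs
```

λ itself would enter inside the tree-kernel computation (down-weighting larger subtree matches), which is why it does not appear in `combined_kernel` here.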