Online Classification Using a Voted RDA Method

Authors: Tianbing Xu, Jianfeng Gao, Lin Xiao, Amelia Regan

AAAI 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We examine the method using ℓ1-regularization on a large-scale natural language processing task, and obtained state-of-the-art classification performance with fairly sparse models. Supporting evidence: Table 1 (comparing the performance of different algorithms); Figure 1 (different sparse feature structures under different regularization weights λ for vRDA with hinge and log losses); Figure 2 (trade-off between model sparsity and classification accuracy for vRDA with hinge and log losses).
Researcher Affiliation | Collaboration | Tianbing Xu (Computer Science, University of California, Irvine); Jianfeng Gao, Lin Xiao (Microsoft Research, Redmond, WA); Amelia C. Regan (Computer Science, University of California, Irvine)
Pseudocode | Yes | Algorithm 1: the voted RDA method (training); Algorithm 2: the voted RDA method (testing)
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | We trained the predictor on Sections 2-19 of the Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993).
Dataset Splits | Yes | We used Sections 20-21 to optimize training parameters, including the regularization weight λ and the learning rate η, and then evaluated the predictors on Section 22. The training set contains 36K sentences, while the development set and the test set have 4K and 1.7K, respectively.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies.
Experiment Setup | Yes | We used η = 0.05 and λ = 1e-5 for hinge loss, and η = 1000 and λ = 1e-4 for log loss.
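The reported pseudocode (Algorithm 1 for training, Algorithm 2 for testing) pairs voted-perceptron-style bookkeeping with RDA updates. Below is a minimal sketch of what such a voted RDA classifier with ℓ1-regularized hinge loss might look like, using the paper's hinge-loss settings (η = 0.05, λ = 1e-5). The closed-form soft-thresholding step follows the standard ℓ1-RDA update of Xiao (2010) as an assumption about the paper's subproblem; all function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def l1_rda_weights(g_sum, t, eta, lam):
    """Closed-form l1-RDA solution: soft-threshold the average
    subgradient, scaled by eta * sqrt(t) (illustrative scaling)."""
    g_avg = g_sum / t
    return -eta * np.sqrt(t) * np.sign(g_avg) * np.maximum(np.abs(g_avg) - lam, 0.0)

def voted_rda_train(X, y, eta=0.05, lam=1e-5, epochs=1):
    """Sketch of Algorithm 1: on each mistake, retire the current model
    with its survival count and take an RDA step on the hinge subgradient."""
    n, d = X.shape
    g_sum = np.zeros(d)   # running sum of hinge subgradients
    w = np.zeros(d)       # current linear model
    models = []           # list of (weights, survival count) pairs
    count = 0             # examples survived by the current model
    t = 0                 # number of RDA updates so far
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w) > 0:
                count += 1                    # correct: model survives
            else:
                models.append((w.copy(), count))
                g_sum += -y[i] * X[i]         # hinge-loss subgradient
                t += 1
                w = l1_rda_weights(g_sum, t, eta, lam)
                count = 1
    models.append((w.copy(), count))
    return models

def voted_rda_predict(models, x):
    """Sketch of Algorithm 2: survival-count-weighted majority vote."""
    score = sum(c * np.sign(x @ w) for w, c in models)
    return 1 if score >= 0 else -1
```

The survival counts play the same role as in the voted perceptron: models that survived many examples dominate the vote, while short-lived models contribute little.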