Adversarial Sequence Tagging

Authors: Jia Li, Kaiser Asif, Hong Wang, Brian D. Ziebart, Tanya Berger-Wolf

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section, we demonstrate the effectiveness of our proposed AST model." "Table 3: Per-variable accuracy for the three approaches on different datasets." "Table 4 shows the amount of time required to make predictions for all of the testing sequences." |
| Researcher Affiliation | Academia | "Department of Computer Science, University of Illinois at Chicago, Chicago, IL {jli213, kasif2, hwang207, bziebart, tanyabw}@uic.edu" |
| Pseudocode | Yes | "Algorithm 1 Single Oracle Game Solver." "Algorithm 2 Parameter Estimation Algorithm." (A hedged sketch of a single-oracle solver follows this table.) |
| Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | Human Activity Recognition Dataset [Reyes-Ortiz et al., 2015]; Baboon Activity Recognition Dataset [Strandburg-Peshkin et al., 2015; Crofoot et al., 2015]; FAQ Segmentation Dataset [McCallum et al., 2000]. |
| Dataset Splits | Yes | "We selected the regularization weights using a validation set (approximately 10% of the data)." "We use a validation set of 10% of the data for selecting the parameter c, which controls the trade-off between slack and the magnitude of the weight vectors, and default parameters for the remaining settings." (A sketch of this split protocol follows the table.) |
| Hardware Specification | No | The paper does not report the hardware used for its experiments (CPU/GPU models, processor types, or memory amounts). |
| Software Dependencies | No | The paper mentions software such as L-BFGS, SVMhmm, SVMlight, and Gurobi, but gives no version numbers for these components, which limits reproducibility. |
| Experiment Setup | No | The paper mentions using stochastic gradient descent and selecting the regularization weights and the parameter c on a validation set, but it does not report concrete hyperparameter values or training configurations (e.g., learning rate, batch size, number of epochs). (A generic SGD sketch follows the table.) |
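
The paper's Algorithm 1 is named "Single Oracle Game Solver." The paper itself does not include code, so the following is only a minimal, generic single-oracle scheme for a zero-sum matrix game, not a reconstruction of the authors' exact algorithm: solve the game restricted to the adversary actions seen so far via a minimax LP, query an oracle for the adversary's best response, and stop once no improving action exists. The function names, tolerance, and `full_action_set` enumeration are illustrative assumptions.

```python
# Generic single-oracle sketch for a zero-sum matrix game (assumptions noted above).
import numpy as np
from scipy.optimize import linprog

def solve_restricted_game(payoffs):
    """Minimax LP over the adversary columns gathered so far.

    payoffs: (m, k) array; rows are our actions, columns are adversary actions.
    Returns our mixed strategy p and the value v = max_j p @ payoffs[:, j].
    """
    m, k = payoffs.shape
    c = np.zeros(m + 1); c[-1] = 1.0                 # minimize v
    A_ub = np.hstack([payoffs.T, -np.ones((k, 1))])  # p @ payoffs[:, j] - v <= 0
    b_ub = np.zeros(k)
    A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0.0    # sum(p) = 1
    bounds = [(0, None)] * m + [(None, None)]        # p >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[-1]

def single_oracle(full_action_set, initial_column, tol=1e-8, max_iter=100):
    """Grow the adversary's action set one best response at a time."""
    columns = [initial_column]
    for _ in range(max_iter):
        payoffs = np.column_stack(columns)
        p, v = solve_restricted_game(payoffs)
        # Oracle step: adversary's best response to p over the full action set.
        br = max(full_action_set, key=lambda a: p @ a)
        if p @ br <= v + tol:            # no improving action -> equilibrium
            return p, v, columns
        columns.append(br)
    return p, v, columns
```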
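
The dataset-split protocol the report quotes (hold out roughly 10% of the data, pick the regularization weight that performs best on it) can be sketched as below. `fit_fn` and `score_fn` are hypothetical callables standing in for whichever tagger is being tuned, and the candidate weights are illustrative, since the paper does not list them.

```python
# Minimal sketch of the quoted ~10% validation protocol (placeholder callables).
import random

def select_regularization(fit_fn, score_fn, sequences, labels,
                          weights=(1e-3, 1e-2, 1e-1, 1.0), seed=0):
    idx = list(range(len(sequences)))
    random.Random(seed).shuffle(idx)
    n_val = max(1, len(idx) // 10)          # ~10% of sequences held out
    val, train = idx[:n_val], idx[n_val:]
    tr_x, tr_y = [sequences[i] for i in train], [labels[i] for i in train]
    va_x, va_y = [sequences[i] for i in val], [labels[i] for i in val]
    # Keep the regularization weight with the best held-out score.
    return max(weights,
               key=lambda w: score_fn(fit_fn(tr_x, tr_y, w), va_x, va_y))
```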
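
Since the paper reports using stochastic gradient descent but not its hyperparameters, the loop below shows only the generic shape of L2-regularized SGD under assumed settings; the learning rate, epoch count, and the `loss_gradient` callable (the model's per-sequence gradient) are all placeholders, not values from the paper.

```python
# Generic L2-regularized SGD loop (all hyperparameters are assumptions).
import numpy as np

def sgd(loss_gradient, n_features, data, reg_weight=0.1,
        learning_rate=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            # Per-example gradient plus the L2 regularization term.
            grad = loss_gradient(theta, data[i]) + reg_weight * theta
            theta -= learning_rate * grad
    return theta
```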