Constraining Linear-chain CRFs to Regular Languages
Authors: Sean Papay, Roman Klinger, Sebastian Padó
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model empirically as the output layer of a neural network and attain state-of-the-art performance for semantic role labeling (Weischedel et al., 2011; Pradhan et al., 2012). Section 6: Synthetic Data Experiments; Section 7: Real-World Data Experiment: Semantic Role Labeling |
| Researcher Affiliation | Academia | Sean Papay, Roman Klinger, & Sebastian Padó, University of Stuttgart (sean.papay|klinger|pado)@ims.uni-stuttgart.de |
| Pseudocode | Yes | Algorithm 1: Construction of an FSA from given sets of core, noncore, and continuation roles. |
| Open Source Code | Yes | To encourage the use of RegCCRFs, we provide an implementation as a Python library under the Apache 2.0 license which can be used as a drop-in replacement for standard CRFs in PyTorch. Available at www.ims.uni-stuttgart.de/en/research/resources/tools/regccrf/ |
| Open Datasets | Yes | we work with the OntoNotes corpus as used in the CoNLL 2012 shared task (Weischedel et al., 2011; Pradhan et al., 2012), whose training set comprises 66 roles. As downloaded from https://catalog.ldc.upenn.edu/LDC2013T19, and preprocessed according to https://cemantix.org/data/ontonotes.html |
| Dataset Splits | Yes | Every 5000 training steps, we approximated our model's F1 score against a subset of the provided development partition, using a simplified reimplementation of the official evaluation script. |
| Hardware Specification | Yes | We performed all SRL experiments on GeForce GTX 1080 Ti GPUs. Each experiment used a single GPU. |
| Software Dependencies | No | The paper mentions PyTorch and the Hugging Face transformers library, but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Table 3: Summary of hyperparameters for our models and experiments. Includes details such as optimizer, batch size, learning rate, and training iterations for both synthetic and SRL experiments. |
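The paper's central idea — restricting a linear-chain CRF's output space to a regular language by decoding over the product of the CRF lattice and a finite-state automaton — can be illustrated with a minimal sketch. This is a hypothetical, self-contained Viterbi implementation written for this review, not the authors' RegCCRF library or its API; the FSA, labels, and scores below are invented for illustration (here enforcing the classic BIO constraint that `I` may not begin a span):

```python
def constrained_viterbi(unary, pairwise, fsa_trans, start_state, accept_states):
    """Viterbi decoding over the product of a linear-chain CRF and an FSA.

    unary:      list of dicts label -> emission score, one per position
    pairwise:   dict (prev_label, label) -> transition score (missing = 0)
    fsa_trans:  partial dict (fsa_state, label) -> next_state;
                missing entries are the constraint (disallowed moves)
    Returns the highest-scoring label sequence accepted by the FSA.
    """
    T = len(unary)
    # chart maps product states (fsa_state, last_label) -> (score, backpointer)
    chart = {}
    for lbl, s in unary[0].items():
        nxt = fsa_trans.get((start_state, lbl))
        if nxt is not None and (
                (nxt, lbl) not in chart or s > chart[(nxt, lbl)][0]):
            chart[(nxt, lbl)] = (s, None)
    history = [chart]
    for t in range(1, T):
        new = {}
        for (state, prev), (score, _) in chart.items():
            for lbl, s in unary[t].items():
                nxt = fsa_trans.get((state, lbl))
                if nxt is None:
                    continue  # move rejected by the automaton
                cand = score + pairwise.get((prev, lbl), 0.0) + s
                if (nxt, lbl) not in new or cand > new[(nxt, lbl)][0]:
                    new[(nxt, lbl)] = (cand, (state, prev))
        chart = new
        history.append(chart)
    # pick the best entry that ends in an accepting FSA state
    key, (score, back) = max(
        ((k, v) for k, v in chart.items() if k[0] in accept_states),
        key=lambda kv: kv[1][0])
    labels = [key[1]]
    for t in range(T - 1, 0, -1):
        labels.append(back[1])
        back = history[t - 1][back][1]
    return list(reversed(labels)), score


# Toy BIO automaton: 'I' is only reachable after 'B' or 'I'.
fsa = {('q0', 'O'): 'q0', ('q0', 'B'): 'qB',
       ('qB', 'B'): 'qB', ('qB', 'I'): 'qB', ('qB', 'O'): 'q0'}
unary = [{'B': 1.0, 'I': 5.0, 'O': 0.0},   # unconstrained argmax would pick 'I'
         {'B': 0.0, 'I': 2.0, 'O': 0.0},
         {'B': 0.0, 'I': 0.0, 'O': 1.0}]
labels, score = constrained_viterbi(unary, {}, fsa, 'q0', {'q0', 'qB'})
print(labels, score)  # the illegal initial 'I' is pruned: ['B', 'I', 'O'] 4.0
```

Although position 0 has its highest emission score on `I`, the automaton has no transition for `I` from the start state, so decoding yields a sequence that is guaranteed to lie in the regular language — the property the paper exploits at both training and inference time.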