Generating CCG Categories
Authors: Yufang Liu, Tao Ji, Yuanbin Wu, Man Lan (pp. 13443-13451)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the CCGBank show that supertagging with generation can outperform a strong classification baseline. With various decoding oracles and a simple reranker, the tagger achieves state-of-the-art supertagging accuracy (95.5% without using additional external resources, 96.1% with BERT). |
| Researcher Affiliation | Academia | Yufang Liu, Tao Ji, Yuanbin Wu, Man Lan School of Computer Science and Technology, East China Normal University |
| Pseudocode | Yes | Table 1: The transition system of generating categories (see the category-decomposition sketch after this table). |
| Open Source Code | No | The paper does not provide a link to its source code or an explicit statement about its availability. It cites other works' implementations but does not release its own. |
| Open Datasets | Yes | We conduct experiments mainly on CCGBank (Hockenmaier and Steedman 2007). We also test our models on the news corpus of the Italian CCGbank (Bos, Bosco, and Mazzei 2009). |
| Dataset Splits | Yes | We follow the standard splits of CCGBank, using sections 02-21 as the training set, section 00 as the development set, and section 23 as the test set. For the Italian data, we split the corpus on periods and obtain 740 sentences, divided into train/dev/test at an 8:1:1 ratio (see the split sketch after this table). |
| Hardware Specification | No | The paper mentions 'Constrained by our hardware platform' but does not provide specific details such as CPU/GPU models, memory, or cloud computing resources used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | Constrained by our hardware platform, instead of using the default setting, we evaluate a smaller model (the batch size becomes 128, the dimensions of the encoder and the decoder LSTM are decreased to 300 and 200). The settings of network hyperparameters are in the supplementary. |
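The Pseudocode row refers to the paper's Table 1, a transition system that builds a supertag piece by piece rather than predicting it as a single atomic label. The sketch below is a hypothetical illustration (not the authors' code, and the tokenization scheme is an assumption) of the underlying idea: a CCG category is decomposed into a sequence of atomic categories, slashes, and brackets, the kind of target sequence a generative tagger can emit token by token.

```python
# Hypothetical sketch of decomposing a CCG category into an action
# sequence, in the spirit of the paper's transition system (Table 1).
# The exact action inventory here is an assumption, not the authors' own.
import re

def category_to_actions(category):
    """Split a category such as '(S[dcl]\\NP)/NP' into atomic tokens."""
    # One token per atomic category (with an optional feature, e.g. S[dcl]),
    # slash, or bracket.
    return re.findall(r"[A-Za-z]+(?:\[[a-z]+\])?|[\\/()]", category)

print(category_to_actions(r"(S[dcl]\NP)/NP"))
# -> ['(', 'S[dcl]', '\\', 'NP', ')', '/', 'NP']
```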
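The 8:1:1 split of the 740 Italian CCGbank sentences (Dataset Splits row) is straightforward to reproduce deterministically. The following is a minimal sketch assuming the sentences are already loaded in corpus order; the paper does not specify the exact ordering or any shuffling.

```python
# Minimal sketch of an 8:1:1 train/dev/test split over 740 sentences,
# assuming a fixed corpus order (the paper does not specify shuffling).
def split_8_1_1(sentences):
    n = len(sentences)
    n_train = int(n * 0.8)
    n_dev = int(n * 0.1)
    train = sentences[:n_train]
    dev = sentences[n_train:n_train + n_dev]
    test = sentences[n_train + n_dev:]
    return train, dev, test

sentences = [f"sent_{i}" for i in range(740)]  # placeholder data
train, dev, test = split_8_1_1(sentences)
print(len(train), len(dev), len(test))  # -> 592 74 74
```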