Back to the Future – Temporal Adaptation of Text Representations

Authors: Johannes Bjerva, Wouter Kouw, Isabelle Augenstein (pp. 7440-7447)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate on three challenging tasks varying in terms of time-scales, linguistic units, and domains. These tasks show our method outperforming several strong baselines, including using all available data.
Researcher Affiliation | Academia | (1) Department of Computer Science, University of Copenhagen, Copenhagen, Denmark; (2) Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
Pseudocode | No | No section or figure labeled "Pseudocode" or "Algorithm" was found. The paper describes the method conceptually and mathematically.
Open Source Code | No | No explicit statement providing access to the authors' source code or a direct repository link for their methodology was found.
Open Datasets | Yes | We consider three tasks representing a broad selection of natural language understanding scenarios: paper acceptance prediction based on the PeerRead data set (Kang et al. 2018), Named Entity Recognition (NER) based on the Broad Twitter Corpus (Derczynski, Bontcheva, and Roberts 2016), and author stance prediction based on the RumourEval-2019 data set (Gorrell et al. 2018).
Dataset Splits | Yes | For each year, we divide the data into 80/10/10 splits for training, development, and test. (A per-year split sketch follows the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were provided.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., "PyTorch 1.9", "TensorFlow 2.0") were mentioned. Tools such as BERT, SVM, and the Adam optimiser are named, but without versioning.
Experiment Setup | Yes | The resulting contextualised representations are therefore passed to an MLP with a single hidden layer (200 hidden units, ReLU activation), before predicting NER tags. We train the MLP over 5 epochs using the Adam optimiser (Kingma and Ba 2014). (A minimal training sketch follows the table.)
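
The per-year 80/10/10 split reported under Dataset Splits can be illustrated with a short sketch. This is not the authors' code: the helper name split_per_year, the per-example "year" field, and the random seed are assumptions made only for illustration.

import random
from collections import defaultdict

def split_per_year(examples, seed=42):
    """Sketch of a per-year 80/10/10 split: group examples by year,
    then carve each year's portion into train/dev/test."""
    by_year = defaultdict(list)
    for ex in examples:                     # each example is assumed to carry a "year" field
        by_year[ex["year"]].append(ex)

    splits = {"train": [], "dev": [], "test": []}
    rng = random.Random(seed)
    for year, items in by_year.items():
        rng.shuffle(items)
        n = len(items)
        n_train, n_dev = int(0.8 * n), int(0.1 * n)
        splits["train"] += items[:n_train]
        splits["dev"] += items[n_train:n_train + n_dev]
        splits["test"] += items[n_train + n_dev:]
    return splits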
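
The Experiment Setup row can likewise be illustrated. The paper does not name a framework, so the sketch below assumes PyTorch; the class name TaggerMLP, the 768-dimensional input (BERT-sized), the 9-tag output, and the dummy data are placeholders. Only the single 200-unit ReLU hidden layer, the Adam optimiser, and the 5 training epochs come from the paper.

import torch
import torch.nn as nn

# Hypothetical tagging head matching the stated setup: one hidden layer
# of 200 units with ReLU, followed by a projection to the NER tag set.
class TaggerMLP(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=200, num_tags=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_tags),
        )

    def forward(self, x):
        return self.net(x)

# Dummy stand-in data: 32 token vectors with 9 possible tags.
features = torch.randn(32, 768)
tags = torch.randint(0, 9, (32,))
train_loader = [(features, tags)]

model = TaggerMLP()
optimiser = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # 5 epochs, as reported
    for batch_features, batch_tags in train_loader:
        optimiser.zero_grad()
        loss = loss_fn(model(batch_features), batch_tags)
        loss.backward()
        optimiser.step()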