ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

Authors: Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, Yejin Choi

AAAI 2019, pp. 3027-3035 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Researcher Affiliation | Collaboration | Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, USA; Allen Institute for Artificial Intelligence, Seattle, USA
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link for the ATOMIC atlas itself (https://homes.cs.washington.edu/~msap/atomic/), but it does not explicitly state that the source code for the methodology or models described in the paper is available at this link or elsewhere.
Open Datasets | Yes | We present ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. ... available to download or browse at https://homes.cs.washington.edu/~msap/atomic/.
Dataset Splits | Yes | To test our models, we split seed events into training, validation, and test sets (80%/10%/10%), ensuring that events that share the same first two content words are in the same set. (A split sketch follows the table.)
Hardware Specification | No | The paper states that "Experiments were conducted on the Allen AI Beaker platform," but it does not provide specific hardware details such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper states, "All our experiments were run using AllenNLP (Gardner et al. 2017)," but it does not specify version numbers for AllenNLP or any other key software components (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | During experiments, we use the 300-dimensional GloVe pretrained embeddings... augmented these embeddings with 1024-dimensional ELMo pre-trained embeddings... We set the encoder and decoder hidden sizes to h_enc = 100 and h_dec = 100. (A model sketch follows the table.)
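
To make the Dataset Splits row concrete, here is a minimal Python sketch of the grouped 80%/10%/10% split. The stopword list used to approximate "content words" is a hypothetical stand-in; the paper does not specify its tokenization or stopword filtering.

```python
import random
from collections import defaultdict

# Hypothetical stopword list: the paper does not define "content words",
# so this filter is an illustrative assumption.
STOPWORDS = {"a", "an", "the", "to", "of", "is", "personx", "persony"}

def split_events(events, seed=0):
    """Split seed events 80%/10%/10%, keeping all events that share
    the same first two content words in the same set."""
    groups = defaultdict(list)
    for event in events:
        content = [w for w in event.lower().split() if w not in STOPWORDS]
        groups[tuple(content[:2])].append(event)

    keys = sorted(groups)
    random.Random(seed).shuffle(keys)  # shuffle groups, not single events

    n_train = int(0.8 * len(keys))
    n_dev = int(0.9 * len(keys))

    def flatten(ks):
        return [e for k in ks for e in groups[k]]

    return (flatten(keys[:n_train]),
            flatten(keys[n_train:n_dev]),
            flatten(keys[n_dev:]))
```

Splitting over group keys rather than over individual events is what prevents families of near-identical events from straddling the train/test boundary.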
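
The Experiment Setup row pins down the embedding and hidden dimensions. Below is a minimal PyTorch sketch consistent with those numbers; the GRU cells, the output projection, and the assumption that 300 + 1024 = 1324-dimensional GloVe+ELMo vectors are precomputed are illustrative choices, not confirmed details of the paper's architecture.

```python
import torch.nn as nn

class Seq2SeqSketch(nn.Module):
    """Encoder-decoder sketch with h_enc = h_dec = 100 and inputs of
    size 300 (GloVe) + 1024 (ELMo) = 1324, per the Experiment Setup row."""

    def __init__(self, vocab_size, glove_dim=300, elmo_dim=1024,
                 h_enc=100, h_dec=100):
        super().__init__()
        in_dim = glove_dim + elmo_dim  # concatenated word vectors
        self.encoder = nn.GRU(in_dim, h_enc, batch_first=True)
        self.decoder = nn.GRU(in_dim, h_dec, batch_first=True)
        self.out = nn.Linear(h_dec, vocab_size)

    def forward(self, src_emb, tgt_emb):
        # src_emb, tgt_emb: (batch, seq_len, 1324) precomputed
        # GloVe+ELMo vectors; embedding lookup is omitted here.
        _, h = self.encoder(src_emb)           # final encoder hidden state
        dec_out, _ = self.decoder(tgt_emb, h)  # valid because h_enc == h_dec
        return self.out(dec_out)               # per-step vocabulary logits
```

Handing the encoder's final hidden state straight to the decoder only works here because the two hidden sizes match (both 100), which is consistent with the sizes quoted from the paper.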