Generating Interactive Worlds with Text

Authors: Angela Fan, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, Siddharth Karamcheti, Shrimai Prabhumoye, Douwe Kiela, Tim Rocktäschel, Arthur Szlam, Jason Weston (pp. 1693-1700)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this work, we investigate a machine learning approach for world creation using content from the multiplayer text adventure game environment LIGHT (Urbanek et al. 2019). ... We show that the game environments created with our approach are cohesive, diverse, and preferred by human evaluators compared to other machine learning based world construction algorithms."
Researcher Affiliation | Collaboration | Angela Fan (1,2), Jack Urbanek (1), Pratik Ringshia (1), Emily Dinan (1), Emma Qian (1), Siddharth Karamcheti (1), Shrimai Prabhumoye (1), Douwe Kiela (1), Tim Rocktäschel (1,3), Arthur Szlam (1), Jason Weston (1); affiliations: 1 Facebook AI Research, 2 LORIA, Nancy, 3 University College London
Pseudocode | No | No structured pseudocode or algorithm blocks are present. Section 2.7 is titled "Proposed Algorithm for World Generation" but describes the steps in prose rather than formatted pseudocode.
Open Source Code | No | No explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "In this work, we present a machine learning (ML) approach to creating a cohesive and interesting world built from elements of the text-based fantasy game environment LIGHT (Urbanek et al. 2019). These crowd-sourced elements, including descriptions of locations, characters, and objects, provide a rich source of supervision for learning common-sense relationships."
Dataset Splits | Yes | "We partitioned this into a training, validation, and test set such that the locations are distinct in each set (see Table 2)."

Table 2: Dataset Statistics for World Generation

Split             | Train | Valid | Test
Locations         |   914 |   109 |  110
Characters        |   529 |   305 |  305
Objects           |   359 |   318 |  256
Object Containers |   359 |   318 |  256
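The split described above keeps locations disjoint across the three sets. A minimal sketch of such a partition is shown below; the function name, split fractions, and seed are illustrative assumptions, not the authors' actual code (which, as noted, is not released):

```python
import random

def split_locations(locations, valid_frac=0.1, test_frac=0.1, seed=0):
    """Partition location ids into disjoint train/valid/test sets,
    so no location appears in more than one split."""
    locs = list(locations)
    random.Random(seed).shuffle(locs)
    n = len(locs)
    n_test = int(n * test_frac)
    n_valid = int(n * valid_frac)
    test = locs[:n_test]
    valid = locs[n_test:n_test + n_valid]
    train = locs[n_test + n_valid:]
    return train, valid, test

# Example: 1133 location ids (914 + 109 + 110 in Table 2) split roughly 80/10/10.
train, valid, test = split_locations(range(1133))
```

Because the split is by location rather than by example, characters and objects attached to a held-out location never leak into training.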
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running the experiments are provided.
Software Dependencies | No | No software dependencies with version numbers are provided. The paper mentions using Starspace, fastText, BERT-based models, and a Transformer, but does not specify their versions or the surrounding software environment.
Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. It notes that "using input dropout to prevent overfitting was crucial for good performance" but does not specify the rate.
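The input dropout mentioned in the last row can be sketched as token-level dropout applied to the model's input at training time. Everything in this sketch (the function name, the `<unk>` placeholder, the 0.2 rate) is an illustrative assumption, since the paper reports neither the mechanism's details nor the rate:

```python
import random

def input_dropout(tokens, p=0.2, unk="<unk>", rng=None):
    """Replace each input token with an <unk> placeholder with
    probability p. Applied only during training to reduce overfitting;
    at evaluation time the input is passed through unchanged (p=0)."""
    rng = rng or random.Random(0)
    return [unk if rng.random() < p else t for t in tokens]

# Training-time call on an example LIGHT-style location description:
noisy = input_dropout("a dusty tavern with a crackling hearth".split())
```

With p=0 the function is the identity, so the same code path can serve both training and evaluation.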