Learning Knowledge Graph-based World Models of Textual Environments
Authors: Prithviraj Ammanabrolu, Mark Riedl
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A zero-shot ablation study on never-before-seen textual worlds shows that our methodology significantly outperforms existing textual world modeling techniques as well as the importance of each of our contributions. |
| Researcher Affiliation | Academia | Prithviraj Ammanabrolu, School of Interactive Computing, Georgia Institute of Technology (raj.ammanabrolu@gatech.edu); Mark O. Riedl, School of Interactive Computing, Georgia Institute of Technology (riedl@cc.gatech.edu) |
| Pseudocode | No | The paper includes architectural diagrams (Figure 2, Figure 3) and mathematical formulations of loss functions, but no explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper links to the JerichoWorld Dataset (https://github.com/JerichoWorld/JerichoWorld) but provides no link to, or explicit statement about, open-sourcing the Worldformer model's code. |
| Open Datasets | Yes (see the loading sketch after this table) | Dataset. We use the JerichoWorld Dataset [4] (https://github.com/JerichoWorld/JerichoWorld). |
| Dataset Splits | No | The paper mentions 'training data' and a 'test set' with specific instance counts, but does not state whether a validation split exists, how large it is, or how it was created. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components and models such as BERT, GPT-2, ALBERT, OpenIE, and WordNet, but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | All sequence models use a fixed graph vocabulary of size 7002... Additional details and hyperparameters for the models are found in Appendix A.2. |
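The Open Datasets row points to the JerichoWorld repository linked in the paper. As a minimal sketch of how a reproducer might load the released data, the snippet below assumes the splits ship as JSON files in a local checkout; the directory layout, file names, and example schema are assumptions for illustration, not details confirmed by the paper.

```python
import json
from pathlib import Path

# Assumed local checkout of https://github.com/JerichoWorld/JerichoWorld;
# the directory layout and file names below are illustrative guesses.
DATA_DIR = Path("JerichoWorld/data")

def load_split(name: str):
    """Load one dataset split (e.g. 'train' or 'test') from a JSON file."""
    with open(DATA_DIR / f"{name}.json", encoding="utf-8") as f:
        return json.load(f)

if __name__ == "__main__":
    train = load_split("train")
    print(f"Loaded {len(train)} training instances")
    # Inspect the first example's structure instead of assuming field names,
    # since the quoted text does not document the dataset schema.
    first = train[0] if isinstance(train, list) else next(iter(train.values()))
    keys = list(first)[:10] if isinstance(first, dict) else type(first).__name__
    print("Example structure:", keys)
```

Inspecting the first example's keys at load time, rather than hard-coding field names, keeps the sketch robust to whatever schema the released files actually use.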