How to Train Your Agent to Read and Write

Authors: Li Liu, Mengge He, Guanghui Xu, Mingkui Tan, Qi Wu (AAAI 2021, pp. 13397-13405)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our DRAW network outperforms considered baselines and several state-of-the-art methods on the AGENDA and M-AGENDA datasets.
Researcher Affiliation | Academia | Li Liu,1,2 Mengge He,1 Guanghui Xu,1 Mingkui Tan,1,4 Qi Wu3 — 1 School of Software Engineering, South China University of Technology; 2 Pazhou Laboratory; 3 University of Adelaide; 4 Key Laboratory of Big Data and Intelligent Robot, Ministry of Education
Pseudocode | No | The paper describes the proposed method in detail with equations and textual explanations but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block formatted like code.
Open Source Code | Yes | Our code and supplementary are released at https://github.com/menggehe/DRAW.
Open Datasets | Yes | AGENDA dataset. AGENDA is one of the most popular KGs-to-text datasets; it contains 40,000 sample pairs collected from the proceedings of 12 top AI conferences.
Dataset Splits | No | The paper mentions using the AGENDA dataset and creating the M-AGENDA dataset but does not provide the specific train/validation/test split percentages or sample counts required for reproduction.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU models, or memory specifications.
Software Dependencies | No | The paper states 'We implement our method with PyTorch' but does not specify the version of PyTorch or any other software dependencies.
Experiment Setup | Yes | For the Reader, we first use TransE (Bordes et al. 2013) to train entity and relation embeddings. We then aggregate information passed from a 2-hop neighborhood to update the embedding of each node. Following (Nathani et al. 2019), we use Adam optimization with an initial learning rate of 0.1. For the Writer, we pre-train for 30 epochs with early stopping. Following (Ribeiro et al. 2020), we use Adam optimization with an initial learning rate of 0.5. To ensure generation quality, we set the maximum generation length to 430. For the Reviewer, we pre-train the adversarial module with SGD optimization and an initial learning rate of 0.001. Writer-Reviewer obtains the best results with λAR = λMR = 2. We set the trade-off parameter λRL = 1.
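The hyperparameters reported in the Experiment Setup row can be collected into a single reproduction config. The sketch below is ours, not the authors' code: the key names are hypothetical, but every value (optimizers, learning rates, epochs, generation length, λ weights) comes from the quoted setup.

```python
# Hedged summary of the DRAW training setup reported in the paper.
# Key names are our own invention; values are taken from the text.
DRAW_CONFIG = {
    "reader": {
        "embedding_init": "TransE",   # Bordes et al. 2013
        "neighborhood_hops": 2,       # aggregate a 2-hop neighborhood
        "optimizer": "Adam",
        "lr": 0.1,                    # following Nathani et al. 2019
    },
    "writer": {
        "pretrain_epochs": 30,        # with early stopping
        "optimizer": "Adam",
        "lr": 0.5,                    # following Ribeiro et al. 2020
        "max_generation_length": 430,
    },
    "reviewer": {                     # adversarial module
        "optimizer": "SGD",
        "lr": 0.001,
    },
    "lambda_AR": 2,  # adversarial reward weight
    "lambda_MR": 2,  # matching reward weight
    "lambda_RL": 1,  # reinforcement-learning trade-off
}
```

Note the unusually large initial learning rates for the Reader (0.1) and Writer (0.5); the paper attributes these choices to prior work rather than its own tuning, so a reproduction should follow the cited setups before adjusting them.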