Learning Compositional Tasks from Language Instructions

Authors: Lajanugen Logeswaran, Wilka Carvalho, Honglak Lee

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From "Experiments: Tasks and Dataset": "We use the AI2Thor (Kolve et al. 2017) environment as a testbed for our experiments."
Researcher Affiliation | Collaboration | Lajanugen Logeswaran (1), Wilka Carvalho (2), Honglak Lee (1,2); (1) LG AI Research, (2) University of Michigan, Ann Arbor.
Pseudocode | No | The paper includes a diagram (Figure 2) illustrating the approach but does not contain explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper contains neither an explicit statement about releasing source code for the described methodology nor a link to a code repository.
Open Datasets | No | The paper states, "We use Amazon Mechanical Turk to collect natural language descriptions of tasks for training and evaluation," but provides no concrete access information (link, DOI, or author/year citation) for the newly constructed dataset.
Dataset Splits | No | The paper states, "Four text descriptions of each task type are part of the training set and the remaining descriptions (i.e., 1 per task type) are part of the test set," but does not mention or detail a validation split. (A sketch of this 4/1 split follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions using "deep Q-learning" and the "double DQN algorithm" but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers. (A sketch of the double DQN target follows the table.)
Experiment Setup | Yes | Word embeddings and the RNN have representation size 32. Objects are represented by embeddings of size 32 from an embedding table. The CNN observation features have size 512, and the CNN encoder has 1.7M parameters, which constitutes 90% of the overall model parameters. The MLPs in Equations (5) and (6) are single-hidden-layer MLPs with 256 hidden units and ReLU activation. (These sizes are collected in the configuration sketch below.)
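
The 4-vs-1 split per task type quoted in the Dataset Splits row lends itself to a short illustration. The sketch below is hypothetical: the paper does not say whether the held-out description is chosen randomly, and the function name and shuffling are our assumptions.

```python
import random

def split_descriptions(descriptions_by_task, seed=0):
    """descriptions_by_task maps task type -> list of 5 collected strings."""
    rng = random.Random(seed)
    train, test = {}, {}
    for task, descs in descriptions_by_task.items():
        shuffled = descs[:]         # copy so the input is not mutated
        rng.shuffle(shuffled)       # assumption: held-out description chosen at random
        train[task] = shuffled[:4]  # four descriptions per task type for training
        test[task] = shuffled[4:]   # the remaining description for testing
    return train, test
```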
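For context on the training algorithm named in the Software Dependencies row, here is a minimal sketch of the standard double DQN target (van Hasselt et al. 2016). This is the textbook update rule, not code from the paper; PyTorch and the network call signatures are assumptions.

```python
import torch

def double_dqn_target(reward, done, next_obs, online_net, target_net, gamma=0.99):
    """reward, done: (batch,) tensors; next_obs: a batch of next observations."""
    with torch.no_grad():
        # The online network picks the greedy next action ...
        next_action = online_net(next_obs).argmax(dim=1, keepdim=True)
        # ... and the target network evaluates it: the "double" decoupling
        # that reduces the overestimation bias of vanilla deep Q-learning.
        next_q = target_net(next_obs).gather(1, next_action).squeeze(1)
    # Bootstrapped target; terminal transitions keep only the reward.
    return reward + gamma * (1.0 - done.float()) * next_q
```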
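Finally, the sizes reported under Experiment Setup can be collected into a small configuration sketch. Only the dimensions come from the paper; the module choices (GRU, PyTorch) and the placeholder vocabulary sizes are assumptions.

```python
import torch.nn as nn

VOCAB_SIZE, NUM_OBJECTS = 1000, 100  # placeholder sizes; not given in the paper

word_emb = nn.Embedding(VOCAB_SIZE, 32)       # word embeddings of size 32
instr_rnn = nn.GRU(32, 32, batch_first=True)  # RNN representation size 32 (GRU assumed)
object_emb = nn.Embedding(NUM_OBJECTS, 32)    # object embedding table, size 32
# The CNN encoder (not sketched here) outputs 512-d observation features
# and holds 1.7M parameters, about 90% of the overall model.

def mlp(in_dim, out_dim):
    # Single-hidden-layer MLP with 256 units and ReLU activation,
    # matching the description of the MLPs in Equations (5) and (6).
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))
```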