Natural Language Decomposition and Interpretation of Complex Utterances

Authors: Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, Benjamin Van Durme

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that the proposed approach enables the interpretation of complex utterances with almost no complex training data, while outperforming standard few-shot prompting approaches.
Researcher Affiliation | Industry | Harsh Jhamtani, Hao Fang, Patrick Xia, Eran Levy, Jacob Andreas, and Benjamin Van Durme, Microsoft. {hjhamtani,hafang,patrickxia,erlevy,jaandrea,ben.vandurme}@microsoft.com
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Code and DeCU dataset will be available at https://github.com/microsoft/decomposition-of-complex-utterances
Open Datasets | Yes | To study such multi-step complex intent decomposition, we introduce a new dataset we call DeCU (Decomposition of Complex Utterances). Code and DeCU dataset will be available at https://github.com/microsoft/decomposition-of-complex-utterances
Dataset Splits | No | The paper specifies 'ten complex utterances... to be used as training data' and a 'test set consisting of the remaining 200 complex utterances', but it does not explicitly mention a separate validation split.
Hardware Specification | No | The paper mentions using 'OpenAI's text-davinci-003 model', 'GPT-4 (gpt-4-32k)', and 'LLAMA-2-70B' as the LLMs but does not specify the hardware used to run these models or the experiments.
Software Dependencies | No | The paper mentions using specific LLM models like 'text-davinci-003' and 'GPT-4' but does not list other software dependencies with specific version numbers (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | The model is prompted with K = 10 example decompositions... We use a maximum of M = 25 additional elementary utterances... We use OpenAI's text-davinci-003 model as the LLM...
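
The sketch below illustrates how the few-shot setup described in the "Experiment Setup" row could be wired up. It is not the authors' released code: the exemplar text, the prompt format, the decoding parameters, and the use of the legacy openai Python SDK (<1.0) Completion API are assumptions for illustration; only the K = 10 exemplars, the 10/200 train-test split, and the text-davinci-003 model name come from the paper.

```python
# Minimal sketch of the prompting setup, assuming the legacy openai SDK (< 1.0).
# NOT the authors' released code; exemplars and decoding parameters are placeholders.
import openai

# Hypothetical placeholder exemplars; in practice these would be the ten complex
# utterances reserved as training data (the remaining 200 form the test set).
TRAIN_EXAMPLES = [
    ("Complex utterance 1 ...", "1. elementary utterance ...\n2. elementary utterance ..."),
    # ... 9 more (complex utterance, decomposition) pairs, for K = 10 in total
]

def build_prompt(complex_utterance: str) -> str:
    """Concatenate the K exemplar decompositions, then append the new utterance."""
    exemplars = "\n\n".join(
        f"Complex utterance: {u}\nDecomposition:\n{d}" for u, d in TRAIN_EXAMPLES
    )
    return f"{exemplars}\n\nComplex utterance: {complex_utterance}\nDecomposition:\n"

response = openai.Completion.create(
    model="text-davinci-003",  # LLM named in the experiment setup
    prompt=build_prompt("Your complex utterance here."),
    max_tokens=256,            # assumed decoding budget (not stated in this section)
    temperature=0.0,           # assumed greedy decoding
)
print(response["choices"][0]["text"])
```

The GPT-4 (gpt-4-32k) and LLAMA-2-70B results reported in the paper would require a different API or serving stack than the Completion call shown here.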