DiNADO: Norm-Disentangled Neurally-Decomposed Oracles for Controlling Language Models

Authors: Sidi Lu, Wenbo Zhao, Chenyang Tao, Arpit Gupta, Shanchan Wu, Tagyoung Chung, Nanyun Peng

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on formality control in machine translation and on the lexically constrained generation task CommonGen demonstrate the significance of the improvements.
Researcher Affiliation | Collaboration | Department of Computer Science, University of California, Los Angeles; Amazon AGI; Samsung Research America. Work was done while Shanchan and Sidi were working at Amazon.
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Code: https://github.com/PlusLabNLP/DiNADO
Open Datasets | Yes | Lexically constrained generation (LCG) uses the CommonGen dataset (Lin et al., 2020); formality-controlled MT uses the Fisher and CALLHOME Spanish-English Speech Translation Corpus (Post et al., 2013).
Dataset Splits | Yes | The training set consists of 32,651 unique concept sets, which serve as constraints, and a total of 67,389 annotated description sequences; the validation set contains 993 concept sets and 4,018 description sequences. For comprehensive evaluation, the dataset maintains an open leaderboard for benchmarking approaches on a withheld test set.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud instance types.
Software Dependencies | No | The paper mentions models such as GPT-2-Large and Flan-T5 but does not pin software dependencies to version numbers for the libraries or frameworks used (e.g., PyTorch 1.9, Python 3.8).
Experiment Setup | No | The main text does not report specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.