Constraints First: A New MDD-based Model to Generate Sentences Under Constraints
Authors: Alexandre Bonlarron, Aurélie Calabrèse, Pierre Kornprobst, Jean-Charles Régin
IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results are presented in Sec. 4, in which we show the potential of our method. Lastly, in Sec. 5, we share additional thoughts on this work and give some perspectives on future investigations. Finally, we conclude. |
| Researcher Affiliation | Academia | 1. Université Côte d'Azur, Inria, France; 2. Université Aix-Marseille, CNRS, LPC, France; 3. Université Côte d'Azur, CNRS, I3S, France |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. The methods are described in narrative text and with diagrams. |
| Open Source Code | No | The model described in Sec. 3 was implemented in Java 17. The code is available upon request. |
| Open Datasets | No | For French, to build our corpus of n-grams, we started with 443 books belonging to the youth category. For English, to constitute our corpus, we built a set of 75 books from the fiction category. The paper does not provide specific access information (link, DOI, repository, or specific titles with clear attribution) for the exact corpus of books used to build the n-grams (a hedged sketch of such an n-gram counting step appears after this table). |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits for their n-gram corpus or the generated sentences. It describes an evaluation with sets of sentences but not formal data splits for model training/validation. |
| Hardware Specification | Yes | The experiments were performed on a machine using an Intel(R) Xeon(R) W-2175 CPU @ 2.50GHz with 256 GB of RAM and running under Ubuntu 18.04. |
| Software Dependencies | Yes | The model described in Sec. 3 was implemented in Java 17. The sentence selection task was performed with models without any fine-tuning, using either OpenAI GPT-2 [Radford et al., 2019] for English sentences or a French-trained GPT-2 [Simoulin and Crabbé, 2021]. These models are available from the Hugging Face library [Wolf et al., 2020] (see the scoring sketch after this table). |
| Experiment Setup | No | The paper describes its general experimental conditions and the process of n-gram and sentence generation, but it does not provide specific hyperparameters (e.g., learning rate, batch size, epochs, optimizer settings) for its MDD-based model, nor details of how the pre-trained GPT-2 model was applied beyond stating that it was used without fine-tuning. |
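The corpus itself is not released, but the n-gram counting step the paper describes is standard. Below is a minimal sketch, assuming a naive whitespace tokenization, placeholder file names, and 4-grams as an illustrative order; none of these specifics come from the paper.

```python
from collections import Counter
from itertools import islice

def extract_ngrams(tokens, n):
    """Yield successive n-grams (tuples of n consecutive tokens)."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

# Placeholder corpus files; the actual 443 French / 75 English books
# used by the authors are not publicly listed.
counts = Counter()
for path in ["book1.txt", "book2.txt"]:
    with open(path, encoding="utf-8") as f:
        tokens = f.read().split()  # naive whitespace tokenization
    counts.update(extract_ngrams(tokens, 4))  # 4-grams chosen for illustration

print(counts.most_common(5))
```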
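The paper states that sentence selection used pretrained GPT-2 models from Hugging Face without any fine-tuning, but it does not spell out the scoring procedure. A common approach is to rank candidate sentences by language-model perplexity; the sketch below assumes that criterion. The checkpoint name `gpt2` stands in for the English model, and a French GPT-2 checkpoint would be substituted for French sentences.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def gpt2_perplexity(sentence: str, model, tokenizer) -> float:
    """Perplexity GPT-2 assigns to one sentence (lower = more fluent)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean token cross-entropy.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # swap in a French checkpoint for French text
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

candidates = [
    "The quick brown fox jumps over the lazy dog.",
    "Dog lazy the over jumps fox brown quick the.",
]
# Rank candidate sentences by fluency, keeping the lowest-perplexity ones.
ranked = sorted(candidates, key=lambda s: gpt2_perplexity(s, model, tokenizer))
print(ranked[0])
```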