Correct and Optimal: The Regular Expression Inference Challenge
Authors: Mojtaba Valizadeh, Philip John Gorinski, Ignacio Iacobacci, Martin Berger
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Building on this advance, we generate and publish the first large-scale datasets for REI, and devise and evaluate several initial heuristic and machine learning baselines. We invite the community to participate and explore ML methods that learn to solve REI problems. |
| Researcher Affiliation | Collaboration | Mojtaba Valizadeh (1), Philip John Gorinski (2), Ignacio Iacobacci (2) and Martin Berger (1,3); (1) University of Sussex, (2) Huawei Noah's Ark Lab, London, (3) Montanarius Ltd |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | All data and starter code to recreate our baselines is provided via CodaLab [Pavao et al., 2022] on the REIC site: https://codalab.lisn.upsaclay.fr/competitions/15096 |
| Open Datasets | Yes | All data and starter code to recreate our baselines is provided via CodaLab [Pavao et al., 2022] on the REIC site: https://codalab.lisn.upsaclay.fr/competitions/15096 |
| Dataset Splits | Yes | When splitting the four generated datasets into training and test data, we aim for a 90/10 split. [...] During training, we randomly split the combined training data into train and validation sets in a 90/10 split. |
| Hardware Specification | No | The paper mentions "implemented on GPUs" and "GPU-accelerated REI solver" but does not provide specific models or details about the hardware used for their experiments. |
| Software Dependencies | No | The paper states "All models are implemented in the Hugging Face transformers framework and use the GPT-2 architecture with a total of 300M parameters." While it mentions the framework and architecture, it does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | No | The paper states "all training hyperparameters are given in the technical appendix." It does not provide specific hyperparameters or detailed system-level training settings within the main text. |
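The Dataset Splits row above quotes two nested 90/10 splits: each generated dataset is split 90/10 into training and test data, and the combined training data is then split 90/10 again into train and validation sets. A minimal sketch of such a split is shown below; the function name, seed, and use of Python's standard `random` module are illustrative assumptions, not the authors' actual pipeline.

```python
import random

def split_90_10(items, seed=0):
    """Randomly split a sequence into ~90% and ~10% partitions,
    mirroring the 90/10 splits described in the paper."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.9)
    return shuffled[:cut], shuffled[cut:]

# First split: dataset -> train / test
train_data, test_data = split_90_10(range(1000))

# Second split: combined training data -> train / validation
train_set, val_set = split_90_10(train_data, seed=1)
```

Applied to 1000 instances, this yields 900 train and 100 test examples, with the 900 further divided into 810 train and 90 validation examples.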