FlashNormalize: Programming by Examples for Text Normalization
Authors: Dileep Kini, Sumit Gulwani
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that we are able to effectively learn desired programs for a variety of normalization tasks. ... Table 4: Experimental results for learning number-translators. ... Table 5: Experiments for learning other normalization tasks. |
| Researcher Affiliation | Collaboration | Dileep Kini, University of Illinois (UIUC), Urbana, Illinois 61820; Sumit Gulwani, Microsoft Research, Redmond, Washington 98052 |
| Pseudocode | Yes | In Algorithm 1 we provide the pseudocode for the procedure Learn Program which learns decision lists for a given set of examples. ... Algorithm 2: Learning conjunctive predicates that pick most examples in P and discard all in N. ... Algorithm 3: Enumerating subsets of examples which are C-consistent. (An illustrative sketch of how these pieces fit together appears after the table.) |
| Open Source Code | No | The paper does not provide an explicit statement or link for the release of open-source code for the described methodology. |
| Open Datasets | No | The paper mentions generating training examples with an active learning method, but it does not provide concrete access information (link, DOI, citation) to a publicly available dataset used for training or testing. |
| Dataset Splits | No | The paper discusses 'training examples' and 'test examples' but does not specify a 'validation' split or provide specific percentages or counts for any data splits to reproduce the partitioning. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for reproducibility. |
| Experiment Setup | No | The paper describes an active learning strategy for generating examples and the overall synthesis algorithms, but it does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes) or system-level training settings. |
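
As a reading aid for the "Pseudocode" row above, the following is a minimal, hypothetical sketch of how a greedy decision-list learner of the kind the paper describes could be assembled. The helpers `learn_consistent_subset` and `learn_predicate` are placeholders standing in for the roles of the paper's Algorithm 3 (finding a subset of examples consistent with one transformation) and Algorithm 2 (learning a conjunctive predicate that picks that subset and discards the rest); this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a greedy decision-list learner in the spirit of
# the paper's Algorithm 1. The helper functions passed in as arguments are
# hypothetical stand-ins for the paper's Algorithms 3 and 2.

def learn_decision_list(examples, learn_consistent_subset, learn_predicate):
    """examples: list of (input_string, output_string) pairs.
    Returns an ordered list of (predicate, program) branches."""
    remaining = list(examples)
    decision_list = []
    while remaining:
        # Find one transformation program together with the subset of the
        # remaining examples it is consistent with (cf. Algorithm 3).
        picked, program = learn_consistent_subset(remaining)
        if not picked:
            raise ValueError("no transformation covers any remaining example")
        rest = [e for e in remaining if e not in picked]
        # Learn a conjunctive predicate that accepts the picked examples (P)
        # and rejects those left for later branches (N) (cf. Algorithm 2).
        predicate = learn_predicate(positives=picked, negatives=rest)
        decision_list.append((predicate, program))
        remaining = rest
    return decision_list


def run_decision_list(decision_list, input_string):
    """Apply the program of the first branch whose predicate accepts the input."""
    for predicate, program in decision_list:
        if predicate(input_string):
            return program(input_string)
    return None  # no branch applies
```

In the paper these pieces are instantiated over a domain-specific language of string transformations and token-based predicates; the sketch above only shows how such a greedy loop could be wired together.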