Learning Shuffle Ideals Under Restricted Distributions
Author: Dongqu Chen
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the empirical direction, we propose a heuristic algorithm for learning shuffle ideals from given labeled strings under general unrestricted distributions. Experiments demonstrate the advantage of our algorithm in both efficiency and accuracy. |
| Researcher Affiliation | Academia | Dongqu Chen, Department of Computer Science, Yale University, dongqu.chen@yale.edu |
| Pseudocode | No | The paper describes algorithms in prose, but does not contain explicitly labeled pseudocode or algorithm blocks with structured formatting. |
| Open Source Code | No | The paper makes no explicit statement about releasing source code, nor does it provide links to a code repository or indicate that code for the described methodology is available in supplementary materials. |
| Open Datasets | Yes | We conducted a series of experiments on a real-world dataset [4] with string length n as a variable. |
| Dataset Splits | No | The paper mentions a 'training sample set of size N' and discusses train, validation, and test sets in general terms, but does not provide specific split percentages, sample counts, or a clear methodology for the data partitioning used in its experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper discusses different algorithmic approaches but does not provide specific names and version numbers for software dependencies, libraries, or solvers used in the experiments. |
| Experiment Setup | No | As this is a theoretical paper, we defer the details on the experiments to Appendix D, including the experiment setup and figures of detailed experiment results. |