Doubly-Competitive Distribution Estimation
Authors: Yi Hao, Alon Orlitsky
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quoted from Section 6 (Numerical Experiments): "The estimator is easy to implement. In Section 1 of the supplemental material, we present experimental results on a variety of distributions, and show that the proposed estimator indeed outperforms the improved Good-Turing estimator in (Orlitsky & Suresh, 2015)." An illustrative sketch of this style of comparison follows the table. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, University of California, San Diego, USA. |
| Pseudocode | No | The paper describes the estimator's construction with mathematical formulas and rules in Section 5, but does not present it as structured pseudocode or in an algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. |
| Open Datasets | No | The paper discusses distribution estimation from samples but does not mention the use of any specific publicly available or open datasets by name, nor does it provide access information (links, DOIs, or formal citations) for any dataset used in its numerical experiments. |
| Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., percentages or sample counts for training, validation, or testing), nor does it describe a cross-validation setup for its numerical experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run its numerical experiments (e.g., GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library or solver names with version numbers, that would be needed to replicate the numerical experiments. |
| Experiment Setup | No | The paper briefly mentions 'numerical experiments' but does not provide specific experimental setup details such as hyperparameters, training configurations, or system-level settings. |
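The comparison quoted in the Research Type row can be mimicked in a few lines. The sketch below is a minimal, hedged illustration of that style of numerical experiment, assuming a Zipf-like synthetic distribution, a KL loss, and a deliberately simplified Good-Turing-style smoother; it does not implement the paper's doubly-competitive estimator or the improved Good-Turing estimator of Orlitsky & Suresh (2015), whose actual experimental details the paper defers to Section 6 and the supplemental material.

```python
# Illustrative sketch only -- NOT the paper's doubly-competitive estimator.
# It reproduces the *shape* of the comparison quoted above: an empirical
# (maximum-likelihood) estimator vs. a simple Good-Turing-style smoother,
# measured by KL divergence from a known synthetic distribution.
# The Zipf support, sample size, and smoothing rule are assumptions for
# illustration, not settings taken from the paper.
import numpy as np

rng = np.random.default_rng(0)


def kl(p, q, eps=1e-12):
    """KL divergence D(p || q), clipping q away from zero."""
    q = np.clip(q, eps, None)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))


def empirical(counts, n):
    """Plain empirical estimator: relative frequencies."""
    return counts / n


def good_turing_like(counts, n):
    """Very simplified Good-Turing-style smoother (illustrative only):
    the mass of symbols seen exactly once is redistributed uniformly
    over symbols never seen in the sample."""
    p = counts.astype(float) / n
    n1 = int(np.sum(counts == 1))      # number of singletons
    unseen = counts == 0
    if n1 > 0 and unseen.any():
        missing_mass = n1 / n          # Good-Turing estimate of unseen mass
        p *= 1.0 - missing_mass
        p[unseen] = missing_mass / unseen.sum()
    return p


# Synthetic experiment: Zipf-like truth over k symbols, n i.i.d. samples.
k, n = 1000, 2000
truth = 1.0 / np.arange(1, k + 1)
truth /= truth.sum()
sample = rng.choice(k, size=n, p=truth)
counts = np.bincount(sample, minlength=k)

print("KL(truth || empirical)   =", kl(truth, empirical(counts, n)))
print("KL(truth || Good-Turing) =", kl(truth, good_turing_like(counts, n)))
```

Because the empirical estimator assigns zero probability to unseen symbols, a run of this sketch should typically show a much lower KL loss for the smoothed estimator, mirroring the qualitative point of the quoted comparison without reproducing the paper's actual results.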