Instance Based Approximations to Profile Maximum Likelihood
Authors: Nima Anari, Moses Charikar, Kirankumar Shiragur, Aaron Sidford
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide preliminary experiments in which we perform entropy estimation using the PseudoPML approach implemented using our simpler rounding algorithm. Our results match other state-of-the-art estimators for entropy, some of which are property specific. (A schematic sketch of profile computation and plug-in entropy estimation appears after the table.) |
| Researcher Affiliation | Academia | Nima Anari (Stanford University, anari@stanford.edu); Moses Charikar (Stanford University, moses@cs.stanford.edu); Kirankumar Shiragur (Stanford University, shiragur@stanford.edu); Aaron Sidford (Stanford University, sidford@stanford.edu) |
| Pseudocode | Yes | Algorithm 1 Approximate PML(φ, R) and Algorithm 2 Approximate PML2(φ, R) are presented with clear, numbered steps. |
| Open Source Code | No | The paper mentions using external tools (CVX [GB14] with the CVXQUAD package [FSP17]) for its implementation, but it does not state that its own source code is released, nor does it link to a repository for the described methodology. |
| Open Datasets | No | The paper refers to generating data from underlying distributions for its entropy-estimation experiments rather than using or linking to any publicly available datasets. |
| Dataset Splits | No | The paper discusses 'sample size' in the context of experiments but does not provide specific details on train/validation/test dataset splits or cross-validation setups. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | In our implementation we use CVX [GB14] with package CVXQUAD [FSP17] to solve the convex program. However, specific version numbers for these software packages are not provided. (A hedged Python sketch of a comparable convex-program workflow appears below the table.) |
| Experiment Setup | No | The paper describes algorithms and theoretical guarantees, but it does not provide specific experimental setup details such as hyperparameter values, optimizer settings, or training schedules. |
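The PseudoPML entropy-estimation pipeline referenced in the Research Type row operates on the *profile* of a sample (the multiset of symbol multiplicities) and evaluates entropy as the plug-in value of an approximate PML distribution. Below is a minimal Python sketch of those two ingredients only; the function names are illustrative, and the paper's actual rounding algorithm and PML solver are not reproduced here.

```python
from collections import Counter
import math

def profile(sample):
    """Profile of a sample: maps each multiplicity j to the number of
    distinct symbols that appear exactly j times (the PML sufficient statistic)."""
    multiplicities = Counter(sample)          # symbol -> multiplicity
    return Counter(multiplicities.values())   # multiplicity -> count of symbols

def plugin_entropy(dist):
    """Shannon entropy (in nats) of a distribution given as {symbol: probability}.
    In the PseudoPML approach this would be applied to the approximate PML
    distribution returned by the rounding algorithm, not to raw frequencies."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

sample = ["a", "b", "a", "c", "b", "a", "d"]
print(profile(sample))  # Counter({1: 2, 2: 1, 3: 1}): two singletons, one pair, one triple
print(plugin_entropy({"a": 0.5, "b": 0.25, "c": 0.25}))  # ~1.0397 nats
```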
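The Software Dependencies row notes that the paper's implementation solves its convex program with MATLAB's CVX [GB14] plus CVXQUAD [FSP17]. As a stand-in for that workflow, the sketch below solves a small, generic entropy-maximization program with the Python library cvxpy; it illustrates the declare-and-solve pattern only and is not the paper's PML relaxation.

```python
import cvxpy as cp
import numpy as np

k = 5                                  # hypothetical support size
values = np.arange(1, k + 1)           # hypothetical symbol values 1..5
p = cp.Variable(k, nonneg=True)        # probability vector to optimize

# Maximize Shannon entropy subject to simplex and a mean constraint;
# cp.entr(x) is elementwise -x*log(x), so the objective is concave.
objective = cp.Maximize(cp.sum(cp.entr(p)))
constraints = [cp.sum(p) == 1, values @ p == 3.0]
cp.Problem(objective, constraints).solve()

print(np.round(p.value, 4))            # uniform [0.2]*5 attains the maximum here
```

CVXQUAD exists because CVX's native handling of entropy-type atoms via successive approximation can be slow or numerically fragile; cvxpy's exponential-cone solvers play the analogous role in this sketch.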