On Learning Ising Models under Huber's Contamination Model
Authors: Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We corroborate our theoretical results by simulations. |
| Researcher Affiliation | Academia | Machine Learning Department and Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, PA 15213 |
| Pseudocode | Yes | Algorithm 1: Robust1DMean (robust univariate mean estimator) |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | No | The paper describes synthetic experiments where graphs are constructed with varying parameters, but it does not use or provide access information for a publicly available or open dataset. |
| Dataset Splits | No | The paper describes synthetic experiments and simulations, but it does not specify train, validation, and test dataset splits as typically done for machine learning experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library names, solvers, frameworks). |
| Experiment Setup | Yes | We generate our plots in the following manner: first we construct two graphs with the same structure, drawn either from G^clique_{p,d} or G^star_{p,d}. We instantiate the first graph with parameters θ^(1) with model width ω, and then vary the parameters of the second graph as θ^(2) = θ^(1) · i/25 for i ranging from 1 to 50. We vary p ∈ {12, 15}, d ∈ {3 : 8 : 1}, and ω ∈ {0.2 : 1.0 : 0.2} ∪ {1.5 : 10 : 0.5}, where {a : b : c} denotes values between a and b (both inclusive) with consecutive values differing by c. |
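The parameter sweep described in the experiment-setup cell can be sketched in a few lines. This is a minimal illustration, not the authors' code: the `{a : b : c}` helper mirrors the paper's inclusive-range notation, and `theta1` is a hypothetical placeholder parameter vector used only to show the θ^(2) = θ^(1) · i/25 scaling.

```python
import numpy as np

def inclusive_range(a, b, c):
    # Values between a and b (both inclusive) with consecutive values
    # differing by c, matching the paper's {a : b : c} notation.
    n = int(round((b - a) / c)) + 1
    return [round(a + k * c, 10) for k in range(n)]

# Parameter grid from the experiment description.
p_values = [12, 15]
d_values = inclusive_range(3, 8, 1)                                  # {3 : 8 : 1}
omega_values = inclusive_range(0.2, 1.0, 0.2) + inclusive_range(1.5, 10, 0.5)

# Hypothetical illustration of the parameter scaling: the second graph's
# parameters are the first graph's scaled by i/25 for i = 1, ..., 50,
# so at i = 25 the two graphs coincide.
theta1 = np.full(5, 0.2)                   # placeholder parameter vector
theta2_grid = [theta1 * (i / 25) for i in range(1, 51)]
```

With this setup, `theta2_grid` sweeps from 0.04·θ^(1) up to 2·θ^(1), passing through θ^(1) itself at i = 25, which is the regime the plots vary over.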