On the Information Theoretic Limits of Learning Ising Models
Authors: Rashish Tandon, Karthikeyan Shanmugam, Pradeep K Ravikumar, Alexandros G Dimakis
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We provide a general framework for computing lower bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples. While there have been recent results for specific graph classes, these involve fairly extensive technical arguments that are specialized to each graph class. In contrast, we isolate two key graph-structural ingredients that can then be used to specify sample complexity lower bounds. |
| Researcher Affiliation | Academia | Karthikeyan Shanmugam¹, Rashish Tandon², Alexandros G. Dimakis¹, Pradeep Ravikumar²; ¹Department of Electrical and Computer Engineering, ²Department of Computer Science, The University of Texas at Austin, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information for source code related to the methodology described. |
| Open Datasets | No | The paper is theoretical and focuses on lower bounds; it neither uses nor provides access information for any training datasets. |
| Dataset Splits | No | The paper is theoretical and involves no experimental data, so no train/validation/test split information is provided. |
| Hardware Specification | No | The work is purely theoretical and reports no experiments, so no hardware specifications are mentioned. |
| Software Dependencies | No | The work is purely theoretical and reports no experiments, so no software dependencies or version numbers are mentioned. |
| Experiment Setup | No | The work is purely theoretical and reports no experiments, so no setup details such as hyperparameters or training configurations are provided. |
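To make the paper's setting concrete, the sketch below illustrates (not reproduces) the two objects the framework reasons about: drawing approximate i.i.d. samples from a zero-field Ising model via Gibbs sampling, and a generic Fano-style sample complexity lower bound. The function names, the uniform coupling `theta`, and the particular bound formula are illustrative assumptions, not constructions from the paper.

```python
import math
import random

def ising_gibbs_sample(adj, theta, n_steps=1000, rng=None):
    """Draw one approximate sample from a zero-field Ising model
    via single-site Gibbs sampling (illustrative sketch).

    adj:   dict mapping node -> set of neighbours (undirected graph)
    theta: uniform edge coupling strength (a simplifying assumption)
    """
    rng = rng or random.Random(0)
    nodes = list(adj)
    x = {v: rng.choice([-1, 1]) for v in nodes}  # random spin init
    for _ in range(n_steps):
        v = rng.choice(nodes)
        # Local field on v from its neighbours' current spins.
        h = theta * sum(x[u] for u in adj[v])
        # Conditional probability of spin +1 given the neighbourhood.
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * h))
        x[v] = 1 if rng.random() < p_plus else -1
    return x

def fano_lower_bound(num_graphs, kl_max):
    """Generic Fano-style sample complexity lower bound (sketch):
    distinguishing num_graphs candidate models whose pairwise
    per-sample KL divergence is at most kl_max requires roughly
    this many i.i.d. samples for better-than-even success."""
    return (math.log(num_graphs) - math.log(2)) / kl_max
```

For example, on a single-edge graph `{0: {1}, 1: {0}}` the sampler returns one `{-1, +1}` spin per node, and the Fano bound grows logarithmically with the size of the candidate graph family, matching the qualitative message of the paper's framework.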