Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

A Computationally Efficient Method for Learning Exponential Family Distributions

Authors: Abhin Shah, Devavrat Shah, Gregory Wornell

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our work is theoretical in nature.
Researcher Affiliation | Academia | Abhin Shah (MIT), Devavrat Shah (MIT), Gregory W. Wornell (MIT)
Pseudocode | Yes | Algorithm 1: Projected Gradient Descent (a generic illustrative sketch follows after this table)
Open Source Code | No | The paper states that it is theoretical in nature and does not conduct experiments. Therefore, it does not provide open-source code for its methodology.
Open Datasets | No | The paper states that it is theoretical in nature and does not conduct experiments. It does not mention using any datasets, public or otherwise, for training or evaluation.
Dataset Splits | No | The paper states that it is theoretical in nature and does not conduct experiments. It does not provide any training/test/validation dataset splits.
Hardware Specification | No | The paper states that it is theoretical in nature and does not conduct experiments. Therefore, it does not describe any specific hardware used.
Software Dependencies | No | The paper states that it is theoretical in nature and does not conduct experiments. Therefore, it does not provide specific software dependencies or version numbers.
Experiment Setup | No | The paper states that it is theoretical in nature and does not conduct experiments. Therefore, it does not provide specific experimental setup details like hyperparameters or training settings.
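
The paper's Algorithm 1 is a projected gradient descent procedure. For readers unfamiliar with the scheme, the following is a minimal generic sketch of projected gradient descent; the objective, step size, iteration count, and projection used here are illustrative assumptions only, not the paper's actual update rule or constraint set.

```python
import numpy as np

def projected_gradient_descent(grad, project, theta0, step_size=0.1, n_iters=200):
    """Generic projected gradient descent.

    At each iteration, take a gradient step and then project the iterate
    back onto the feasible set. The arguments grad, project, step_size,
    and n_iters are illustrative placeholders, not the paper's choices.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        theta = project(theta - step_size * grad(theta))
    return theta

# Toy usage: minimize ||theta - b||^2 over the unit Euclidean ball.
b = np.array([2.0, 0.5])
grad = lambda theta: 2.0 * (theta - b)                            # gradient of the quadratic objective
project = lambda theta: theta / max(1.0, np.linalg.norm(theta))   # Euclidean projection onto {||theta|| <= 1}
theta_hat = projected_gradient_descent(grad, project, np.zeros(2))
print(theta_hat)  # approximately b / ||b||, the closest feasible point to b
```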