Linear and Parallel Learning of Markov Random Fields

Authors: Yariv Mizrahi, Misha Denil, Nando de Freitas

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "In this section we describe some experiments designed to show that the LAP estimator has good empirical performance. We focus on small models where exact maximum likelihood is tractable in order to allow performance to be measured."
Researcher Affiliation | Academia | Yariv Dror Mizrahi (1), YARIV@MATH.UBC.CA; Misha Denil (2), MISHA.DENIL@CS.OX.AC.UK; Nando de Freitas (1, 2, 3), NANDO@CS.OX.AC.UK. Affiliations: (1) University of British Columbia, Canada; (2) University of Oxford, United Kingdom; (3) Canadian Institute for Advanced Research, CIFAR NCAP Program.
Pseudocode | Yes | Algorithm 1 (LAP). Input: MRF with maximal cliques C. For each q ∈ C: construct the auxiliary MRF over the variables in A_q; estimate the parameters α̂^ML of the auxiliary MRF; set θ̂_q ← α̂^ML_q. End for. (A hedged Python sketch of this procedure follows the table.)
Open Source Code | No | The paper provides no links to source code repositories, nor does it state that code is available in supplementary materials or will be released.
Open Datasets | No | The paper states: "we choose the generating parameters uniformly at random from the interval [−1, 1] and draw samples approximately from the model." The authors generated their own synthetic data for the experiments and give no information about its public availability.
Dataset Splits | No | The paper describes drawing samples from models and comparing estimates, but it does not mention train, validation, or test splits, or how the data was partitioned for these purposes.
Hardware Specification | No | The paper gives no details about the hardware used to run the experiments (e.g., CPU/GPU models, memory, or cloud instances).
Software Dependencies | No | The paper lists no software dependencies with version numbers, such as the programming language, libraries, or frameworks used for the implementation.
Experiment Setup | No | The paper does not report experimental setup details such as algorithm hyperparameters, learning rates, batch sizes, or optimizer configurations.
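
Since the paper gives pseudocode but no reference implementation, the following is a minimal sketch of Algorithm 1 (LAP), specialised to a small binary pairwise chain MRF and mirroring the quoted synthetic setup (generating parameters drawn uniformly from [−1, 1], samples drawn from the exact model). The chain length, sample size, and the gradient-ascent maximum-likelihood routine are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of Algorithm 1 (LAP) on a binary pairwise chain MRF small
# enough for exact enumeration. Chain length, sample size, and the
# gradient-ascent ML routine are illustrative assumptions, not paper details.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6                                             # chain x_0 - x_1 - ... - x_5
edges = [(i, i + 1) for i in range(n - 1)]        # maximal cliques of a chain
theta_true = rng.uniform(-1.0, 1.0, len(edges))   # parameters ~ U[-1, 1]

def all_states(k):
    """All 2^k configurations over {-1, +1}^k."""
    return np.array(list(itertools.product([-1, 1], repeat=k)))

def edge_stats(X, es):
    """Sufficient statistics x_a * x_b for each edge (a, b) in es."""
    return np.stack([X[:, a] * X[:, b] for a, b in es], axis=1)

# Draw exact samples from the generating chain by full enumeration.
S = all_states(n)
w = edge_stats(S, edges) @ theta_true
p = np.exp(w - w.max())
p /= p.sum()
data = S[rng.choice(len(S), size=5000, p=p)]

def fit_ml(X, es, iters=2000, lr=0.2):
    """Exact maximum likelihood for a small binary pairwise MRF by gradient
    ascent; the gradient is E_data[x_a x_b] - E_model[x_a x_b]."""
    Sk = all_states(X.shape[1])
    Tk = edge_stats(Sk, es)               # model sufficient statistics
    mu = edge_stats(X, es).mean(axis=0)   # empirical moments
    th = np.zeros(len(es))
    for _ in range(iters):
        lq = Tk @ th
        q = np.exp(lq - lq.max())
        q /= q.sum()
        th += lr * (mu - q @ Tk)
    return th

# Algorithm 1 (LAP): one small, independent ML problem per maximal clique.
theta_lap = np.zeros(len(edges))
for q, (i, j) in enumerate(edges):
    Aq = [v for v in range(i - 1, j + 2) if 0 <= v < n]   # clique + blanket
    aux = [(Aq.index(a), Aq.index(b)) for a, b in edges
           if a in Aq and b in Aq]                        # auxiliary MRF edges
    alpha = fit_ml(data[:, Aq], aux)                      # local ML estimate
    theta_lap[q] = alpha[aux.index((Aq.index(i), Aq.index(j)))]

print("true:", np.round(theta_true, 2))
print("LAP: ", np.round(theta_lap, 2))
```

Each iteration of the final loop solves an independent local problem over a clique and its Markov blanket, which is what makes LAP linear in the number of cliques and embarrassingly parallel, per the paper's title.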