Block Belief Propagation for Parameter Learning in Markov Random Fields

Authors: You Lu, Zhiyuan Liu, Bert Huang (pp. 4448-4455)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We prove that the method converges to the same solution as that obtained by using full inference per iteration, despite these approximations, and we empirically demonstrate its scalability improvements over standard training methods." "In this section, we empirically analyze the performance of BBPL. We design two groups of experiments."
Researcher Affiliation | Academia | You Lu, Department of Computer Science, Virginia Tech, Blacksburg, VA, you.lu@vt.edu; Zhiyuan Liu, Department of Computer Science, University of Colorado Boulder, Boulder, CO, zhiyuan.liu@colorado.edu; Bert Huang, Department of Computer Science, Virginia Tech, Blacksburg, VA, bhuang@vt.edu
Pseudocode | Yes | Algorithm 1: Parameter learning with full convex BP; Algorithm 2: Parameter estimation with block BP
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There are no repository links or explicit statements about code availability.
Open Datasets | Yes | "For our real data experiments, we use the scene understanding dataset (Gould, Fulton, and Koller 2009) for semantic image segmentation. Each image is 240 x 320 pixels in size. We randomly choose 50 images as the training set and 20 images as the test set."
Dataset Splits | Yes | "For our real data experiments, we use the scene understanding dataset (Gould, Fulton, and Koller 2009) for semantic image segmentation. Each image is 240 x 320 pixels in size. We randomly choose 50 images as the training set and 20 images as the test set."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments. It only mentions general setups such as running "on a GPU".
Software Dependencies | No | The paper mentions using a fully convolutional network (FCN) and fine-tuning parameters from a pretrained VGG 16-layer network, but it does not specify version numbers for these or for any other software dependencies.
Experiment Setup | No | The paper describes some aspects of feature extraction and network architecture (e.g., unary features from a fully convolutional network (FCN), pairwise features based on those of Domke (2013), and discretizing features into 10 bins). However, the main text does not provide concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.
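The pseudocode row above refers to the paper's Algorithm 2, parameter estimation with block belief propagation: each iteration runs message passing only within a chosen block of variables (holding messages outside the block fixed) and then takes an approximate gradient step on the MRF parameters. A minimal sketch of that loop, assuming hypothetical `block_bp_update` and `grad_fn` callables standing in for the paper's partial inference and gradient computations (this is an illustration, not the authors' implementation):

```python
import numpy as np

def block_bp_learning(blocks, init_params, grad_fn, block_bp_update,
                      steps=100, lr=0.01, seed=0):
    """Hypothetical sketch of block-BP parameter estimation.

    blocks          : list of variable blocks to cycle over
    grad_fn         : (params, messages) -> approximate gradient (assumed)
    block_bp_update : (block, messages, params) -> updated messages,
                      touching only the chosen block (assumed)
    """
    rng = np.random.default_rng(seed)
    params = np.array(init_params, dtype=float)
    messages = {}  # messages outside the sampled block stay fixed
    for _ in range(steps):
        block = blocks[rng.integers(len(blocks))]        # pick one block
        messages = block_bp_update(block, messages, params)  # partial inference
        params -= lr * grad_fn(params, messages)         # approximate gradient step
    return params
```

The key cost saving the paper targets is that each iteration performs inference on a single block rather than running belief propagation to convergence over the whole graph.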