Parameter Learning for Log-supermodular Distributions
Authors: Tatiana Shpakova, Francis Bach
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 5, we illustrate our new results on a set of experiments in binary image denoising, where we highlight the flexibility of a probabilistic model for learning with missing data. The aim of our experiments is to demonstrate the ability of our approach to remove noise in binary images, following the experimental set-up of [9]. We consider the training sample of Ntrain = 100 images of size D = 50 × 50, and the test sample of Ntest = 100 binary images, containing a horse silhouette from the Weizmann horse database [3]. Results are presented in Table 1, where we compare the two types of decoding, as well as a structured output SVM (SVM-Struct [22]) applied to the same problem. |
| Researcher Affiliation | Academia | Tatiana Shpakova (INRIA and École Normale Supérieure, Paris) tatiana.shpakova@inria.fr; Francis Bach (INRIA and École Normale Supérieure, Paris) francis.bach@inria.fr |
| Pseudocode | Yes | Input: functions f_k, k = 1, ..., K, expected sufficient statistics f_k(x)_emp ∈ ℝ and x_emp ∈ [0, 1]^D, regularizer Ω(t, α). Initialization: α = 0, t = 0. Iterations: for h from 1 to H: sample z ∈ ℝ^D with independent logistic components; compute y* = y*(z, t, α) ∈ argmax over y ∈ {0,1}^D of z⊤y + t⊤y − Σ_{k=1}^K α_k f_k(y); replace t by t − (C/√h)(y* − x_emp + ∇_t Ω(t, α)); replace α_k by α_k − (C/√h)(f_k(x)_emp − f_k(y*) + ∇_{α_k} Ω(t, α)). Output: (α, t). (A hedged Python sketch of this update loop appears below the table.) |
| Open Source Code | No | No explicit statement providing access to the source code for the methodology described in the paper was found. There are no links to repositories or mentions of code in supplementary materials. |
| Open Datasets | Yes | We consider the training sample of Ntrain = 100 images of size D = 50 × 50, and the test sample of Ntest = 100 binary images, containing a horse silhouette from the Weizmann horse database [3]. |
| Dataset Splits | Yes | We consider the training sample of Ntrain = 100 images of size D = 50 × 50, and the test sample of Ntest = 100 binary images, containing a horse silhouette from the Weizmann horse database [3]. One parameter for t, one for α, both learned by cross-validation. |
| Hardware Specification | No | No specific hardware details such as CPU/GPU models, memory, or cloud instance types used for running the experiments were provided in the paper. |
| Software Dependencies | No | The paper mentions using 'graph-cuts [4]' but does not provide specific version numbers for this or any other software dependencies, which would be necessary for reproducibility. |
| Experiment Setup | Yes | We perform parameter inference by maximum likelihood using stochastic subgradient descent (over the logistic samples), with regularization by the squared ℓ2-norm, one parameter for t, one for α, both learned by cross-validation. We apply stochastic subgradient descent for the difference of the two convex functions A_logistic to learn the model parameters and use fixed regularization parameters equal to 10⁻². (A minimal regularizer sketch compatible with the pseudocode appears below the table.) |
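
The pseudocode row quotes the paper's stochastic subgradient scheme over logistic perturbations. The sketch below is a hypothetical Python rendering of that update loop, not the authors' code (the report finds no released source); the names `learn_parameters`, `argmax_oracle`, and `grad_reg`, the step-size constant, and the note about projecting α are all illustrative assumptions.

```python
# Hypothetical sketch of the quoted pseudocode: stochastic subgradient descent on
# the regularized negative log-likelihood, using the "logistic" perturbation bound.
# `argmax_oracle(z, t, alpha)` is assumed to return y in {0,1}^D maximizing
# z.y + t.y - sum_k alpha_k * f_k(y), e.g. via graph cuts when the f_k are cut functions.
import numpy as np

def learn_parameters(argmax_oracle, f_list, f_emp, x_emp, grad_reg,
                     num_iters=1000, step=1.0, seed=0):
    """f_list: K submodular functions f_k(y); f_emp: empirical means of f_k(x);
    x_emp: empirical mean of x in [0,1]^D (numpy array);
    grad_reg(t, alpha) -> (grad_t, grad_alpha) of the regularizer Omega."""
    rng = np.random.default_rng(seed)
    t = np.zeros(x_emp.shape[0])
    alpha = np.zeros(len(f_list))
    for h in range(1, num_iters + 1):
        z = rng.logistic(size=t.shape[0])        # independent logistic perturbations
        y_star = argmax_oracle(z, t, alpha)      # MAP-like oracle on the perturbed energy
        g_t, g_alpha = grad_reg(t, alpha)
        eta = step / np.sqrt(h)                  # step size C / sqrt(h) (assumed schedule)
        t = t - eta * (y_star - x_emp + g_t)     # subgradient step in t
        f_y = np.array([f_k(y_star) for f_k in f_list])
        alpha = alpha - eta * (f_emp - f_y + g_alpha)   # subgradient step in alpha
        # A projection alpha = np.maximum(alpha, 0) may be needed to keep the energy
        # submodular; the quoted pseudocode does not show it, so it is omitted here.
    return t, alpha
```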
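
The experiment-setup row quotes squared ℓ2-norm regularization with parameters fixed to 10⁻². A minimal regularizer compatible with the sketch above could be written as follows; the exact form of Ω(t, α) and the split into two weights (matching "one parameter for t, one for α") are assumptions for illustration.

```python
# Assumed form of the regularizer: Omega(t, alpha) = (lam_t/2)||t||^2 + (lam_alpha/2)||alpha||^2.
def make_l2_regularizer(lam_t=1e-2, lam_alpha=1e-2):
    def grad_reg(t, alpha):
        return lam_t * t, lam_alpha * alpha   # gradients of the two quadratic terms
    return grad_reg

# Example wiring with the sketch above (oracle and feature functions are placeholders):
# t_hat, alpha_hat = learn_parameters(argmax_oracle, f_list, f_emp, x_emp,
#                                     make_l2_regularizer(), num_iters=500)
```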