Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Novel Integer Linear Programming Approach for Global L0 Minimization
Authors: Diego Delle Donne, Matthieu Kowalski, Leo Liberti
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct computational experimentations intending to test the potential of the approaches presented in this work to tackle problem P_{0/p}. In particular, we compare the performance of solving P_{0/p} by the three following approaches: 1. By iteratively solving formulation MIP_{0/p}... 2. By using the above algorithm but solving the successive formulations of MIP_{0/p} with the branch-and-cut algorithm proposed in Section 3... 3. By solving IP^{cov}_{0/p} with Algorithm 3... |
| Researcher Affiliation | Academia | Diego Delle Donne EMAIL Information Systems, Decision Sciences and Statistics department ESSEC Business School 95021 Cergy-Pontoise Cedex, France Matthieu Kowalski EMAIL Inria Saclay-Île-de-France, Université Paris-Saclay, CNRS, Laboratoire Interdisciplinaire des Sciences du Numérique 91192 Gif-sur-Yvette Cedex, France Leo Liberti EMAIL LIX CNRS, École Polytechnique, Institut Polytechnique de Paris 91128 Palaiseau, France |
| Pseudocode | Yes | Our procedure, shown in Algorithm 1, starts by looking for a set of columns J ⊆ [m] such that ∑_{j∈J} b̂_j < 1... In our implementation, shown in Algorithm 2, we consider the complement J̄ of the forbidden support J and we iteratively move columns from J̄ to J until achieving maximality of J... The procedure, shown in Algorithm 3, starts with a relaxed version of IP^{cov}_{0/p} with an (initially empty) subset J of constraints (10), i.e., a combinatorial relaxation. |
| Open Source Code | No | The paper does not provide explicit access to source code for the methodology described in this paper. It mentions that the approaches were implemented in Java using the commercial solver CPLEX. |
| Open Datasets | Yes | As a first test-bed, we resort to the sparse deconvolution instances used in Bourguignon et al. (2016)... we resort to a pathological example from the literature, namely the adversarial strategy introduced in (Mairal and Yu, 2012). |
| Dataset Splits | No | The paper describes how instances were generated or used (e.g., '50 instances for each considered SNR and each different value K', 'We generate 10 instances for each size of H... and each value of K'), but it does not specify explicit training/test/validation splits for these instances in the context of model training or evaluation, as the problems are solved directly. |
| Hardware Specification | Yes | The computational framework is an Intel Xeon E5-2620 processor clocked at 2.1GHz and limited to 8 GB of RAM. |
| Software Dependencies | Yes | All the above approaches were implemented in Java using the general-purpose solver CPLEX (version 12.6) through its Java API to solve the MIP/IP formulations. |
| Experiment Setup | Yes | White noise is added with a variable signal-to-noise ratio (SNR). The authors created 50 instances for each considered SNR and each different value K ∈ {5, 7, 9}... we randomly generate a sparse solution x with a given support size K and then calculate the vector y := Hx + ε, where ε represents Gaussian noise. We set afterwards α := ‖y − Hx‖_p. We set a time limit of 1800 seconds. |
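For context on the formulations named in the table: the L0-minimization problem P_{0/p} is commonly modeled as a mixed-integer program by coupling each coefficient to a binary indicator via big-M constraints. The sketch below is the generic pattern (M is an assumed bound on |x_j|); the paper's exact MIP_{0/p} may add further structure such as the branch-and-cut cuts of its Section 3.

```latex
\min_{x,\,b}\ \sum_{j=1}^{n} b_j
\quad \text{s.t.} \quad
\|y - Hx\|_p \le \alpha,\qquad
-M\,b_j \le x_j \le M\,b_j \ \ (j = 1,\dots,n),\qquad
b \in \{0,1\}^n .
```

Here b_j = 0 forces x_j = 0, so the objective counts the support size of x.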
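The instance-generation protocol quoted in the Experiment Setup row (K-sparse x, y := Hx + ε at a given SNR, α := ‖y − Hx‖_p with p = 2) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `make_instance`, the random dense H (the paper uses deconvolution operators), and the amplitude range for x are all assumptions.

```python
import math
import random

def make_instance(m, n, K, snr_db, seed=0):
    """Hypothetical sketch: generate H (m x n), a K-sparse x, and noisy y."""
    rng = random.Random(seed)
    # Random dense operator; the paper instead uses deconvolution matrices.
    H = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
    # K-sparse ground truth on K random positions.
    x = [0.0] * n
    for j in rng.sample(range(n), K):
        x[j] = rng.choice([-1.0, 1.0]) * rng.uniform(0.5, 1.5)
    # Clean signal Hx.
    clean = [sum(H[i][j] * x[j] for j in range(n)) for i in range(m)]
    # White Gaussian noise scaled to the requested SNR (in dB).
    signal_power = sum(c * c for c in clean) / m
    sigma = math.sqrt(signal_power / (10.0 ** (snr_db / 10.0)))
    eps = [rng.gauss(0.0, sigma) for _ in range(m)]
    y = [c + e for c, e in zip(clean, eps)]
    # alpha := ||y - Hx||_2, which here equals ||eps||_2 by construction.
    alpha = math.sqrt(sum(e * e for e in eps))
    return H, x, y, alpha

H, x, y, alpha = make_instance(m=30, n=50, K=5, snr_db=20)
```

The resulting (H, y, α) triple is what a solver for P_{0/p} would receive; x serves only as the ground truth for checking support recovery.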