Entropy-Penalized Semidefinite Programming

Authors: Mikhail Krechetov, Jakub Mareček, Yury Maximov, Martin Takáč

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We illustrate the practical efficiency of our approach on several combinatorial optimization and machine learning problems. In a case study, we focus on certain combinatorial optimization problems and inference in graphical models. We show that despite the non-convexity of the penalized problem, our approach successfully recovers rank-one solutions in practice. We compare our solutions against non-penalized SDP, belief propagation, and state-of-the-art branch-and-bound solvers of [Krislock et al., 2014]. The paper also includes tables ('Table 1: Results for the Biq Mac collection', 'Table 2: Results for the GSET collection') and figures illustrating performance.
Researcher Affiliation | Collaboration | Mikhail Krechetov (1), Jakub Mareček (2), Yury Maximov (1,3) and Martin Takáč (4). (1) Skolkovo Institute of Science and Technology, Nobel Street 3, Moscow, Russia; (2) IBM Research Ireland, Technology Campus Damastown, Dublin D15, Ireland; (3) Los Alamos National Laboratory, MS-B284, Los Alamos, NM 87545, USA; (4) Lehigh University, 200 West Packer Avenue, Bethlehem, PA 18015, USA
Pseudocode | Yes | Algorithm 1: Entropy-Penalized SDP.
Open Source Code | Yes | See https://github.com/mkrechetov/epsdp for the implementation details.
Open Datasets | Yes | Selected hard MAP inference problems from the Biq Mac collection (http://biqmac.uni-klu.ac.at/biqmaclib.html) and the Max-Cut problem over the GSET collection of sparse graphs (https://sparse.tamu.edu/Gset).
Dataset Splits | No | The paper evaluates on the Biq Mac and GSET collections and on Erdős-Rényi random graphs, but does not describe training/validation/test splits, nor does it cite predefined splits.
Hardware Specification | No | The paper discusses run times and computational efficiency but does not specify the hardware used for the experiments (CPU/GPU models, processor type, or memory).
Software Dependencies | No | The paper mentions CVXPY, Gurobi, and the UGM solver but does not provide version numbers for these components, which are required for a reproducible description of ancillary software.
Experiment Setup | Yes | We fix the width of factorization to k = 10, since there is no significant gain in practice for larger values of k, cf. [Mei et al., 2017]. We choose 2ηkβ = 1, where β is the Lipschitz constant of the gradient in the ℓ2 norm, and γ = 3/2. Parameters λ0 and γ of Algorithm ?? are usually chosen by a few iterations of random search. About 35 iterations usually suffice for the penalty updates, and a few hundred iterations to find a local minimum using Algorithm ??. In our study, the parameter α of the entropies S^T_α, S^N_α, and S^R_α is chosen on an exponential grid from 1 to 10 with multiplicative step 1.1. After experimentation, we note that α = 1.1 and α = 5.0 yield the best results for the low-rank SDP with Tsallis and Rényi entropies, respectively.
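The setup quoted above (a width-k factorization, a penalty weight grown by γ = 3/2 over roughly 35 outer updates, and a few hundred inner gradient steps per update) can be sketched as follows. This is a minimal illustration, not the authors' Algorithm 1 from the epsdp repository: the Max-Cut objective, the Tsallis entropy with α = 2 (S_2(ρ) = 1 - tr(ρ²)), the fixed step size, and the sign rounding are all assumptions made for the sketch.

```python
import numpy as np

def maxcut_entropy_penalized(W, k=10, lam0=0.1, gamma=1.5,
                             outer=35, inner=300, step=0.05, seed=0):
    """Hedged sketch of an entropy-penalized low-rank (Burer-Monteiro)
    Max-Cut solver: maximize tr(C V V^T) - lam * S_2(V^T V / n) over
    matrices V with unit-norm rows, growing lam geometrically so that
    the iterate is driven toward a rank-one solution."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    C = 0.25 * L                            # Max-Cut objective matrix L/4
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)   # rows on the unit sphere
    lam = lam0
    for _ in range(outer):                  # penalty updates (~35 in the paper)
        for _ in range(inner):              # inner projected-gradient steps
            # d/dV of tr(C V V^T) is 2 C V; with alpha = 2 the penalty term
            # -lam * S_2 = lam * tr((V^T V / n)^2) - lam contributes the
            # gradient (4 lam / n^2) * V (V^T V).
            grad = 2.0 * C @ V + (4.0 * lam / n**2) * V @ (V.T @ V)
            V = V + step * grad             # ascent step
            V /= np.linalg.norm(V, axis=1, keepdims=True)  # project rows back
        lam *= gamma                        # tighten the entropy penalty
    # Round: signs of the top left singular vector of V (top eigenvector of VV^T).
    u = np.linalg.svd(V, full_matrices=False)[0][:, 0]
    x = np.sign(u)
    x[x == 0] = 1.0
    cut = 0.25 * x @ L @ x                  # cut value of the rounded ±1 vector
    return x, cut
```

On a small graph such as the 4-cycle, the growing penalty collapses V toward rank one while the projected ascent keeps the cut objective near the SDP optimum, so the sign rounding recovers an optimal cut; the specific step size and λ schedule here are only placeholders for the random-search tuning described in the paper.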