Limited Discrepancy AND/OR Search and Its Application to Optimization Tasks in Graphical Models

Authors: Javier Larrosa, Emma Rollon, Rina Dechter

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we report results comparing LDS vs LDSAO as any-time schemes in the min-sum problem of Graphical Models. Figure 3 shows any-time plots for one instance from each benchmark (note the logarithmic scale of time). (The min-sum task is stated below the table.)
Researcher Affiliation | Academia | Javier Larrosa and Emma Rollon, UPC Barcelona Tech, Barcelona, Spain; Rina Dechter, University of California, Irvine, California, USA
Pseudocode | Yes | Algorithm 1: LDS and Algorithm 2: LDSAO (a generic sketch of limited discrepancy search is given below the table)
Open Source Code | No | The paper does not include an unambiguous statement or a direct link to the source code for the LDSAO methodology described.
Open Datasets | Yes | Instances have been taken from http://genoweb.toulouse.inra.fr/~degivry/evalgm and http://bioinfo.cs.technion.ac.il/superlink.
Dataset Splits | No | The paper focuses on optimization algorithms run on problem instances and does not describe explicit training, validation, or test dataset splits in the typical machine learning sense.
Hardware Specification | No | The paper mentions 'CPU time' as a metric but does not provide any specific hardware details such as GPU/CPU models or system configurations used for the experiments.
Software Dependencies | No | The paper mentions the Mini-Bucket-Elimination heuristic but does not provide specific software names with version numbers (e.g., libraries, solvers, or programming language versions) used for the experiments.
Experiment Setup | Yes | In the experiments the two algorithms ran with the same i-bound (10 for all benchmarks except for Linkage and Type4 pedigree, where it was set to 15 and 16, respectively).
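
For context on the Research Type row: the min-sum task over a graphical model asks for a complete assignment of the variables that minimizes the total cost of a set of local cost functions. In generic notation (the symbols below are illustrative, not taken from the report), with cost functions $f \in F$, each defined over a subset of the variables $\mathrm{scope}(f)$:

$$
\min_{x_1,\dots,x_n} \; \sum_{f \in F} f\bigl(x_{\mathrm{scope}(f)}\bigr)
$$

LDS and LDSAO are evaluated in the paper as any-time schemes for this task: each keeps the best complete assignment found so far and improves it as more of the search space is explored.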
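
On the Pseudocode row: the report points to the paper's Algorithm 1 (LDS) and Algorithm 2 (LDSAO). As a rough illustration only, below is a minimal sketch of classic limited discrepancy search applied to a min-sum task, assuming a value-ordering heuristic and counting every non-preferred value choice as one discrepancy. It is not the paper's Algorithm 1 nor its AND/OR variant, and all identifiers (lds_minsum, order_values, cost_fn) are hypothetical.

```python
def lds_minsum(variables, cost_fn, order_values, max_discrepancies):
    """Minimal sketch of limited discrepancy search for a min-sum task.

    variables: list of variable names in a fixed search order.
    cost_fn: maps a complete assignment (dict) to a numeric cost.
    order_values: maps (variable, partial assignment) to that variable's
        values, heuristically preferred value first.
    max_discrepancies: allowed deviations from the preferred value on a path.
    Returns (best_cost, best_assignment) over the complete assignments visited.
    """
    best_cost, best_assignment = float("inf"), None

    def dfs(idx, assignment, budget):
        nonlocal best_cost, best_assignment
        if idx == len(variables):
            cost = cost_fn(assignment)
            if cost < best_cost:
                best_cost, best_assignment = cost, dict(assignment)
            return
        var = variables[idx]
        for rank, value in enumerate(order_values(var, assignment)):
            if rank > 0 and budget == 0:
                break  # no discrepancies left: only the preferred value is allowed
            assignment[var] = value
            dfs(idx + 1, assignment, budget - 1 if rank > 0 else budget)
            del assignment[var]

    dfs(0, {}, max_discrepancies)
    return best_cost, best_assignment


# Toy usage: two binary variables, cost = number of ones,
# a (deliberately misleading) heuristic that prefers value 1.
variables = ["x1", "x2"]
cost = lambda a: sum(a.values())
prefer_one = lambda var, assignment: [1, 0]
print(lds_minsum(variables, cost, prefer_one, max_discrepancies=1))
# -> (1, {'x1': 1, 'x2': 0}): one discrepancy already improves on the heuristic path.
```

Running the sketch with max_discrepancies = 0, 1, 2, ... mimics the any-time behaviour discussed in the report: each pass explores a wider slice of the search tree and can only improve the incumbent solution.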