Automatic Synthesis of Smart Table Constraints by Abstraction of Table Constraints

Authors: Baudouin Le Charlier, Minh Thanh Khong, Christophe Lecoutre, Yves Deville

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate its compression efficiency on many constraint cases while showing its reasonable execution time. It is then shown that using filtering algorithms on the resulting smart table is more efficient than using state-of-the-art filtering algorithms on the initial table. To show the practical interest of the algorithm described in this paper, we have conducted an experimentation using some well-known global constraints."
Researcher Affiliation | Academia | Baudouin Le Charlier¹, Minh Thanh Khong¹, Christophe Lecoutre², Yves Deville¹. ¹Université catholique de Louvain, Belgium; ²CRIL-CNRS, Université d'Artois, France. {baudouin.lecharlier, minh.khong, yves.deville}@uclouvain.be, lecoutre@cril.fr
Pseudocode | No | The paper describes the algorithm steps in textual form but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (e.g., repository links, explicit statements of code release, or mention of supplementary materials) for the source code of the described methodology.
Open Datasets | No | The paper states that, for a given global constraint 'glb', arity 'n', and maximum domain size 'd', the authors "generate a table (constraint) containing all tuples accepted by the global constraint glb-n-d" and also "generate random smart table constraints random-n-d... Once a random smart table is generated, we build the corresponding ordinary table." Data is thus generated from constraint definitions rather than sourced from publicly available datasets with concrete access information (a generation sketch follows the table).
Dataset Splits | No | The paper describes generating tables from global constraints and then iteratively running executions in which 10% of the values are randomly removed. It does not provide dataset split percentages, sample counts, or predefined training/validation/test splits required for reproducibility.
Hardware Specification | No | The paper does not provide any specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts, used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., language runtimes or specific solvers with their versions) that would be needed to replicate the experiments.
Experiment Setup | Yes | "for each algorithm, we iteratively run its execution and randomly removed 10% of the values (until a failure occurs). This way, many different call contexts were simulated. This inner process was repeated 1,000 times, and we additionally took the average time over 10 executions. Using the same seed, the different filtering algorithms are all tested on the same search trees." (A simulation sketch of this protocol follows the table.)
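
To make the table-generation step described under Open Datasets concrete, below is a minimal sketch of enumerating an ordinary table constraint from a global constraint definition. This is not code from the paper: the names `generate_table` and `all_different` are hypothetical, and allDifferent merely stands in for the well-known global constraints the authors use.

```python
from itertools import product

def all_different(tup):
    # Checker for the allDifferent global constraint:
    # every variable takes a distinct value.
    return len(set(tup)) == len(tup)

def generate_table(accepts, n, d):
    # Enumerate every tuple over n variables with domains {0, ..., d-1}
    # and keep those accepted by the global constraint. This mirrors the
    # paper's "table containing all tuples accepted by the global
    # constraint glb-n-d".
    return [tup for tup in product(range(d), repeat=n) if accepts(tup)]

# Example: the table for allDifferent-3-3 has 3! = 6 tuples.
table = generate_table(all_different, n=3, d=3)
print(len(table))  # 6
```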
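The experiment-setup protocol can be sketched in the same spirit. The following is a hypothetical reconstruction, not the authors' benchmark code: `filter_table` is a naive stand-in for a real table (or smart table) filtering algorithm, and the loop mirrors the quoted procedure of removing 10% of the values until a failure occurs, over 1,000 inner runs with a fixed seed.

```python
import random
import time

def filter_table(table, domains):
    # Keep only the tuples whose every value is still in its variable's
    # domain (a naive stand-in for a real filtering algorithm).
    return [tup for tup in table
            if all(v in domains[i] for i, v in enumerate(tup))]

def simulate_call_contexts(table, n, d, seed=0, inner_runs=1000):
    # Hypothetical reconstruction of the quoted protocol: repeatedly
    # filter, then randomly remove ~10% of the remaining values, until a
    # failure (no supported tuple remains) occurs.
    rng = random.Random(seed)  # same seed => same simulated search trees
    start = time.perf_counter()
    for _ in range(inner_runs):
        domains = [set(range(d)) for _ in range(n)]
        current = table
        while True:
            current = filter_table(current, domains)
            if not current:
                break  # failure: the constraint is wiped out
            values = [(i, v) for i, dom in enumerate(domains) for v in dom]
            rng.shuffle(values)
            for i, v in values[:max(1, len(values) // 10)]:
                domains[i].discard(v)  # remove ~10% of the values
    return time.perf_counter() - start
```

As in the paper, the measured time would additionally be averaged over 10 executions; reusing the same seed guarantees that each filtering algorithm sees identical removal sequences, i.e., the same simulated search trees.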