Learning Partial Lexicographic Preference Trees over Combinatorial Domains

Authors: Xudong Liu, Miroslaw Truszczynski

AAAI 2015

Reproducibility Assessment
Research Type: Theoretical. We introduce partial lexicographic preference trees (PLP-trees) as a formalism for compact representations of preferences over combinatorial domains. Our main results concern the problem of passive learning of PLP-trees. Specifically, for several classes of PLP-trees, we study how to learn (i) a PLP-tree consistent with a dataset of examples, possibly subject to requirements on the size of the tree, and (ii) a PLP-tree correctly ordering as many of the examples as possible in case the dataset of examples is inconsistent. We establish the complexity of these problems and, in all cases where the problem is in the class P, propose polynomial-time algorithms. For future research, we will develop good heuristics for our learning algorithms. We will implement these algorithms, handling issues of, in general, finite domains of values, and evaluate them on both synthetic and real-world preferential datasets.
Researcher Affiliation: Academia. Xudong Liu and Miroslaw Truszczynski, Department of Computer Science, University of Kentucky, Lexington, KY, USA. {liu,mirek}@cs.uky.edu
Pseudocode: Yes. Algorithm 1: the procedure learn UI, which learns a UI tree. Algorithm 2: the recursive procedure learn CI, which learns a CI PLP-tree.
Open Source Code: No. The paper states in its future-work section that the authors 'will implement these algorithms', implying the code is not currently open-source or available.
Open Datasets: No. The paper is theoretical, focusing on the complexity of learning problems for PLP-trees, and describes no experiments that use a dataset. Its future-work section explicitly states that the authors 'will implement these algorithms... and evaluate them on both synthetic and real-world preferential datasets', indicating that dataset usage is planned for future work.
Dataset Splits: No. The paper is theoretical and describes no experiments involving datasets, so no training, validation, or test splits are reported.
Hardware Specification: No. The paper focuses on algorithm design and complexity analysis, not empirical evaluation, and specifies no hardware.
Software Dependencies: No. Since no experiments are described, no software dependencies or version numbers are given.
Experiment Setup: No. The paper describes no experimental setup, hyperparameters, or system-level training settings.
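To make the Pseudocode row above concrete: the paper's Algorithm 1 (learn UI) learns a UI tree, i.e. an unconditional ordering of attributes with fixed value preferences. The following is only an illustrative greedy sketch of that kind of learner, under the assumption of binary attributes and strict preference examples given as (preferred, dispreferred) pairs; the function name learn_ui_sketch and its interface are hypothetical, not the authors' exact procedure.

```python
# Illustrative greedy sketch, assuming binary attributes and strict
# preference examples (preferred, dispreferred) over dicts of attribute
# values; this is NOT the paper's exact Algorithm 1 (learn UI), only a
# simplified stand-in for learning an unconditional-importance
# lexicographic ordering of attributes.

def learn_ui_sketch(attrs, examples):
    """Return a list of (attribute, preferred_value) levels that orders
    every example correctly, or None if no such model fits the data."""
    remaining = list(examples)
    order = []
    available = list(attrs)          # try attributes in the given order
    while remaining:
        picked = None
        for a in available:
            # Examples this attribute would decide at the current level.
            decided = [(p, q) for p, q in remaining if p[a] != q[a]]
            if not decided:
                continue
            vals = {p[a] for p, _ in decided}
            if len(vals) == 1:       # 'a' consistently favors one value
                picked = (a, vals.pop())
                break
        if picked is None:
            return None              # no single attribute fits: inconsistent
        a, _ = picked
        order.append(picked)
        available.remove(a)
        # Keep only examples still undecided after this level.
        remaining = [(p, q) for p, q in remaining if p[a] == q[a]]
    return order

# Two examples consistent with "x first (prefer 1), then y (prefer 0)".
ex = [({'x': 1, 'y': 0}, {'x': 0, 'y': 1}),
      ({'x': 1, 'y': 0}, {'x': 1, 'y': 1})]
print(learn_ui_sketch(['x', 'y'], ex))   # [('x', 1), ('y', 0)]
```

Returning None when no attribute consistently decides the remaining examples mirrors the paper's second question: when the dataset is inconsistent with the tree class, one instead looks for a tree ordering as many examples as possible.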