An Efficient Algorithm To Compute Distance Between Lexicographic Preference Trees

Authors: Minyi Li, Borhan Kazimipour

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare the proposed approach with a baseline technique called Exhau, which performs an exhaustive query over all possible outcome pairs. We generate random LP-trees by varying the number of attributes, the structure of the trees, and the local attribute preferences. In the first set of experiments, we consider N (the number of variables) between 1 and 20 and run 1000 experiments for each number of attributes. All variables are binary. Figure 3a plots the average time spent by these algorithms against the number of attributes. We can see that LpDis runs significantly faster than the Exhau algorithm.
Researcher Affiliation | Academia | Minyi Li and Borhan Kazimipour, Monash University, Australia (minyi.li@monash.edu, borhan.kazimipour@monash.edu)
Pseudocode | Yes | Algorithm 1: LpDis, which computes the Kendall tau distance between two LP-trees
Open Source Code | No | The paper does not provide any information or link indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper states, 'We generate random LP-trees by varying the number of attributes, the structure of the trees, and the local attribute preferences.' This indicates the use of synthetically generated data without providing public access information.
Dataset Splits | No | The paper generates random LP-trees for experiments but does not specify any training, validation, or test dataset splits as would be typical for machine learning models. It describes experimental conditions for evaluating the algorithm's performance.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not provide specific names or version numbers for any ancillary software dependencies.
Experiment Setup | Yes | In the first set of experiments, we consider N (the number of variables) between 1 and 20 and run 1000 experiments for each number of attributes. All variables are binary. ... In the second set of experiments, we limit the maximum number of parents of each attribute to 5, and the maximum number of nodes in an LP-tree to the square of the number of attributes (N^2).
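The Exhau baseline quoted above can be made concrete: with N binary attributes, each LP-tree induces a total order over all 2^N outcomes, and the Kendall tau distance is the number of outcome pairs that the two orders rank in opposite directions. The sketch below is an illustration only, using an invented, simplified encoding (an unconditional LP-tree given as an importance-ordered list of (attribute, preferred_value) pairs); the paper's LpDis algorithm avoids this exponential enumeration, which the exhaustive baseline deliberately performs.

```python
from itertools import combinations, product

def induced_ranking(lp_tree, attributes):
    """Rank all 2^N binary outcomes by the tree's lexicographic order.

    lp_tree: list of (attribute, preferred_value) pairs, most important first.
    This unconditional encoding is a simplification for illustration; general
    LP-trees allow conditional importance and conditional local preferences.
    """
    def key(outcome):
        # A preferred value on a more important attribute dominates all less
        # important attributes, exactly like lexicographic string comparison.
        return tuple(0 if outcome[attr] == pref else 1 for attr, pref in lp_tree)

    outcomes = [dict(zip(attributes, vals))
                for vals in product([0, 1], repeat=len(attributes))]
    ranked = sorted(outcomes, key=key)
    # Map each outcome (as a hashable, canonically ordered tuple) to its rank.
    return {tuple(sorted(o.items())): rank for rank, o in enumerate(ranked)}

def kendall_tau_exhaustive(tree_a, tree_b, attributes):
    """Exhaustively count outcome pairs on which the two rankings disagree."""
    ra = induced_ranking(tree_a, attributes)
    rb = induced_ranking(tree_b, attributes)
    return sum(
        1
        for o1, o2 in combinations(ra, 2)
        if (ra[o1] - ra[o2]) * (rb[o1] - rb[o2]) < 0
    )
```

For example, with attributes ["x", "y"], a tree ordering x before y and a tree ordering y before x (both preferring value 0 everywhere) disagree only on the pair (x=0, y=1) versus (x=1, y=0), giving a distance of 1. Since the pair loop touches all 2^N * (2^N - 1) / 2 outcome pairs, the cost grows doubly exponentially in N, which is consistent with the paper's observation that LpDis runs significantly faster than Exhau.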