Preferences Single-Peaked on a Tree: Sampling and Tree Recognition

Authors: Jakub Sliwinski, Edith Elkind

IJCAI 2019

| Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We test our algorithm empirically; to this end, we develop a procedure to uniformly sample preferences that are single-peaked on a given tree." |
| Researcher Affiliation | Academia | 1: ETH Zurich, 2: University of Oxford; jsliwinski@ethz.ch, elkind@cs.ox.ac.uk |
| Pseudocode | Yes | Algorithm 1: Sample v single-peaked on T with v[1] = c; Algorithm 2: Sample v single-peaked on T with v[1] = c; Algorithm 3: Compute Pr(v[1] = k) for k = 1, ..., m given the tree T = ({1, ..., m}, E); Algorithm 4: Build attachment digraph D = (C, A) of V; Algorithm 5: Guess T, given a profile V with 2mα votes; Algorithm 6: Try to sample a V such that E1,V ∧ E2,V holds for given c, p and f |
| Open Source Code | No | The paper provides no statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper generates its data with a sampling algorithm for preferences single-peaked on a tree (U(T)); this is not an external, publicly available dataset, and no access information is provided. |
| Dataset Splits | No | The paper specifies no training/validation/test splits. It discusses sampling votes for tree identification, not data partitioning for model validation in the machine-learning sense. |
| Hardware Specification | No | The paper specifies no hardware details (e.g., CPU or GPU models, memory, or cloud instances) used to run the experiments. |
| Software Dependencies | No | The paper lists no software dependencies with version numbers (e.g., programming languages, libraries, or solvers). |
| Experiment Setup | No | The paper describes general experimental parameters, such as the number of trials (2000) and the types of trees, but gives no specific hyperparameters or detailed system-level settings for the algorithms used in the experiments. |
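For context on the sampling task assessed in the Pseudocode row: a vote is single-peaked on a tree T with top candidate c exactly when every prefix of the vote induces a connected subtree containing c, i.e. the vote is a linear extension of T rooted at c. A minimal illustrative sketch of uniform sampling (not a reproduction of the paper's Algorithm 1; all names are ours) can use the standard subtree-size weighting for uniform linear extensions of a forest:

```python
import random

def sample_vote(tree, top):
    """Sample a ranking single-peaked on `tree` with top candidate `top`,
    uniformly among all such rankings.

    `tree` is an adjacency dict {candidate: list of neighbours}.  Such a
    ranking is a linear extension of the tree rooted at `top` (each
    candidate must appear after its neighbour on the path toward `top`).
    Uniform linear extensions of a forest can be drawn by repeatedly
    emitting a current root, chosen with probability proportional to the
    size of its subtree (a consequence of the hook-length formula).
    """
    # Root the tree at `top` via BFS, recording parents.
    parent = {top: None}
    order = [top]
    for v in order:
        for w in tree[v]:
            if w not in parent:
                parent[w] = v
                order.append(w)

    # Subtree sizes, computed bottom-up over the BFS order.
    size = {v: 1 for v in parent}
    for v in reversed(order):
        if parent[v] is not None:
            size[parent[v]] += size[v]

    children = {v: [w for w in tree[v] if parent.get(w) == v] for v in parent}

    # Repeatedly emit a root of the forest of still-unranked candidates.
    roots = [top]
    vote = []
    while roots:
        r = random.choices(roots, weights=[size[x] for x in roots])[0]
        vote.append(r)
        roots.remove(r)
        roots.extend(children[r])
    return vote
```

For example, on the path 1–2–3 with top candidate 2, the only single-peaked votes are (2, 1, 3) and (2, 3, 1), and the sampler returns each with probability 1/2.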