Making CP-Nets (More) Useful
Authors: Thomas Allen
AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We employed an experimental approach to evaluate our algorithm, simulating the query process using randomly generated CP-nets. For the conference paper, I planned and performed additional experiments, including a series that used statistical sampling when exhaustive analysis was infeasible. In a series of experiments, I showed that most flipping sequences are likely to be short and that a longest flipping sequence (the diameter of the CP-net) is also likely to be short, at least in the case of randomly generated CP-nets. I further showed that, even if very long flipping sequences do occasionally exist in learned or elicited CP-nets, they are unlikely to provide useful information about the actual preferences of a human subject if there is some small probability ϵ that the rules in the CPTs are noisy. I am planning new experiments to test this assumption. My next project involves extending my earlier work (Allen, 2013) on the expected length of flipping sequences and the diameter of a CP-net. For that I plan to perform a further series of experiments involving (1) randomly generated networks with features that vary in the number of values (rather than domains of uniform size), (2) networks with particular structures in which long flipping sequences may be more common (e.g., chain-shaped CP-nets), and (3) CP-nets learned from data as well as those that are generated randomly. |
| Researcher Affiliation | Academia | Thomas E. Allen, University of Kentucky, Department of Computer Science, 329 Rose Street, Lexington, Kentucky 40506-0633, www.cs.uky.edu/~teal223/, thomas.allen@uky.edu |
| Pseudocode | No | The paper describes an algorithm verbally but does not include any structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide any links or explicit statements indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper states that experiments used "randomly generated CP-nets" and that the author "modified the algorithm that randomly generated CP-nets." This indicates synthetic data was created, not a pre-existing public dataset for which access information would be provided. |
| Dataset Splits | No | The paper mentions "non-training preference comparisons" but does not specify exact dataset split percentages, sample counts for train/validation/test, or refer to predefined splits from external sources, making reproduction of data partitioning difficult without further details. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for conducting the experiments. |
| Software Dependencies | No | The paper does not provide specific names and version numbers for any software libraries, frameworks, or solvers used in the experiments. |
| Experiment Setup | No | The paper describes aspects of the algorithm and data generation (e.g., "bound on in-degree", "randomly generated CP-nets"), but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size), model initialization, or specific training configurations. |
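
The Research Type and Experiment Setup rows describe experiments that simulate queries on randomly generated CP-nets with a bound on in-degree and measure the lengths of flipping sequences. Since neither pseudocode nor source code is available (per the Pseudocode and Open Source Code rows), the following is only a minimal, hypothetical sketch of such a setup for binary-domain CP-nets; `random_cpnet`, `improving_flips`, and `flip_distances` are illustrative names, not the paper's actual implementation.

```python
import itertools
import random
from collections import deque

def random_cpnet(n_vars, max_indegree, rng):
    """Generate a random acyclic binary CP-net with bounded in-degree.

    Each variable i draws up to max_indegree parents from lower-indexed
    variables (which guarantees acyclicity). Its CPT maps every parent
    assignment to the preferred value (0 or 1) of variable i.
    """
    parents, cpts = [], []
    for i in range(n_vars):
        k = rng.randint(0, min(max_indegree, i))
        pa = sorted(rng.sample(range(i), k))
        cpt = {combo: rng.randint(0, 1)
               for combo in itertools.product((0, 1), repeat=len(pa))}
        parents.append(pa)
        cpts.append(cpt)
    return parents, cpts

def improving_flips(outcome, parents, cpts):
    """Yield every outcome reachable from `outcome` by one improving flip,
    i.e., by switching a single variable to its preferred value given the
    current assignment of its parents."""
    for i, (pa, cpt) in enumerate(zip(parents, cpts)):
        preferred = cpt[tuple(outcome[j] for j in pa)]
        if outcome[i] != preferred:
            flipped = list(outcome)
            flipped[i] = preferred
            yield tuple(flipped)

def flip_distances(start, parents, cpts):
    """BFS over improving flips from `start`; returns the shortest
    flipping-sequence length to every reachable (more preferred) outcome."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        o = queue.popleft()
        for nxt in improving_flips(o, parents, cpts):
            if nxt not in dist:
                dist[nxt] = dist[o] + 1
                queue.append(nxt)
    return dist

# Example run: one random 8-variable CP-net, one random starting outcome.
rng = random.Random(0)
parents, cpts = random_cpnet(n_vars=8, max_indegree=3, rng=rng)
start = tuple(rng.randint(0, 1) for _ in range(8))
dist = flip_distances(start, parents, cpts)
print("longest improving flipping sequence from start:", max(dist.values()))
```

Note that this sketch only samples flipping-sequence lengths from a single starting outcome; the diameter analysis described in the paper considers sequences between pairs of outcomes, and the paper's actual generation procedure, domain sizes, and sampling scheme are not specified in enough detail to reproduce here.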