Exact Learning of Preference Structure: Single-peaked Preferences and Beyond
Authors: Sonja Kraiczy, Edith Elkind
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To test the dependence of the sample size on p, and to see how well the lower bound in Proposition 3.9 performs, we ran experiments with m = 1000 alternatives (we choose a relatively large value of m for robustness; as argued above, the results are likely to be very similar for other values of m), varying p as p ∈ {0.001, 0.002, …, 0.5}. Figure 1 shows the average sample size over 1000 repetitions needed to identify the underlying axis from U_p(·) depending on p(1 − p) (red), together with a plot of p(1 − p) (blue), in log-log scale. |
| Researcher Affiliation | Academia | Department of Computer Science, University of Oxford, Oxford, United Kingdom. Correspondence to: Edith Elkind <edith.elkind@ox.ac.uk>. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes experiments involving sampling from theoretical distributions to verify bounds, not training models on traditional datasets or providing access to such datasets. |
| Dataset Splits | No | The paper does not specify any training/validation/test dataset splits. The experiments are simulations to verify theoretical bounds on sample size. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided in the paper. |
| Software Dependencies | No | The paper does not provide specific names or version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | To test the dependence of the sample size on p, and to see how well the lower bound in Proposition 3.9 performs, we ran experiments with m = 1000 alternatives (we choose a relatively large value of m for robustness; as argued above, the results are likely to be very similar for other values of m), varying p as p ∈ {0.001, 0.002, …, 0.5}. Figure 1 shows the average sample size over 1000 repetitions needed to identify the underlying axis from U_p(·) depending on p(1 − p) (red), together with a plot of p(1 − p) (blue), in log-log scale. |
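For concreteness, the sketch below mirrors the structure of the quoted experiment in Python: sweep p over a grid, average the number of samples needed to identify the hidden axis over repeated trials, and plot the average against p(1 − p) in log-log scale. The vote sampler (a biased Conitzer-style single-peaked walk) and the stopping rule are hypothetical stand-ins introduced for illustration; the paper's exact U_p(·) model and axis-recovery procedure are not specified in this report, so this is a sketch under those assumptions rather than a reproduction of the authors' experiment.

```python
# Minimal sketch of the experiment harness described in the quoted setup.
# The vote sampler and stopping rule are stand-ins (assumptions), not the
# paper's U_p model or recovery algorithm; only the outer loop mirrors the
# setup: sweep p, average the stopping time over repetitions, and plot the
# average sample size against p(1 - p) on log-log axes.
import random

import matplotlib.pyplot as plt


def sample_vote(m: int, p: float, rng: random.Random) -> list[int]:
    """Stand-in single-peaked vote on the hidden axis 0 < 1 < ... < m-1.

    Start from a uniformly random peak; while both sides remain, extend
    the ranked block to the left with probability p, else to the right
    (a biased Conitzer-style walk -- an assumption, not necessarily U_p).
    """
    peak = rng.randrange(m)
    left, right = peak - 1, peak + 1
    vote = [peak]
    while left >= 0 or right < m:
        if left >= 0 and (right >= m or rng.random() < p):
            vote.append(left)
            left -= 1
        else:
            vote.append(right)
            right += 1
    return vote


def samples_to_identify(m: int, p: float, rng: random.Random) -> int:
    """Toy stopping rule: draw votes until every adjacent pair of the
    hidden axis has occupied the top two positions in both relative
    orders. The top two alternatives of a single-peaked vote are always
    adjacent on the axis, so this certificate determines the axis (up to
    reversal); its expected hitting time grows roughly like 1 / (p(1 - p)).
    """
    needed = {(i, i + 1) for i in range(m - 1)} | {(i + 1, i) for i in range(m - 1)}
    count = 0
    while needed:
        vote = sample_vote(m, p, rng)
        count += 1
        needed.discard((vote[0], vote[1]))
    return count


if __name__ == "__main__":
    rng = random.Random(0)
    m, reps = 20, 50  # kept small for speed; the paper uses m = 1000, 1000 reps
    ps = [i / 100 for i in range(2, 51, 4)]  # coarse subsample of {0.001, ..., 0.5}
    avg = [sum(samples_to_identify(m, p, rng) for _ in range(reps)) / reps
           for p in ps]
    plt.loglog([p * (1 - p) for p in ps], avg, "r.-")
    plt.xlabel("p(1 - p)")
    plt.ylabel("average samples to identify the axis")
    plt.show()
```

The demo deliberately scales the quoted setup down (m = 20 alternatives, 50 repetitions, a coarser grid of p values) so it runs in seconds in pure Python; the paper's Figure 1 uses m = 1000, 1000 repetitions, and p ∈ {0.001, 0.002, …, 0.5}.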