Online Learning of Capacity-Based Preference Models

Authors: Margot Herin, Patrice Perny, Nataliya Sokolovska

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct numerical tests using synthetic preference data. We generate preference data by randomly drawing a sparse (with few non-null coefficients) normalized Möbius vector m associated with monotonic capacities and pairs of alternatives x_t, y_t ∈ [0, 1]^n. Then, after comparison of the perturbed overall values ⟨m, ϕ(x_t)⟩ + ϵ_x and ⟨m, ϕ(y_t)⟩ + ϵ_y (where ϵ_x is a centered Gaussian noise with standard error σ = 0.03), we obtain preference or indifference examples. (A data-generation sketch in Python is given after this table.)
Researcher Affiliation | Academia | Margot Herin¹, Patrice Perny¹, Nataliya Sokolovska²; ¹Sorbonne University, CNRS, LIP6, Paris, France; ²Sorbonne University, CNRS, LCQB, Paris, France
Pseudocode | Yes | Algorithm 1 — Parameters: (γ, λ, T); 1: t ← 1, m¹ ← (0, . . . , 0); 2: while t < T do ... Algorithm 2 — Parameters: (γ, λ, ρ, T); 1: t ← 1, m¹, µ¹, z¹ ← (0, . . . , 0); 2: while t < T do ... (A generic skeleton of this online loop is sketched after this table.)
Open Source Code | Yes | The code and the proofs not included in the paper are available at https://gitlab.com/margother/OPL.
Open Datasets | No | We generate preference data by randomly drawing a sparse (with few non-null coefficients) normalized Möbius vector m associated with monotonic capacities and pairs of alternatives x_t, y_t ∈ [0, 1]^n. (The paper uses synthetic data generated by the authors, with no indication of public availability.)
Dataset Splits | No | The accuracy is computed as the average proportion of correctly predicted preferences within a test set containing 500 preference examples. (The paper mentions a test set size but does not provide specific training/validation/test splits.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running experiments were provided in the paper.
Software Dependencies | No | However, B gets sparser as n increases, which allows us to resort to specialized libraries (e.g., scipy.sparse) for efficient matrix products in learning algorithms. (Only a library name without a version number is mentioned, and no other software dependencies with versions are listed; a sparse matrix-product example is sketched after this table.)
Experiment Setup | Yes | The L1-regularization parameter λ is set to 0.01 for both methods and, for Algorithm 1, γ is set to 10³. In Tables 1 and 2 we compare the average accuracy and training times over 20 simulations of both methods for a growing number of criteria n. ... hyperparameters λ and γ are unchanged and ρ = 1 for Algorithm 2. (A sketch of this evaluation protocol is given after this table.)
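
The synthetic-data generation quoted in the Research Type row above can be illustrated with the following minimal Python sketch. It is an assumption-laden reconstruction, not the authors' code: the Möbius masses are drawn non-negative (one simple way to guarantee a normalized, monotonic capacity), and ϕ(x) is assumed to be the usual min-based feature map of the Choquet integral in Möbius form, ϕ_A(x) = min_{i∈A} x_i over all non-empty subsets A.

```python
# Minimal sketch of the synthetic preference-data generation (assumptions noted above).
import itertools
import numpy as np

def random_sparse_mobius(n, k, rng):
    """Draw a normalized Mobius vector with k non-null coefficients.
    Non-negative masses summing to 1 yield a monotonic, normalized capacity."""
    subsets = [s for r in range(1, n + 1) for s in itertools.combinations(range(n), r)]
    m = np.zeros(len(subsets))
    idx = rng.choice(len(subsets), size=k, replace=False)
    m[idx] = rng.random(k)
    m /= m.sum()
    return m, subsets

def phi(x, subsets):
    """Feature map: phi_A(x) = min_{i in A} x_i for every non-empty subset A."""
    return np.array([x[list(A)].min() for A in subsets])

def generate_example(m, subsets, n, rng, sigma=0.03):
    """One preference example from perturbed overall values <m, phi(.)> + noise."""
    x, y = rng.random(n), rng.random(n)
    u_x = m @ phi(x, subsets) + rng.normal(0.0, sigma)
    u_y = m @ phi(y, subsets) + rng.normal(0.0, sigma)
    # +1: x preferred, -1: y preferred, 0: indifference (exact ties are rare here;
    # the paper's indifference case may rely on a tolerance -- assumption).
    return x, y, np.sign(u_x - u_y)

rng = np.random.default_rng(0)
m, subsets = random_sparse_mobius(n=5, k=4, rng=rng)
x, y, label = generate_example(m, subsets, n=5, rng=rng)
```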
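The Pseudocode row only quotes the initialization and loop header of Algorithms 1 and 2. The sketch below shows the generic online loop that this structure implies; the hinge-type update and the soft-thresholding step are hypothetical stand-ins for an L1-regularized online update with parameters γ and λ, not the paper's actual rules.

```python
# Hedged skeleton of an online L1-regularized preference-learning loop (illustrative only).
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding; a common L1 proximal step (not from the paper)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_loop(stream, dim, gamma, lam, T):
    """Generic skeleton: m^1 <- 0, then one regularized update per preference example."""
    m = np.zeros(dim)                       # 1: t <- 1, m^1 <- (0, ..., 0)
    t = 1
    while t < T:                            # 2: while t < T do ...
        phi_x, phi_y, label = next(stream)  # one preference/indifference example
        margin = label * (m @ (phi_x - phi_y))
        if margin < 1.0:                    # hinge-type violation check (assumed loss)
            m = m + gamma * label * (phi_x - phi_y)
        m = soft_threshold(m, lam)          # sparsity-inducing step (assumed form)
        t += 1
    return m
```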
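The Software Dependencies row mentions scipy.sparse for efficient matrix products with the increasingly sparse matrix B. The snippet below only illustrates that pattern; the stand-in B is a random sparse matrix, not the paper's matrix.

```python
# Generic scipy.sparse matrix-product pattern (B here is a random stand-in).
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
dim = 2**10 - 1                       # e.g. number of non-empty subsets for n = 10
B = sparse.random(dim, dim, density=1e-3, format="csr", random_state=0)
m = rng.random(dim)
v = B @ m                             # sparse-dense product stays cheap when B is sparse
```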
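Finally, the Experiment Setup row (together with the 500-example test set from the Dataset Splits row) describes the evaluation protocol: accuracy is the proportion of correctly predicted preferences on the test set, averaged over 20 simulations, with λ = 0.01, γ = 10³ and ρ = 1. A minimal sketch of that computation, assuming precomputed feature vectors and a hypothetical run_one_simulation helper, is:

```python
# Sketch of the evaluation protocol (assumed structure, not the authors' script).
import numpy as np

def accuracy(m_hat, test_features):
    """test_features: list of (phi_x, phi_y, label) with label in {+1, -1, 0}."""
    preds = [np.sign(m_hat @ fx - m_hat @ fy) for fx, fy, _ in test_features]
    labels = [lab for _, _, lab in test_features]
    return float(np.mean([p == l for p, l in zip(preds, labels)]))

lam, gamma, rho = 0.01, 1e3, 1.0       # hyperparameters reported in the paper
n_runs, test_size = 20, 500            # 20 simulations, 500 test preference examples
# accuracies = [run_one_simulation(lam, gamma, rho, test_size) for _ in range(n_runs)]
# mean_accuracy = np.mean(accuracies)   # the quantity compared per n in Tables 1 and 2
```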