Learning Preference Models with Sparse Interactions of Criteria

Authors: Margot Herin, Patrice Perny, Nataliya Sokolovska

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Section 4 presents numerical tests evaluating the proposed approach in terms of both computation time and generalization performance. |
| Researcher Affiliation | Academia | Margot Herin (1), Patrice Perny (1), Nataliya Sokolovska (2); (1) Sorbonne University, CNRS, LIP6, Paris, France; (2) Sorbonne University, CNRS, LCQB, Paris, France |
| Pseudocode | No | The paper describes algorithmic steps and propositions but does not include any clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper provides no concrete access to source code for the described methodology; no repository link or explicit code-release statement was found. |
| Open Datasets | No | Experiments use synthetic preference data: preferences are generated from randomly drawn sparse Möbius vectors m (verifying monotonicity constraints), and utility vectors x, y are drawn uniformly within [0, 1]^n (see the generation sketch after the table). |
| Dataset Splits | No | The training sets contain 500 examples (preference pairs P plus indifference pairs I) and the test sets contain 1000 preference pairs; no explicit mention of a validation split. |
| Hardware Specification | Yes | All tests were conducted on a 2.8 GHz Intel Core i7 processor with 16 GB of RAM. |
| Software Dependencies | Yes | The Gurobi mathematical programming solver (version 9.1.2) was used. |
| Experiment Setup | Yes | The regularization parameter is set to λ = 1. For the D-IRLS method, the smoothing parameter is set to η = 10^-50, and the algorithm terminates when the l2 norm of m^(k+1) − m^(k) is at most 10^-3; coefficients with absolute values smaller than 10^-5 are discarded at each iteration (see the IRLS sketch below). |
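
As a reproduction aid, here is a minimal sketch of the synthetic data generation quoted in the Open Datasets row. It assumes a 2-additive Choquet model for concreteness; the function names, the sparsity level, and the nonnegativity shortcut used to satisfy the monotonicity constraints are our assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch of the synthetic benchmark: a sparse Möbius vector
# defines a Choquet integral, which labels uniformly drawn utility pairs.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_sparse_mobius(n, n_nonzero, rng):
    """Draw a sparse, nonnegative, normalized Möbius vector over singletons
    and pairs; nonnegativity is one simple way to guarantee monotonicity."""
    subsets = [(i,) for i in range(n)] + list(itertools.combinations(range(n), 2))
    m = np.zeros(len(subsets))
    support = rng.choice(len(subsets), size=n_nonzero, replace=False)
    m[support] = rng.random(n_nonzero)
    m /= m.sum()  # capacity of the full criterion set equals 1
    return subsets, m

def choquet(x, subsets, m):
    """2-additive Choquet integral via its Möbius transform:
    C(x) = sum over subsets S of m(S) * min_{i in S} x_i."""
    return sum(mS * min(x[i] for i in S) for S, mS in zip(subsets, m))

n = 6
subsets, m = random_sparse_mobius(n, n_nonzero=5, rng=rng)

# Preference pairs: utilities are uniform in [0, 1]^n, and the alternative
# with the larger Choquet value is recorded as the preferred one.
pairs = []
while len(pairs) < 500:
    x, y = rng.random(n), rng.random(n)
    cx, cy = choquet(x, subsets, m), choquet(y, subsets, m)
    if cx > cy:
        pairs.append((x, y))
    elif cy > cx:
        pairs.append((y, x))
```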
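The Experiment Setup row combines three numeric tolerances. The skeleton below is a hedged illustration of how a smoothing parameter, a stopping tolerance, and a pruning threshold typically interact in an iteratively reweighted least squares loop; `solve_weighted_ls` and the reweighting formula are placeholders, not the paper's D-IRLS formulation.

```python
# Hypothetical IRLS skeleton showing where the reported hyperparameters act;
# the inner weighted least-squares solve is left abstract on purpose.
import numpy as np

ETA = 1e-50    # smoothing term in the reweighting, as reported
TOL = 1e-3     # stop when the l2 norm of the update is at most TOL
PRUNE = 1e-5   # discard coefficients with absolute value below PRUNE
LAMBDA = 1.0   # regularization weight, as reported

def irls(solve_weighted_ls, m0, max_iter=100):
    m = m0.copy()
    for _ in range(max_iter):
        # Smoothed l1 reweighting: weights proportional to LAMBDA / (|m_j| + ETA).
        w = LAMBDA / (np.abs(m) + ETA)
        m_new = solve_weighted_ls(w)          # inner weighted least-squares solve
        m_new[np.abs(m_new) < PRUNE] = 0.0    # prune near-zero coefficients
        if np.linalg.norm(m_new - m) <= TOL:  # reported stopping criterion
            return m_new
        m = m_new
    return m

# Example inner solve for a least-squares fit A @ m ≈ b (illustrative only):
# m = irls(lambda w: np.linalg.solve(A.T @ A + np.diag(w), A.T @ b), m0)
```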