Learning GAI-Decomposable Utility Models for Multiattribute Decision Making

Authors: Margot Herin, Patrice Perny, Nataliya Sokolovska

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical tests are performed to demonstrate the practical efficiency of the learning approach. This section presents the results of numerical tests performed on synthetic and real-world preference data.
Researcher Affiliation | Academia | Margot Herin (1), Patrice Perny (1), Nataliya Sokolovska (2); (1) LIP6, Sorbonne University, Paris; (2) LCQB, Sorbonne University, Paris
Pseudocode | No | The paper contains mathematical formulations and derivations, but it does not include any clearly labeled pseudocode or algorithm blocks describing the proposed method's steps.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We use Employee Selection (ESL)... Lecture Evaluation (LEV)... Employee Rejection/Acceptance (ERA) [1]... Then from the UCI repository, we use CPU and Car MPG (MPG)... Finally, we use the Movehub city ranking [2] (CITY) dataset... [1] www.openml.org (ESL, LEV and ERA) [2] www.kaggle.com/datasets/blitzr/movehub-city-rankings
Dataset Splits | Yes | Each dataset is split to produce a training set containing 80% of the examples and a test set with the remaining 20%. The regularization hyperparameters C and λ are selected by cross-validation using a number of folds equal to 3.
Hardware Specification | Yes | All tests are conducted on a 2.8 GHz Intel Core i7 processor with 16GB RAM and we used the mathematical programming Gurobi solver (version 9.1.2).
Software Dependencies | Yes | All tests are conducted on a 2.8 GHz Intel Core i7 processor with 16GB RAM and we used the mathematical programming Gurobi solver (version 9.1.2).
Experiment Setup | Yes | We implement our method, called SMKGAI for Sparse Multiple Kernel GAI, with the Gaussian RBF kernel using σ = 1. The tolerance threshold ε is set to 0.01 and the regularization hyperparameters C and λ are selected by cross-validation using a number of folds equal to 3.
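
The sketches below are not taken from the paper; they are minimal, hedged reconstructions of the experimental details quoted in the table. The first one covers the Open Datasets and Dataset Splits rows: fetching one of the OpenML benchmarks, making the 80/20 split, and tuning a regularization parameter by 3-fold cross-validation. Since the authors' SMKGAI implementation is not released, a scikit-learn kernel regressor is used as a stand-in, and the OpenML dataset name "ESL" and the random seed are assumptions.

```python
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR  # stand-in kernel model; SMKGAI itself is not released

# Fetch one of the OpenML benchmarks cited in the paper.
# Assumption: Employee Selection is published on www.openml.org under the name "ESL".
esl = fetch_openml(name="ESL", version=1, as_frame=True)
X = esl.data.to_numpy(dtype=float)
y = esl.target.astype(float).to_numpy()

# 80% training / 20% test split, as stated in the Dataset Splits row.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0  # random_state is an assumption
)

# 3-fold cross-validation for the regularization strength.
# The paper tunes C and a sparsity weight lambda for SMKGAI; the stand-in
# model only exposes C, so this grid is illustrative rather than faithful.
search = GridSearchCV(
    SVR(kernel="rbf", gamma=0.5),  # gamma = 1 / (2 * sigma^2) with sigma = 1
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=3,
)
search.fit(X_train, y_train)
print("selected C:", search.best_params_["C"])
print("stand-in test R^2:", search.score(X_test, y_test))
```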
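For the Hardware Specification and Software Dependencies rows, the only software the paper pins down is Gurobi 9.1.2. A small environment check with the Gurobi Python bindings (assuming gurobipy is installed and licensed; the paper does not state which Gurobi API was used) could look like this:

```python
import gurobipy as gp

# Report the installed Gurobi version; the paper used 9.1.2.
major, minor, technical = gp.gurobi.version()
print(f"Gurobi {major}.{minor}.{technical}")

# Trivial model to confirm the solver and license are usable.
m = gp.Model("smoke_test")
x = m.addVar(lb=0.0, name="x")
m.setObjective(x, gp.GRB.MINIMIZE)
m.addConstr(x >= 1.0, name="c0")
m.optimize()
print("objective value:", m.ObjVal)
```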
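The Experiment Setup row fixes the Gaussian RBF kernel with σ = 1. Under the common parameterization k(x, z) = exp(-||x - z||^2 / (2σ^2)), which is an assumption since the paper's exact convention is not quoted here, σ = 1 corresponds to scikit-learn's gamma = 0.5. A small numpy sketch of that kernel:

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """Gaussian RBF kernel k(x, z) = exp(-||x - z||^2 / (2 * sigma^2))."""
    # Pairwise squared Euclidean distances between rows of X and rows of Z.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

# Example: kernel matrix on three random 2-attribute alternatives.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))
K = rbf_kernel(X, X, sigma=1.0)  # sigma = 1 as in the paper's experiments
print(np.round(K, 3))            # symmetric, with ones on the diagonal
```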