Machine Learning for Utility Prediction in Argument-Based Computational Persuasion

Authors: Ivan Donadello, Anthony Hunter, Stefano Teso, Mauro Dragoni (pp. 5592–5599)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate EAI and EDS in a simulation setting and in a realistic case study concerning healthy eating habits. Results are promising in both cases, but EDS is more effective at predicting useful utility functions.
Researcher Affiliation | Academia | 1 Free University of Bozen-Bolzano, Italy; 2 University College London, United Kingdom; 3 Fondazione Bruno Kessler, Italy; 4 University of Trento, Italy
Pseudocode | Yes | Algorithm 1: SimDialogue(T, L, u_p, u_o, δ)
Open Source Code | Yes | The source code and the supplementary material are online at shorturl.at/oyKV3
Open Datasets | No | The paper describes the generation of synthetic datasets and the creation of a dataset from user profiles, but it does not provide concrete access information (e.g., URL, DOI, or formal citation to an existing public dataset) for a publicly available or open dataset.
Dataset Splits | Yes | We use the k-fold cross-validation technique. The dataset U^o_{T,i} is split into k parts; k−1 parts are used as the training set for SimDialogue(ML), and the remaining part is held out as the test set for both SimDialogue(ML) and SimDialogue. In this way, k splits/folds of the original dataset U^o_{T,i} are obtained, and for each split we run both SimDialogue(ML) and SimDialogue. ... k = 5 for the k-fold cross validation.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions software components like SVR, KMeans, and Random Forest, but does not provide specific version numbers for these or other dependencies, which are necessary for reproducibility.
Experiment Setup | Yes | The hyperparameters for SVR are C = 1, ε = 0.1, and the radial basis function as kernel. ... The random forest in CRAMER has 100 estimators, with the minimum number of samples required i) to split a node being 2 and ii) to be a leaf being 1. ... Other parameters have a single value: the number of synthetic trees (|T| = 10) and datasets (|U^o_T| = 10), the size of U^o_{T,i} is 2000, the cluster variance σ²_C is 1.0, the discount factor δ in Bimaximax is 1 as it is not relevant for the simulations, and k = 5 for the k-fold cross validation.
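The reported experiment setup can be expressed directly as scikit-learn estimators. This is a hedged sketch, not the authors' released code: the estimator choices (`SVR`, `RandomForestRegressor`) and the toy data are assumptions; only the hyperparameter values (C = 1, ε = 0.1, RBF kernel; 100 trees, min_samples_split = 2, min_samples_leaf = 1) come from the paper.

```python
# Sketch of the paper's reported hyperparameters as scikit-learn estimators.
# The estimator classes and toy data are illustrative assumptions; the
# hyperparameter values are those stated in the Experiment Setup row.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

# SVR: C = 1, epsilon = 0.1, radial basis function kernel
svr = SVR(C=1.0, epsilon=0.1, kernel="rbf")

# Random forest in CRAMER: 100 estimators, min samples to split a node = 2,
# min samples at a leaf = 1 (these are also sklearn's defaults)
rf = RandomForestRegressor(n_estimators=100, min_samples_split=2, min_samples_leaf=1)

# Fit on synthetic data just to show the estimators are usable as configured.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 4)), rng.normal(size=50)
svr.fit(X, y)
rf.fit(X, y)
```

Note that these SVR and random-forest values coincide with scikit-learn's defaults, which may explain why the paper lists them without further tuning details.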
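The evaluation protocol described under Dataset Splits can be sketched as a plain k-fold partition: each dataset U^o_{T,i} of 2000 profiles is split into k = 5 folds, with k−1 folds training SimDialogue(ML) and the held-out fold testing both simulators. The helper below is illustrative, assuming index-based splits; it is not the authors' code.

```python
# Illustrative sketch of the paper's k-fold protocol (k = 5, |U^o_{T,i}| = 2000).
# The function name and contiguous-fold choice are assumptions for illustration.

def kfold_indices(n, k=5):
    """Partition range(n) into k contiguous folds; yield (train, test) index lists."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    indices = list(range(n))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]          # held-out fold: tests both simulators
        train = indices[:start] + indices[start + size:]  # k-1 folds: train SimDialogue(ML)
        yield train, test
        start += size

splits = list(kfold_indices(2000, k=5))  # 5 folds of 400 test / 1600 train profiles
```

With n = 2000 and k = 5, every fold holds out 400 profiles and trains on the remaining 1600, matching the split sizes implied by the paper's setup.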