Expertise Trees Resolve Knowledge Limitations in Collective Decision-Making

Authors: Axel Abels, Tom Lenaerts, Vito Trianni, Ann Nowé

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide theoretical insights and empirically validate the improved performance of our novel approach on a range of problems for which existing methods proved to be inadequate."
Researcher Affiliation | Academia | "(1) Machine Learning Group, Université Libre de Bruxelles, Brussels, Belgium; (2) AI Lab, Vrije Universiteit Brussel; (3) Center for Human-Compatible AI, UC Berkeley, Berkeley, USA; (4) Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy."
Pseudocode | Yes | "Algorithm 1 (Incremental) Expertise Tree"
Open Source Code | Yes | "The code to reproduce these results is provided at https://github.com/axelabels/ExpertiseTrees."
Open Datasets | Yes | "We evaluate on a variety of datasets presenting a diversity of feature distributions, arm counts, and reward distributions over the arms, chosen from the openml data repository (Vanschoren et al., 2014)." (see the loading sketch below the table)
Dataset Splits | No | The paper does not explicitly provide training, validation, or test dataset splits (e.g., percentages or sample counts), nor does it mention cross-validation. It instead describes how the bandit problem is simulated and evaluated over time steps (see the simulation sketch below the table).
Hardware Specification | No | "The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation Flanders (FWO) and the Flemish Government." No specific GPU or CPU models, or other detailed hardware specifications, are provided.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries and their versions).
Experiment Setup | Yes | "Our results are averaged over 100 simulations, and for a varying number of arms and experts. The dataset selection and processing is provided in the supplementary information. ... We consider changes in expertise characterized by expertise heatmaps which map expertise contexts to expert quality. Each round, experts advise on a context vector x_t, from which a subset of g features (chosen randomly at the start of the experiment) form the expertise context z_t. ... Heatmaps are obtained by assigning for each expert an expertise value of 0 or 1 to each of the regions (either 1, 4, 16 or 64 regions). ... results are averaged over T = 1000 steps for all expert counts (N ∈ {4, 32})." (see the heatmap and simulation sketches below the table)
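
The Open Datasets row cites the OpenML repository, but the paper's exact dataset list and preprocessing are only given in its supplementary material. The snippet below is a minimal sketch of how an OpenML classification dataset could be pulled and recast as a contextual bandit (contexts = feature vectors, arms = classes); the min-max scaling and the helper name load_bandit_dataset are assumptions made for illustration, not the authors' pipeline.

```python
# Illustrative sketch only, not the authors' pipeline. Assumes a numeric
# OpenML classification dataset; the class labels serve as arms.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.preprocessing import MinMaxScaler


def load_bandit_dataset(name: str, seed: int = 0):
    """Fetch an OpenML dataset and return shuffled (contexts, arm labels)."""
    bunch = fetch_openml(name=name, as_frame=False)
    X = MinMaxScaler().fit_transform(bunch.data.astype(float))  # contexts scaled to [0, 1]
    _, y = np.unique(bunch.target, return_inverse=True)         # class labels -> arm indices
    order = np.random.default_rng(seed).permutation(len(y))     # shuffle the round order
    return X[order], y[order]
```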
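The Experiment Setup row describes expertise heatmaps that assign each expert a 0/1 expertise value over 1, 4, 16, or 64 regions of the expertise context z_t (a random subset of g features of x_t). Below is a minimal sketch of one way such a heatmap could be generated and queried; the square-grid region layout, the use of a 2-D expertise context, and the function names are assumptions, not the paper's construction.

```python
import numpy as np


def make_expertise_heatmap(n_experts: int, n_regions: int, rng) -> np.ndarray:
    """Draw a binary expertise value (0 or 1) per expert and per region."""
    return rng.integers(0, 2, size=(n_experts, n_regions))


def region_of(z, n_regions: int) -> int:
    """Map a 2-D expertise context z in [0, 1]^2 to a region index.

    Assumes n_regions is a perfect square (1, 4, 16, 64) laid out on a
    sqrt(n_regions) x sqrt(n_regions) grid -- an illustrative layout only.
    """
    side = int(np.sqrt(n_regions))
    cell = np.clip((np.asarray(z) * side).astype(int), 0, side - 1)
    return int(cell[0] * side + cell[1])
```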
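The same row reports T = 1000 rounds per run, N ∈ {4, 32} experts, and averaging over 100 simulations. The loop below, which reuses the helpers from the two sketches above, shows how one such run could be simulated: experts whose heatmap value is 1 in the current region recommend the correct arm with high probability, while the others guess near-randomly. A plain majority vote stands in for the paper's expertise-tree learner purely as a placeholder; p_good and the 2-feature expertise context are likewise assumptions.

```python
import numpy as np


def run_simulation(X, y, heatmap, n_regions, T=1000, p_good=0.9, seed=0):
    """Simulate T bandit rounds with region-dependent expert quality.

    Majority voting is only a placeholder aggregator so the loop runs end to
    end; it is not the expertise-tree method evaluated in the paper.
    """
    rng = np.random.default_rng(seed)
    n_experts, n_arms = heatmap.shape[0], int(y.max()) + 1
    correct = 0
    for t in range(T):
        x_t, best_arm = X[t % len(X)], y[t % len(X)]
        z_t = x_t[:2]                       # assumed 2-feature expertise context
        r = region_of(z_t, n_regions)
        # Experts with expertise 1 in this region are right with prob. p_good,
        # the rest only at chance level.
        p_correct = np.where(heatmap[:, r] == 1, p_good, 1.0 / n_arms)
        advice = np.where(rng.random(n_experts) < p_correct,
                          best_arm,
                          rng.integers(0, n_arms, size=n_experts))
        chosen = np.bincount(advice, minlength=n_arms).argmax()  # placeholder aggregation
        correct += int(chosen == best_arm)
    return correct / T


# Example usage (the dataset name is hypothetical):
# X, y = load_bandit_dataset("vehicle")
# hm = make_expertise_heatmap(n_experts=4, n_regions=16, rng=np.random.default_rng(0))
# print(run_simulation(X, y, hm, n_regions=16))
```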