Neural Representation and Learning of Hierarchical 2-additive Choquet Integrals

Authors: Roman Bresson, Johanne Cohen, Eyke Hüllermeier, Christophe Labreuche, Michèle Sebag

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical validation of NEUR-HCI on real-world and artificial benchmarks demonstrates the merits of the approach compared to state-of-the-art baselines. This section reports on the empirical performance of NEUR-HCI comparatively to the state of the art.
Researcher Affiliation | Collaboration | 1 Thales Research and Technology, 91767 Palaiseau, France; 2 LRI, CNRS-INRIA, Université Paris-Saclay, 91400 Orsay, France; 3 Department of Computer Science, Paderborn University, 33098 Paderborn, Germany
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes the neural network architecture and its components in narrative text and diagrams (Figures 2 and 3).
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It does not include a specific repository link or an explicit code release statement.
Open Datasets | Yes | The standard MCDM benchmarks include CPU, CEV, LEV, MPG, Den Bosch (DB), Mammographics (MG), Journal, Boston Housing, Titanic and the Dagstuhl-15512 Arguments Quality corpus [Wachsmuth et al., 2017]. The last one, reporting the preferences of three decision makers, yields three sub-datasets referred to as Arguments 1, Arguments 2, Arguments 3 (each one being associated with a single decision maker). Footnotes provide dataset URLs: CEV: https://archive.ics.uci.edu/ml/datasets/car+evaluation; Journal: https://cs.uni-paderborn.de/?id=63916; Boston Housing: http://lib.stat.cmu.edu/datasets/boston; Titanic: https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/problem12.html; Arguments Quality corpus: http://argumentation.bplaced.net/arguana/data
Dataset Splits | Yes | Each dataset is randomly split into 80% train and 20% test sets; the performance of the model trained from the train set is measured on the test set, and averaged over 1,000 random splits. (An illustrative sketch of this split protocol appears below the table.)
Hardware Specification | Yes | The MLP and NEUR-HCI computational costs are below 5 minutes for each dataset on an Intel i7.
Software Dependencies | No | The paper mentions a 'Matlab implementation' for CUR but does not provide specific version numbers for any software dependencies or libraries used for NEUR-HCI, such as neural network frameworks or programming language versions.
Experiment Setup | Yes | NEUR-HCI hyper-parameters include the regularization weight K, set to 0 after a few preliminary experiments. The actual number of sigmoids is minimized through L1 regularization (Eq. 10). Multilayer perceptron (MLP) with 1 fully connected hidden layer of n² neurons, sigmoid activation function. (A hedged sketch of this MLP baseline appears below the table.)
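
The split-and-average protocol quoted in the Dataset Splits row is straightforward to mirror. The following is a minimal sketch, assuming a scikit-learn-style estimator exposing fit/score methods; `make_model` and the score convention are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the evaluation protocol: repeated random 80%/20% splits,
# with the test-set score averaged over n_repeats runs (1,000 in the paper).
# `make_model` is a hypothetical factory returning a fresh estimator.
import numpy as np
from sklearn.model_selection import train_test_split

def repeated_holdout(make_model, X, y, n_repeats=1000, seed=0):
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=rng.randint(2**31 - 1))
        model = make_model()          # fresh model for every split
        model.fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))
    return float(np.mean(scores)), float(np.std(scores))
```

Re-instantiating the model inside the loop keeps each of the 1,000 runs independent, which is what averaging over random splits requires.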
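The MLP baseline in the Experiment Setup row is described only at the architecture level (one fully connected hidden layer of n² sigmoid units). The sketch below is one possible reading, assuming n denotes the number of input criteria and using PyTorch purely for illustration; the paper does not name a framework, and the output activation is an assumption.

```python
# Hedged sketch of the MLP baseline: one hidden layer of n**2 sigmoid units.
# PyTorch is an assumption; the output sigmoid (scores in [0, 1]) is also an
# assumption, not stated in the quoted setup.
import torch.nn as nn

def make_mlp_baseline(n_criteria: int) -> nn.Sequential:
    hidden = n_criteria ** 2
    return nn.Sequential(
        nn.Linear(n_criteria, hidden),
        nn.Sigmoid(),
        nn.Linear(hidden, 1),
        nn.Sigmoid(),
    )
```

For NEUR-HCI itself, the quoted setup only fixes the regularization weight K to 0 and notes that the number of sigmoids is kept small via the L1 term of Eq. 10; in practice such a penalty would be added to the training loss as a scaled sum of absolute values of the corresponding weights.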