On Belief Change for Multi-Label Classifier Encodings

Authors: Sylvie Coste-Marquis, Pierre Marquis

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We characterize the family of rectification operators from an axiomatic perspective and exhibit operators from this family. We identify the standard belief change postulates that every rectification operator satisfies and those it does not. We also focus on some computational aspects of rectification and compliance.
Researcher Affiliation | Academia | Sylvie Coste-Marquis (1) and Pierre Marquis (1,2); (1) CRIL, Univ. Artois & CNRS, France; (2) Institut Universitaire de France, France; {coste, marquis}@cril.fr
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks; it focuses on theoretical definitions and propositions.
Open Source Code | No | The paper does not provide any links to open-source code for the methodology described. It mentions a URL for a version of the paper with full proofs, not code.
Open Datasets | No | The paper is theoretical and does not use or provide access to a specific dataset for training. It mentions "training set" only in a general discussion of machine-learning concepts, not as part of its own experimental setup.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | The paper does not specify any hardware; its focus is on theoretical contributions and computational-complexity analysis rather than empirical experiments requiring specific hardware.
Software Dependencies | No | The paper mentions formalisms such as the DNNF language and SDDs but does not list any specific software or library dependencies with version numbers that would be needed to implement the theoretical concepts in practice.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or system-level training settings.