A Persuasive Approach to Combating Misinformation

Authors: Safwan Hossain, Andjela Mladenovic, Yiling Chen, Gauthier Gidel

ICML 2024

Reproducibility Assessment (each entry lists the reproducibility variable, its result, and the supporting LLM response)
Research Type: Experimental
LLM Response: "Lastly, we experimentally validate that our approach significantly reduces misinformation in both the single round and performative setting." From Section 7 (Experiments): "We now experimentally validate our approach: specifically, while we provide detailed theoretical results on the platform utility under optimal signaling, it is instructive to see how this translates into reducing misinformation sharing."
Researcher Affiliation: Academia
LLM Response: "1 Harvard University, 2 Mila, Université de Montréal."
Pseudocode: No
LLM Response: No structured pseudocode or algorithm blocks are provided; the paper describes mathematical formulations and theoretical proofs.
Open Source Code: No
LLM Response: The paper does not contain any explicit statement about releasing source code, nor a link to a code repository for the described methodology.
Open Datasets: No
LLM Response: "Due to a lack of public data, we create a synthetic dataset for the three components of a noisy persuasion instance: the prior distribution, platform utility, and user utility." No access information is provided for this synthetic dataset.
Dataset Splits: No
LLM Response: The paper does not provide dataset split information (percentages, sample counts, or citations to predefined splits) for training, validation, or test sets. It mentions "validation/popularity states" and a "validation classifier", but these refer to components of its model, not to data splits.
Hardware Specification: No
LLM Response: The paper does not provide any specific hardware details (e.g., GPU/CPU models or memory amounts) used for running its experiments.
Software Dependencies: No
LLM Response: The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup: No
LLM Response: The paper describes how the synthetic dataset was created and how classifier error was varied ("For each instance, we vary the classification error between 0 to 0.4 (the error is equally divided amongst all the incorrect classes)"), but it does not provide specific hyperparameters or training configurations for any model used in the experiments.
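The error-splitting scheme quoted above can be read as constructing a row-stochastic confusion matrix for the noisy classifier. The sketch below is an illustrative reconstruction under that reading, not code from the paper; the number of states and the sweep granularity are assumptions:

```python
import numpy as np

def noisy_confusion_matrix(n_states: int, error: float) -> np.ndarray:
    """Confusion matrix where each true state is misclassified with total
    probability `error`, split equally among the n_states - 1 incorrect
    classes (the scheme described in the quoted experiment setup)."""
    m = np.full((n_states, n_states), error / (n_states - 1))
    np.fill_diagonal(m, 1.0 - error)
    return m

# Sweep the classification error from 0 to 0.4, as in the experiments
# (5 sweep points is an assumption; the paper does not state the step size).
for err in np.linspace(0.0, 0.4, 5):
    m = noisy_confusion_matrix(3, err)
    assert np.allclose(m.sum(axis=1), 1.0)  # each row is a valid distribution
```

At error = 0 the matrix is the identity (a perfect classifier), and at error = 0.4 each off-diagonal entry carries an equal share of the misclassification mass.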