Uncertainty Calibration for Ensemble-Based Debiasing Methods

Authors: Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Zhi-Ming Ma, Yanyan Lan

NeurIPS 2021

Reproducibility assessment (variable, result, and supporting evidence):
Research Type: Experimental
  "Experimental results on NLI and fact verification tasks show that our proposed three-stage debiasing framework consistently outperforms the traditional two-stage one in out-of-distribution accuracy." (Section 6, Experiments)
Researcher Affiliation: Collaboration
  Ruibin Xiong (1,2,3), Yimeng Chen (2,4), Liang Pang (2,5), Xueqi Cheng (1,2), Zhi-Ming Ma (2,4), and Yanyan Lan (6)
  1. CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences
  2. University of Chinese Academy of Sciences
  3. Baidu Inc.
  4. Academy of Mathematics and Systems Science, Chinese Academy of Sciences
  5. Data Intelligence System Research Center, Institute of Computing Technology, Chinese Academy of Sciences
  6. Institute for AI Industry Research, Tsinghua University
Pseudocode: No
  The paper describes its methods in text and mathematical formulations but does not include any pseudocode or algorithm blocks.
Open Source Code: No
  The paper contains no explicit statement about releasing source code and no link to a code repository for the proposed methods.
Open Datasets: Yes
  "We conduct experiments on both fact verification and natural language inference... For this task, we use the training dataset provided by the FEVER challenge [31]... Finally, Fever-Symmetric datasets [30] (both version 1 and 2) are used as the test sets for evaluation. ... In this paper, we conduct our experiments on MNLI [37]..."
Dataset Splits: Yes
  "The processing and split of the dataset into training/development set are conducted following Schuster et al. [30]."
Hardware Specification: No
  The paper does not provide hardware details (GPU/CPU models, processor types, or memory) for its experiments.
Software Dependencies: No
  The paper mentions a 'BERT-based classifier' but does not give version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup: Yes
  "For Learned-Mixin, the entropy term weight is set to the value suggested by Utama et al. [34]. For the Dirichlet calibrator, we set λ = 0.06 for all experiments, based on the in-distribution performance on the development sets."
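For context on the quoted setup: a Dirichlet calibrator maps a classifier's predicted probabilities through a learned linear transform of their logs, with the regularization weight (the paper's λ = 0.06) penalizing off-diagonal entries. Below is a minimal NumPy sketch assuming the standard Dirichlet calibration formulation of Kull et al. (2019) with ODIR-style off-diagonal regularization; the plain gradient-descent optimizer, learning rate, and step count are illustrative choices, not details from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_dirichlet_calibrator(probs, labels, lam=0.06, lr=0.01, steps=4000):
    """Fit W, b on log-probabilities by gradient descent on
    NLL + lam * ||off-diagonal(W)||^2 (ODIR-style regularization)."""
    n, k = probs.shape
    logp = np.log(np.clip(probs, 1e-12, 1.0))   # features: log-probabilities
    W, b = np.eye(k), np.zeros(k)               # identity start = no change
    onehot = np.eye(k)[labels]
    offdiag = 1.0 - np.eye(k)                   # penalize off-diagonal only
    for _ in range(steps):
        q = softmax(logp @ W.T + b)
        g = (q - onehot) / n                    # gradient w.r.t. logits
        W -= lr * (g.T @ logp + 2 * lam * W * offdiag)
        b -= lr * g.sum(axis=0)
    return W, b

def calibrate(probs, W, b):
    """Apply the fitted calibration map to new predictions."""
    logp = np.log(np.clip(probs, 1e-12, 1.0))
    return softmax(logp @ W.T + b)
```

A large λ pushes W toward a diagonal (temperature-scaling-like) map, while a small value such as 0.06 still allows mild cross-class corrections; per the quote, the paper picked λ on in-distribution dev performance.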