Beyond Unimodal: Generalising Neural Processes for Multimodal Uncertainty Estimation

Authors: Myong Chol Jung, He Zhao, Joanna Dipnall, Lan Du

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In extensive empirical evaluation, our method achieves state-of-the-art multimodal uncertainty estimation performance, showing its appealing robustness against noisy samples and reliability in out-of-distribution detection with faster computation time compared to the current state-of-the-art multimodal uncertainty estimation method." "We conduct rigorous experiments on seven real-world datasets and achieve the new SOTA performance in classification accuracy, calibration, robustness to noise, and OOD detection." "We evaluated the proposed method on the seven real-world datasets and compared its performance against seven unimodal and multimodal baselines."
Researcher Affiliation | Academia | Myong Chol Jung (Monash University, david.jung@monash.edu); He Zhao (CSIRO's Data61, he.zhao@ieee.org); Joanna Dipnall (Monash University, jo.dipnall@monash.edu); Lan Du (Monash University, lan.du@monash.edu)
Pseudocode | Yes | "Figure 2d: Pseudocode of MNPs"
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology.
Open Datasets | Yes | "In this experiment, we evaluated the classification performance of MNPs and the robustness to noisy samples with six multimodal datasets [24, 17]. These datasets lie within a feature space where each feature extraction method can be found in [17]." "Table 6: Multimodal datasets used for evaluating robustness to noisy samples." "We used CIFAR10-C [18] which consists of corrupted images of CIFAR10 [31]."
Dataset Splits | No | "Following [24] and [17], we normalised the datasets and used a train-test split of 0.8:0.2." This specifies a train-test split but does not explicitly define a separate validation split.
Hardware Specification | Yes | "All the experiments were conducted on a single NVIDIA GeForce RTX 3090 GPU."
Software Dependencies | No | "For all the experiments, we used the Adam optimiser [30] with batch size of 200 and the Tensorflow framework." While TensorFlow is mentioned, no specific version number for the framework or other software dependencies is provided.
Experiment Setup | Yes | "For all the experiments, we used the Adam optimiser [30] with batch size of 200 and the Tensorflow framework." "Table 7: Hyperparameters of MNPs."
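The 0.8:0.2 protocol quoted under Dataset Splits can be sketched as follows. This is a minimal illustration of what such a split could look like, not the authors' code; the function name and the choice to normalise before splitting are assumptions, and, as in the quoted text, no separate validation split is carved out.

```python
import numpy as np

def normalise_and_split(features, labels, train_frac=0.8, seed=0):
    """Normalise features, then split into train/test at `train_frac` (0.8:0.2).

    Hypothetical sketch: normalisation statistics are computed over the
    full dataset, and no validation split is produced, mirroring the
    protocol quoted above.
    """
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # guard against zero-variance features
    features = (features - mean) / std

    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    cut = int(train_frac * len(features))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])
```

Without a held-out validation set, any hyperparameter tuning would have to rely on the training portion alone, which is why the assessment marks this variable "No".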
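The Experiment Setup row names only the Adam optimiser and a batch size of 200; the actual hyperparameters live in the paper's Table 7 and are not reproduced here. As a hedged illustration of that setup, a minimal NumPy sketch of one bias-corrected Adam update (Kingma & Ba) and mini-batch indexing at the quoted batch size; the hyperparameter defaults below are common library defaults, not values reported in the paper:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One bias-corrected Adam update. lr/beta1/beta2/eps are generic
    defaults, not the paper's Table 7 values."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def minibatch_slices(n_samples, batch_size=200):
    """Index slices for mini-batches of the quoted size (200)."""
    return [slice(s, s + batch_size) for s in range(0, n_samples, batch_size)]
```

In practice the paper reports using TensorFlow, whose built-in Adam implements the same update; the sketch above only makes the quoted configuration concrete.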