Improving Evidential Deep Learning via Multi-Task Learning
Authors: Dongpin Oh, Bonggun Shin
AAAI 2022, pp. 7895-7903
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first qualitatively evaluate the MT-ENet using a synthetic regression dataset. Fig 4 represents our synthetic data and the model predictions. We evaluate the performance of the MT-ENet in comparison with strong baselines through the UCI regression benchmark datasets. The MT-ENet generally provides the best or comparable RMSE and NLL performances. The MT-ENet and MSE ENet successfully improve the predictive accuracy metrics (CI, MSE) for both datasets. |
| Researcher Affiliation | Industry | Dongpin Oh (Deargen Inc., Seoul, South Korea), Bonggun Shin (Deargen USA Inc., Atlanta, GA) |
| Pseudocode | No | The paper describes the proposed methods in narrative text and mathematical formulations. It does not include a dedicated pseudocode block or algorithm figure. |
| Open Source Code | Yes | Code is at https://github.com/deargen/MT-ENet. |
| Open Datasets | Yes | We evaluate the performance of the MT-ENet in comparison with strong baselines through the UCI regression benchmark datasets. Our experiments use two well-known benchmark datasets in the DTA literature: Davis (Davis et al. 2011) and Kiba (Tang et al. 2014). We examine the out-of-distribution (OOD) detection capability of the MT-ENet on the curated Binding DB dataset (Liu et al. 2007). |
| Dataset Splits | Yes | For a total of three times, we randomly split the Binding DB dataset into the training (80%), validation (10%), and test (10%) datasets. |
| Hardware Specification | No | The paper does not specify the hardware used for its experiments, such as particular CPU or GPU models, or details about the computing environment. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with their version numbers required to reproduce the experiments. |
| Experiment Setup | No | The paper states that 'The experimental settings and model architecture used in this study are identical to those used by (Hernández-Lobato and Adams 2015)' and 'The details of training and the model architectures are available in Appendix C.' However, the main text itself does not provide concrete hyperparameter values or system-level training settings. |
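The Dataset Splits row reports three independent random 80%/10%/10% train/validation/test partitions of the Binding DB dataset. A minimal sketch of that protocol, assuming index-level shuffling; the function name, dataset size, and seeds are illustrative, not taken from the paper:

```python
import random

def split_80_10_10(indices, seed):
    # Hypothetical helper: shuffle indices with a fixed seed, then
    # partition into 80% train, 10% validation, 10% test.
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Three independent repetitions, mirroring the "total of three times"
# protocol described for the Binding DB experiments (seeds are assumed).
splits = [split_80_10_10(list(range(1000)), seed) for seed in (0, 1, 2)]
```

Each repetition yields disjoint subsets whose union is the full index set, so reported metrics can be averaged across the three runs.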