Modeling Uncertainty by Learning a Hierarchy of Deep Neural Connections
Authors: Raanan Yehezkel Rohekar, Yaniv Gurwicz, Shami Nisimov, Gal Novik
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations of our method demonstrate significant improvement compared to state-of-the-art calibration and out-of-distribution detection methods. ... 4 Empirical Evaluation ... 4.1 An Ablation Study for Evaluating the Effect of Confounding with a Generative Process ... 4.2 Calibration ... 4.3 Out-of-Distribution Detection (a sketch of a standard calibration metric appears after the table) |
| Researcher Affiliation | Industry | Raanan Y. Rohekar Intel AI Lab raanan.yehezkel@intel.com Yaniv Gurwicz Intel AI Lab yaniv.gurwicz@intel.com Shami Nisimov Intel AI Lab shami.nisimov@intel.com Gal Novik Intel AI Lab gal.novik@intel.com |
| Pseudocode | Yes | Algorithm 1: BRAINet structure learning |
| Open Source Code | No | The paper does not provide concrete access to source code (no specific repository link, explicit code release statement, or code in supplementary materials). |
| Open Datasets | Yes | Empirical evaluations of our method demonstrate significant improvement compared to state-of-the-art calibration and out-of-distribution detection methods. ... for MNIST dataset [17] ... common UCI-repository [4] regression benchmarks ... ResNet-20 network [9], pre-trained on CIFAR-10 data. SVHN dataset [22] is used as the OOD samples. ... in-distribution: CIFAR-10, OOD: Tiny ImageNet. |
| Dataset Splits | No | The paper mentions 'training data' but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or test sets. |
| Hardware Specification | No | BRAINet structure learning algorithm is implemented using BNT [20] and runs efficiently on a standard desktop CPU. (This is too vague and does not provide specific hardware details like CPU model, memory, or GPU information.) |
| Software Dependencies | No | The paper mentions 'MLP-layers (dense), ReLU activations, ADAM optimization [14], a fixed learning rate, and batch normalization [12]', which are components and techniques, but does not list specific software dependencies with version numbers (e.g., Python, TensorFlow, or PyTorch versions). |
| Experiment Setup | Yes | In all experiments, we used MLP-layers (dense), ReLU activations, ADAM optimization [14], a fixed learning rate, and batch normalization [12]. Unless otherwise stated, each experiment was repeated 10 times. (A minimal reconstruction of this setup is sketched after the table.) |
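
The experiment-setup row names only the ingredients: dense MLP layers, ReLU activations, batch normalization, and ADAM with a fixed learning rate. Below is a minimal PyTorch sketch of such a configuration; the layer widths, depth, and the learning rate of 1e-3 are illustrative assumptions, as the paper does not report these values in the quoted text.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: the paper does not report layer widths here,
# so in_dim, hidden, and out_dim are illustrative placeholders.
in_dim, hidden, out_dim = 784, 128, 10

# Dense (MLP) layers with batch normalization and ReLU activations,
# matching the components listed in the paper's experiment setup.
model = nn.Sequential(
    nn.Linear(in_dim, hidden),
    nn.BatchNorm1d(hidden),
    nn.ReLU(),
    nn.Linear(hidden, hidden),
    nn.BatchNorm1d(hidden),
    nn.ReLU(),
    nn.Linear(hidden, out_dim),
)

# ADAM with a fixed (non-decayed) learning rate; 1e-3 is an assumed
# value, not one stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```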
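
The Research Type row cites the paper's calibration evaluation (Section 4.2) but does not reproduce the metric used. A standard metric for this kind of evaluation is expected calibration error (ECE); the NumPy sketch below shows one common formulation. The paper is not confirmed to use exactly this definition, and the bin count of 10 is a conventional default rather than a paper value.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: bin predictions by confidence and
    average |accuracy - confidence| weighted by bin size.

    confidences: max softmax probability per sample, shape (N,)
    correct: 1 if the prediction was right, else 0, shape (N,)
    n_bins: 10 equal-width bins is a common default, not a paper value.
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```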