Reducing Network Agnostophobia
Authors: Akshay Raj Dhamija, Manuel Günther, Terrance Boult
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on networks trained to classify classes from MNIST and CIFAR-10 show that our novel loss functions are significantly better at dealing with unknown inputs from datasets such as Devanagari, Not MNIST, CIFAR-100, and SVHN. |
| Researcher Affiliation | Academia | Akshay Raj Dhamija, Manuel Günther, and Terrance E. Boult, Vision and Security Technology Lab, University of Colorado Colorado Springs, {adhamija \| mgunther \| tboult} @ vast.uccs.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available: http://github.com/Vastlab/Reducing-Network-Agnostophobia |
| Open Datasets | Yes | Experiments on networks trained to classify classes from MNIST and CIFAR-10 show that our novel loss functions are significantly better at dealing with unknown inputs from datasets such as Devanagari, Not MNIST, CIFAR-100, and SVHN. |
| Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits with percentages or sample counts for reproduction, nor does it explicitly mention the use of a dedicated validation set for model tuning. |
| Hardware Specification | No | The paper does not specify the hardware used to run experiments, such as specific GPU or CPU models, or details about the computing environment. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow x.x, PyTorch x.x) for reproducing the experiments. |
| Experiment Setup | No | The paper does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text. |
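
The table above names MNIST and CIFAR-10 as the known training sets and Devanagari, Not MNIST, CIFAR-100, and SVHN as sources of unknown inputs, but records no concrete setup details. The sketch below is purely illustrative of such an open-set evaluation split, assuming a PyTorch/torchvision environment and using only one of the dataset pairings named above (CIFAR-10 as knowns, SVHN as unknowns); the single "unknown" label of -1 and every other choice here are assumptions, not the paper's documented protocol.

```python
# Illustrative open-set evaluation data setup (not the paper's exact protocol):
# known test images keep their class labels, unknown images get a single -1 label.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()

# Known classes: the CIFAR-10 test split (labels 0-9).
known_test = datasets.CIFAR10(root="data", train=False, download=True, transform=transform)

# Unknown inputs: SVHN test images, relabeled to a single "unknown" class (-1).
unknown_test = datasets.SVHN(root="data", split="test", download=True, transform=transform)
unknown_test.labels = [-1] * len(unknown_test)

# Combined loader: a network trained only on the ten known classes would be
# scored on how it handles both the known and the unknown samples.
loader = DataLoader(ConcatDataset([known_test, unknown_test]), batch_size=128, shuffle=False)
```

Because the paper (per the rows above) does not report splits, hardware, software versions, or hyperparameters, any such reproduction script would have to fill in these details independently.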