Unimodal Probability Distributions for Deep Ordinal Classification
Authors: Christopher Beckham, Christopher Pal
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate this approach in the context of deep learning on two large ordinal image datasets, obtaining promising results. |
| Researcher Affiliation | Academia | ¹Montréal Institute of Learning Algorithms, Québec, Canada. Correspondence to: Christopher Beckham <christopher.beckham@polymtl.ca>. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code will be made available here⁵. ⁵ https://github.com/christopher-beckham/deep-unimodalordinal |
| Open Datasets | Yes | Diabetic retinopathy¹. ... ¹ https://www.kaggle.com/c/diabetic-retinopathy-detection/ |
| Dataset Splits | Yes | A validation set is set aside, consisting of 10% of the patients in the training set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like Theano, Lasagne, and Keras but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | All experiments utilise an ℓ2 norm of 10⁻⁴, ADAM optimiser (Kingma & Ba, 2014) with initial learning rate 10⁻³, and batch size 128. A manual learning rate schedule is employed where we manually divide the learning rate by 10 when either the validation loss or valid set QWK plateaus (whichever plateaus last) down to a minimum of 10⁻⁴ for Adience and 10⁻⁵ for DR. |
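
The Dataset Splits row above states only that 10% of the patients in the training set are held out for validation. As a rough illustration (not the authors' code), the sketch below shows one way to reproduce a patient-level hold-out using scikit-learn's `GroupShuffleSplit`; the array names `image_paths`, `labels`, and `patient_ids` are placeholders assumed to be aligned by index.

```python
# Hypothetical patient-level validation split: hold out 10% of patients,
# ensuring no patient's images appear in both the train and validation sets.
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(image_paths, labels, patient_ids, val_fraction=0.1, seed=0):
    """Return (train_idx, val_idx) index arrays grouped by patient."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=val_fraction, random_state=seed)
    train_idx, val_idx = next(splitter.split(image_paths, labels, groups=patient_ids))
    return train_idx, val_idx
```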
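
The Experiment Setup row fixes the main hyperparameters (ℓ2 penalty 10⁻⁴, Adam at 10⁻³, batch size 128) but the learning-rate drops were applied manually by the authors. The sketch below is a minimal approximation in Keras (the paper mentions Theano, Lasagne, and Keras without versions): a `ReduceLROnPlateau` callback on validation loss stands in for the manual divide-by-10 schedule, and `build_model`, `x_train`, `y_train`, `x_val`, and `y_val` are placeholders, not the paper's architecture or data pipeline.

```python
# Approximate training configuration from the Experiment Setup row, sketched in
# Keras. ReduceLROnPlateau approximates the paper's *manual* LR schedule; the
# paper also monitored validation QWK, which is not reproduced here.
import tensorflow as tf

L2_COEF = 1e-4     # "ℓ2 norm of 10⁻⁴", interpreted as an ℓ2 weight penalty
INIT_LR = 1e-3     # initial Adam learning rate
MIN_LR = 1e-4      # minimum LR for Adience (10⁻⁵ for DR)
BATCH_SIZE = 128

def build_model(num_classes):
    # Placeholder architecture; the paper uses a deep CNN, not this toy network.
    reg = tf.keras.regularizers.l2(L2_COEF)
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax", kernel_regularizer=reg),
    ])

model = build_model(num_classes=5)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=INIT_LR),
              loss="sparse_categorical_crossentropy")

# Divide the learning rate by 10 when validation loss plateaus, down to MIN_LR.
lr_schedule = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.1, patience=5, min_lr=MIN_LR)

# x_train, y_train, x_val, y_val are assumed to be prepared elsewhere:
# model.fit(x_train, y_train, batch_size=BATCH_SIZE,
#           validation_data=(x_val, y_val), callbacks=[lr_schedule], epochs=100)
```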