Improving Deep Regression with Ordinal Entropy

Authors: Shihao Zhang, Linlin Yang, Michael Bi Mi, Xiaoxu Zheng, Angela Yao

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression.
Researcher Affiliation | Collaboration | National University of Singapore, Singapore; Huawei International Pte Ltd, Singapore
Pseudocode | No | No pseudocode or algorithm block was found in the paper.
Open Source Code | Yes | Code can be found here: https://github.com/needylove/OrdinalEntropy
Open Datasets | Yes | NYU-Depth-v2 (Silberman et al., 2012) provides indoor images with corresponding depth maps at a pixel resolution of 640×480.
Dataset Splits | No | No explicit training/validation/test splits with percentages or counts were provided. For the real-world datasets, the paper refers only to the 'train/test split used by previous works (Bhat et al., 2021; Yuan et al., 2022)'; for the synthetic data, it specifies '1k data as the training set and test on the testing set with 100k samples' without detailing a validation split.
Hardware Specification | No | No specific hardware details, such as GPU models (e.g., NVIDIA A100), CPU models, or other machine specifications, were provided for running the experiments.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) were provided for replicating the experiments.
Experiment Setup | Yes | We set the trade-off parameters λd and λt to the same value of 0.001, 1, 10, and 1 for operator learning, depth estimation, crowd counting, and age estimation, respectively.
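
As context for the quoted weights, the paper's objective combines a standard regression loss with a diversity term L_d and a tightness term L_t, weighted by λd and λt. The following is a minimal, hypothetical PyTorch sketch of how such a weighted combination could be assembled; the distance formulas used here are deliberate simplifications for illustration, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn.functional as F

def ordinal_entropy_losses(features, labels):
    """Hypothetical sketch of a diversity (L_d) and tightness (L_t) term.

    features: (N, D) sample features; labels: (N,) continuous targets.
    This is an illustrative simplification of the ordinal-entropy idea,
    not the authors' implementation.
    """
    z = F.normalize(features, dim=1)
    # Pairwise distances in feature space and in label space.
    feat_dist = torch.cdist(z, z, p=2)                              # (N, N)
    label_dist = torch.cdist(labels[:, None], labels[:, None], p=2)  # (N, N)
    mask = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # Diversity: push features apart, weighted by how far apart the labels are.
    l_d = -(label_dist[mask] * feat_dist[mask]).mean()
    # Tightness: pull features with similar labels together (rough stand-in
    # for the paper's center-based tightness term).
    l_t = (feat_dist[mask] / (label_dist[mask] + 1.0)).mean()
    return l_d, l_t

# Per-task weights quoted above: 0.001 (operator learning), 1 (depth
# estimation), 10 (crowd counting), 1 (age estimation); lambda_d = lambda_t.
lambda_d = lambda_t = 1.0  # e.g. depth estimation

features = torch.randn(8, 64)        # stand-in backbone features
labels = torch.rand(8)               # stand-in continuous targets
pred = features.mean(dim=1)          # stand-in regression head output
l_reg = F.mse_loss(pred, labels)
l_d, l_t = ordinal_entropy_losses(features, labels)
total = l_reg + lambda_d * l_d + lambda_t * l_t
print(total.item())
```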