Label Encoding for Regression Networks
Authors: Deval Shah, Zi Yu Xue, Tor Aamodt
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate BEL on four complex regression problems: head pose estimation, facial landmark detection, age estimation, and end-to-end autonomous driving. |
| Researcher Affiliation | Academia | Deval Shah, Zi Yu Xue & Tor M. Aamodt, Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada |
| Pseudocode | No | The paper describes methods through text and figures (e.g., Figure 1 for training/inference flow), but does not contain a dedicated pseudocode block or algorithm listing. |
| Open Source Code | Yes | Code is available at https://github.com/ubc-aamodt-group/BEL_regression. We have provided the training and inference code with trained models. |
| Open Datasets | Yes | We follow the evaluation setting of Hopenet (Ruiz et al., 2018) and FSA-Net (fsa, 2019) and use two evaluation protocols with three widely used datasets: 300W-LP (Zhu et al., 2016), BIWI (Fanelli et al., 2013), and AFLW2000 (Zhu et al., 2016). |
| Dataset Splits | Yes | In these experiments 20% of the training set is used as validation set and the validation error is used to choose the best BEL approach. |
| Hardware Specification | Yes | All experiments are conducted on a Linux machine with an Intel i9-9900X processor and an Nvidia RTX 2080 Ti GPU with 11GB of memory. |
| Software Dependencies | Yes | Our code is implemented using Python 3.8.3 with Pytorch 1.5.1 using CUDA 10.2. |
| Experiment Setup | Yes | Two runs with different random seeds are used for each combination of learning rate {0.001, 0.0001, 0.00001} and batch size {8, 16} for hyperparameter tuning. |
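The dataset split reported in the Dataset Splits row (20% of the training set held out as a validation set) can be reproduced with a standard PyTorch hold-out. The sketch below is illustrative only and assumes a generic `train_dataset` object; it is not taken from the authors' released code.

```python
# Minimal sketch of the 20% validation hold-out described in the paper.
# `train_dataset` is a placeholder for any torch.utils.data.Dataset.
import torch
from torch.utils.data import random_split

def make_train_val_split(train_dataset, val_fraction=0.2, seed=0):
    """Hold out `val_fraction` of the training set as a validation set."""
    n_total = len(train_dataset)
    n_val = int(n_total * val_fraction)
    n_train = n_total - n_val
    torch.manual_seed(seed)  # fixed seed so the split is reproducible
    return random_split(train_dataset, [n_train, n_val])
```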
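The hyperparameter search quoted in the Experiment Setup row amounts to a 3 x 2 x 2 grid (learning rate x batch size x random seed). The sketch below is a hypothetical reconstruction of that sweep: `train_and_validate` is an assumed stand-in for the authors' training loop, and averaging the validation error over the two seeds is an assumption, not something the paper states.

```python
# Hypothetical sketch of the reported hyperparameter grid:
# learning rates {0.001, 0.0001, 0.00001}, batch sizes {8, 16}, two seeds.
from itertools import product

LEARNING_RATES = [1e-3, 1e-4, 1e-5]
BATCH_SIZES = [8, 16]
SEEDS = [0, 1]  # two runs with different random seeds per configuration

def sweep(train_and_validate):
    # `train_and_validate` is assumed to run one training job and
    # return its validation error (lower is better).
    results = {}
    for lr, bs, seed in product(LEARNING_RATES, BATCH_SIZES, SEEDS):
        results[(lr, bs, seed)] = train_and_validate(lr=lr, batch_size=bs, seed=seed)

    # Pick the (lr, batch size) pair with the lowest validation error,
    # averaged over the two seeds (averaging is an assumption).
    configs = {(lr, bs) for lr, bs, _ in results}
    best = min(configs, key=lambda c: sum(results[(c[0], c[1], s)] for s in SEEDS) / len(SEEDS))
    return best, results
```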