Multi-task Learning with 3D-Aware Regularization
Authors: Wei-Hong Li, Steven McDonagh, Aleš Leonardis, Hakan Bilen
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that the proposed method is architecture agnostic and can be plugged into various prior multi-task backbones to improve their performance; as we evidence using standard benchmarks NYUv2 and PASCAL-Context. |
| Researcher Affiliation | Academia | University of Edinburgh; University of Birmingham |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | github.com/VICO-UoE/3DAwareMTL |
| Open Datasets | Yes | NYUv2 (Silberman et al., 2012): It contains 1449 RGB-D images...PASCAL-Context (Chen et al., 2014): PASCAL (Everingham et al., 2010) is a commonly used image benchmark for dense prediction tasks. |
| Dataset Splits | No | The paper mentions following 'identical training, evaluation protocols' from other works (Ye & Xu, 2022) but does not explicitly provide the specific percentages or counts for train/validation/test splits within the paper itself. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models. It only refers to 'standard GPUs' generally when discussing memory limitations. |
| Software Dependencies | No | The paper mentions using frameworks and models like MTI-Net, InvPT, HRNet-48, and ViT-L, but it does not specify the version numbers of any software dependencies (e.g., PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We append our 3D-aware regularizer to MTI-Net and InvPT using two convolutional layers, followed by Batch Norm, ReLU, and a dropout layer with a rate of 0.15...We train all models for 40K iterations with a batch size of 6 for experiments using InvPT...and a batch size of 8 for experiments using MTI-Net...We ramp up α_t from 0 to 4 linearly over 20K iterations and keep α_t = 4 for the remaining 20K iterations. |
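
The setup row above is concrete enough to sketch in code. Below is a minimal PyTorch sketch, assuming 3×3 convolutions, illustrative channel widths, and that BatchNorm, ReLU, and dropout follow each of the two convolutions (the paper's wording leaves the exact placement ambiguous); `RegularizerHead` and `alpha_schedule` are hypothetical names for illustration, not taken from the released code.

```python
import torch
import torch.nn as nn


class RegularizerHead(nn.Module):
    """Sketch of the 3D-aware regularizer head: two conv layers, each
    followed by BatchNorm, ReLU, and dropout with rate 0.15.
    Channel sizes and kernel size are illustrative assumptions."""

    def __init__(self, in_channels: int, hidden_channels: int = 256,
                 out_channels: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, hidden_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden_channels),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p=0.15),
            nn.Conv2d(hidden_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p=0.15),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)


def alpha_schedule(step: int, ramp_steps: int = 20_000,
                   alpha_max: float = 4.0) -> float:
    """Weight alpha_t for the regularization loss: ramps linearly from 0
    to 4 over the first 20K iterations, then stays at 4 for the
    remaining 20K of the 40K-iteration run."""
    return alpha_max * min(step / ramp_steps, 1.0)
```

A training loop following this setup would scale the regularization term by `alpha_schedule(step)` before adding it to the task losses, with batch size 6 (InvPT) or 8 (MTI-Net).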