Anytime Dense Prediction with Confidence Adaptivity
Authors: Zhuang Liu, Zhiqiu Xu, Hung-Ju Wang, Trevor Darrell, Evan Shelhamer
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate ADP-C on Cityscapes semantic segmentation and MPII human pose estimation: our method enables anytime inference without sacrificing accuracy while also reducing the total FLOPs of its base models by 44.4% and 59.1%. |
| Researcher Affiliation | Collaboration | Zhuang Liu¹, Zhiqiu Xu¹, Hung-Ju Wang¹, Trevor Darrell¹, Evan Shelhamer² (¹University of California, Berkeley; ²Adobe Research) |
| Pseudocode | No | The paper describes its methods in prose and equations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/liuzhuang13/anytime. |
| Open Datasets | Yes | The Cityscapes dataset (Cordts et al., 2016)... MPII Human Pose dataset (Andriluka et al., 2014)... |
| Dataset Splits | Yes | We train the models with the training set and report results on the validation set. ...on its validation set. |
| Hardware Specification | Yes | (specifically we measure computation time on a Linux machine with Intel Xeon Gold 5220R CPUs using 16 threads). |
| Software Dependencies | No | Our experiments are implemented using PyTorch (Paszke et al., 2019). No specific version number for PyTorch or other libraries is provided. |
| Experiment Setup | Yes | During training, multi-scale and flipping data augmentation is used, and the input cropping size is 512 × 1024. The model is trained for 484 epochs, with an initial learning rate of 0.01 and a polynomial schedule of power 0.9, a weight decay of 0.0005, a batch size of 12, optimized by SGD with 0.9 momentum. |
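
For reference, the sketch below reconstructs the reported Cityscapes training configuration in PyTorch. The model here is a stand-in placeholder rather than the paper's anytime architecture with early exits (that code lives in the linked repository), and the iteration count is an assumption derived from the stated batch size and Cityscapes' 2,975 training images; treat this as an illustration of the stated hyperparameters, not the authors' implementation.

```python
# Minimal sketch of the reported training setup, assuming a placeholder model.
# Hyperparameters are taken verbatim from the paper's experiment setup above.
import torch
import torch.nn as nn

EPOCHS = 484
BATCH_SIZE = 12
CROP_SIZE = (512, 1024)   # training crop size
BASE_LR = 0.01
LR_POWER = 0.9            # power of the polynomial ("poly") LR schedule
WEIGHT_DECAY = 0.0005
MOMENTUM = 0.9

# Placeholder segmentation head (19 Cityscapes classes); the paper's actual
# base network and anytime exits are NOT reproduced here.
model = nn.Conv2d(3, 19, kernel_size=1)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=BASE_LR,
    momentum=MOMENTUM,
    weight_decay=WEIGHT_DECAY,
)

# Poly schedule: lr = base_lr * (1 - iter / max_iter) ** power.
# max_iters is an assumption based on Cityscapes' 2,975 training images.
steps_per_epoch = 2975 // BATCH_SIZE
max_iters = EPOCHS * steps_per_epoch
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1.0 - it / max_iters) ** LR_POWER
)
```

In a training loop, `scheduler.step()` would be called once per iteration so the learning rate decays from 0.01 toward zero over the full 484-epoch run, matching the per-iteration poly decay commonly used for Cityscapes segmentation.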