AIO-P: Expanding Neural Performance Predictors beyond Image Classification
Authors: Keith G. Mills, Di Niu, Mohammad Salameh, Weichen Qiu, Fred X. Han, Puyuan Liu, Jialin Zhang, Wei Lu, Shangling Jui
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that AIO-P can achieve Mean Absolute Error (MAE) and Spearman's Rank Correlation (SRCC) below 1% and above 0.5, respectively, on a breadth of target downstream CV tasks with or without fine-tuning, outperforming a number of baselines. |
| Researcher Affiliation | Collaboration | Keith G. Mills1,2*, Di Niu1, Mohammad Salameh2, Weichen Qiu1, Fred X. Han2, Puyuan Liu2, Jialin Zhang3, Wei Lu2, Shangling Jui3; 1Department of Electrical and Computer Engineering, University of Alberta; 2Huawei Technologies, Edmonton, Alberta, Canada; 3Huawei Kirin Solution, Shanghai, China; {kgmills, dniu, wqiu1}@ualberta.ca; {mohammad.salameh, fred.xuefei.han1, puyuan.liu, jui.shangling}@huawei.com; {zhangjialin10, robin.luwei}@hisilicon.com |
| Pseudocode | No | The paper does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format. |
| Open Source Code | Yes | We open-source our data, code, and predictor design to advance research in this field: https://github.com/Ascend-Research/AIO-P |
| Open Datasets | Yes | We consider the 2017 version of MS Common Objects in Context (COCO) (Lin et al. 2014) as our dataset for these tasks, as it contains 118k and 5k training and validation images, respectively. ... We measure 2D HPE performance using Percentage of Correct Keypoints (PCK) and consider two HPE datasets: MPII (Andriluka et al. 2014) and Leeds Sports Pose-Extended (LSP) (Johnson and Everingham 2011), which contain 22k and 11k images, respectively. |
| Dataset Splits | Yes | We consider the 2017 version of MS Common Objects in Context (COCO) (Lin et al. 2014) as our dataset for these tasks, as it contains 118k and 5k training and validation images, respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or cloud computing instance specifications. |
| Software Dependencies | No | The paper mentions software like 'TensorFlow' and 'Detectron2' but does not specify version numbers for these or any other key software components, which is required for reproducibility. |
| Experiment Setup | No | The paper discusses various techniques and general training procedures, such as introducing K-Adapters and using scaling techniques. However, it explicitly states: 'We provide procedural details on our round robin strategy, shared head hyperparameters and resource cost breakdown in the supplementary materials.' This indicates that specific hyperparameters and detailed training settings are not present in the main text of the paper. |
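
The Research Type row quotes MAE and SRCC thresholds (below 1% and above 0.5). As a quick reference, the sketch below shows how these two metrics are conventionally computed when evaluating a performance predictor. It is illustrative only: the function and array names are hypothetical, and the values are made up rather than taken from the paper's data.

```python
# Minimal sketch (not from the paper): MAE and Spearman's Rank Correlation
# for predicted vs. measured architecture performance. Inputs would be
# ground-truth and predicted task metrics (e.g., AP or PCK, in percentage
# points) over a held-out set of architectures.
import numpy as np
from scipy.stats import spearmanr

def evaluate_predictor(y_true: np.ndarray, y_pred: np.ndarray):
    """Return (MAE, SRCC) for a batch of performance predictions."""
    mae = np.mean(np.abs(y_true - y_pred))   # Mean Absolute Error
    srcc, _ = spearmanr(y_true, y_pred)      # Spearman's Rank Correlation
    return mae, srcc

# Toy usage with invented numbers: an MAE below 1 percentage point and an
# SRCC above 0.5 would correspond to the thresholds quoted in the table.
y_true = np.array([34.2, 36.8, 31.5, 38.1, 35.0])
y_pred = np.array([34.9, 36.1, 32.2, 37.5, 35.6])
print(evaluate_predictor(y_true, y_pred))
```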