Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection
Authors: Yunqian Fan, Xiuying Wei, Ruihao Gong, Yuqing Ma, Xiangguo Zhang, Qi Zhang, Xianglong Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been done on a wide variety of models including keypoint-, anchor-, curve-, and segmentation-based ones. Our method produces quantized models in minutes on a single GPU and can achieve 6.4% F1 Score improvement on the CULane dataset. |
| Researcher Affiliation | Collaboration | Yunqian Fan1,2, Xiuying Wei2, Ruihao Gong2,4, Yuqing Ma3,4, Xiangguo Zhang2, Qi Zhang2, Xianglong Liu4* 1School of Information Science and Technology, ShanghaiTech University 2SenseTime Research 3Institute of Artificial Intelligence, Beihang University, Beijing, China 4State Key Laboratory of Complex & Critical Software Environment, Beihang University, Beijing, China |
| Pseudocode | Yes | Algorithm 1: Sensitivity Aware Selection |
| Open Source Code | Yes | Code and supplementary statement can be found at https://github.com/PannenetsF/SelectiveFocus. |
| Open Datasets | Yes | We conduct comprehensive experiments on the CULane dataset and adopt its official evaluation method. CULane contains 88,880 training images and 34,680 test images from multiple scenarios, and the evaluation method provides precision, recall, and F1 score for each scenario. |
| Dataset Splits | No | CULane contains 88,880 training images and 34,680 test images from multiple scenarios. The paper does not explicitly mention a separate validation split or its size. |
| Hardware Specification | No | The paper mentions "on a single GPU" but does not specify the exact model or other hardware details like CPU, memory, or specific computing environment. |
| Software Dependencies | No | The paper states "We implement our method based on the PyTorch framework." but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Our method is calibrated with 512 unlabeled images on three kinds of quantization bits: W8A8, W8A4, and W4A4. During the optimization, we choose the Adam optimizer with a learning rate set to 0.000025 and adjust weights for 5000 iterations. The hyperparameter k for Top-k in Sensitivity Aware Selection is kept at 1 for models with two heads and at 2 for others, based on our ablation studies. |
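To make the WxAy notation in the setup row concrete: W8A8, W8A4, and W4A4 denote the bit-widths used for weights (W) and activations (A). Below is a minimal, hedged sketch of symmetric uniform quantization, the basic operation underlying such schemes; the helper `uniform_quantize` is illustrative only and is not the paper's sensitivity-aware method.

```python
import numpy as np

def uniform_quantize(x, n_bits):
    """Symmetric uniform quantizer (illustrative, not the paper's method).

    Maps x onto a signed integer grid with 2**n_bits levels, then
    de-quantizes back to floating point so the rounding error is visible.
    """
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8 bits, 7 for 4 bits
    scale = np.abs(x).max() / qmax          # per-tensor scale from the max value
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

# A toy "weight tensor": 8-bit quantization (as in W8A8) loses far less
# precision than 4-bit quantization (as in W4A4).
w = np.linspace(-1.0, 1.0, 9)
err_8bit = np.abs(uniform_quantize(w, 8) - w).max()
err_4bit = np.abs(uniform_quantize(w, 4) - w).max()
```

This also motivates why the paper optimizes weights for 5000 iterations at W4A4: with only 16 levels, naive rounding error is large enough to noticeably hurt lane-detection accuracy, so the quantized weights are adjusted on the 512 calibration images.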