UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers
Authors: Dachuan Shi, Chaofan Tao, Ying Jin, Zhendong Yang, Chun Yuan, Jiaqi Wang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various tasks, datasets, and model architectures demonstrate the effectiveness and versatility of the proposed UPop framework. |
| Researcher Affiliation | Collaboration | 1Tsinghua University 2Shanghai AI Laboratory 3The University of Hong Kong 4The Chinese University of Hong Kong. Work was done when Dachuan Shi was an intern at Shanghai AI Laboratory. |
| Pseudocode | Yes | The proposed UPop framework combines Unified Pruning and Progressive Pruning as outlined in Algorithm 1 (see the sketch after this table). |
| Open Source Code | Yes | The code is available at https://github.com/sdc17/UPop. |
| Open Datasets | Yes | NLVR2 (Suhr et al., 2018), COCO (Lin et al., 2014), VQAv2 (Goyal et al., 2017), Flickr30K (Young et al., 2014), ImageNet (Deng et al., 2009), ADE20K (Zhou et al., 2017) |
| Dataset Splits | No | Table 3: Compression results on NLVR2. ... Dev Acc, Test Acc. The paper reports results on dev/test sets but does not specify the split ratios or example counts needed for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running experiments. |
| Software Dependencies | No | The paper mentions optimizers such as AdamW and SGD, the RandAugment data augmentation technique, and 'MMSegmentation (Contributors, 2020)' for one task, but it does not pin version numbers for software frameworks or libraries (e.g., PyTorch or MMSegmentation vX.Y). |
| Experiment Setup | Yes | Table 11: Training hyperparameters for compressing BLIP-based models; Table 12: Training hyperparameters for compressing CLIP, DeiT, and Segmenter. |
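
The Pseudocode row refers to Algorithm 1, which unifies pruning decisions across structures and applies them progressively during training. As a rough illustration of how such a loop can be structured, here is a minimal PyTorch sketch; the function name `progressive_prune_step`, the flat mask dictionary, and the linear pruning schedule are illustrative assumptions, not the authors' implementation. See the GitHub repository linked above for the real code.

```python
import torch

def progressive_prune_step(masks, step, total_steps, target_ratio):
    """One progressive-pruning update in the spirit of UPop's Algorithm 1.

    masks: dict mapping names to learnable mask tensors, one per prunable
    structure (attention heads, FFN neurons, embedding dimensions, ...).
    The pruned fraction grows from 0 to target_ratio over training instead
    of being applied in a single shot after the search phase.
    """
    # Linear schedule for the current pruning ratio (an assumption; the
    # paper defines its own schedule in Algorithm 1).
    ratio = target_ratio * (step + 1) / total_steps

    # "Unified": rank all mask entries jointly across modalities and
    # structure types, so the sparsity budget is allocated globally.
    flat = torch.cat([m.detach().abs().flatten() for m in masks.values()])
    k = int(ratio * flat.numel())
    if k == 0:
        return

    # Global magnitude threshold: the k smallest-magnitude entries are pruned.
    threshold = flat.kthvalue(k).values

    # Zero out mask entries at or below the threshold; surviving entries
    # keep training, so the model adapts to the shrinking architecture.
    with torch.no_grad():
        for m in masks.values():
            m.masked_fill_(m.abs() <= threshold, 0.0)
```

Calling this once per training step with, say, `masks = {"attn_heads": torch.nn.Parameter(torch.ones(12)), "ffn_neurons": torch.nn.Parameter(torch.ones(3072))}` ramps the pruned fraction up gradually; the global ranking is what makes the pruning "unified" rather than per-structure.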