SlimSAM: 0.1% Data Makes Segment Anything Slim
Authors: Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive assessments across performance metrics, efficiency, and training data requirements reveal that SlimSAM markedly improves compression performance, achieving superior lightness and efficiency with far less training data. |
| Researcher Affiliation | Academia | Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang (National University of Singapore); zigeng99@u.nus.edu, xinchao@nus.edu.sg |
| Pseudocode | No | The paper describes the method conceptually and with diagrams (Figure 1, Figure 2) but does not include formal pseudocode blocks or algorithms. |
| Open Source Code | Yes | Code is available at https://github.com/czg1225/SlimSAM |
| Open Datasets | Yes | Our SlimSAM has been implemented in PyTorch [41] and trained on a single Nvidia Titan RTX GPU using only 0.1% (10,000 images) of the SA-1B [25] dataset. |
| Dataset Splits | No | The paper mentions 'validation performance' for early stopping, but it does not provide specific details on how the 10k training images are split for validation, or whether a separate validation set is drawn from SA-1B. |
| Hardware Specification | Yes | Our SlimSAM has been implemented in PyTorch [41] and trained on a single Nvidia Titan RTX GPU |
| Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework, but it does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The model's parameters were optimized with the Adam [24] algorithm using a batch size of 4. Training settings for both bottleneck aligning and embedding aligning are identical. The pruned models undergo distillation with an initial learning rate of 1e-4, which is halved if validation performance does not improve for 4 consecutive epochs. The total training duration is 40 epochs for SlimSAM-50 (with a 50% pruning ratio) and 80 epochs for SlimSAM-77 (with a 77% pruning ratio). A hedged sketch of this schedule follows the table. |
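The Experiment Setup row pins down concrete hyperparameters, so the reported schedule maps onto standard PyTorch primitives. The following is a minimal sketch, not the authors' implementation (their code is at https://github.com/czg1225/SlimSAM): `pruned_model`, `teacher_model`, `train_loader`, and `validate` are hypothetical stand-ins, and the MSE feature-alignment loss is an assumption about the distillation objective.

```python
import torch
import torch.nn.functional as F

def train(pruned_model, teacher_model, train_loader, validate, num_epochs=40):
    """Hedged sketch of the reported schedule: Adam, batch size 4,
    initial LR 1e-4, halved after 4 epochs without validation improvement.
    num_epochs: 40 for SlimSAM-50, 80 for SlimSAM-77."""
    optimizer = torch.optim.Adam(pruned_model.parameters(), lr=1e-4)
    # ReduceLROnPlateau approximates the "halve the LR if validation
    # performance does not improve for 4 consecutive epochs" rule.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.5, patience=4
    )
    for epoch in range(num_epochs):
        pruned_model.train()
        for images in train_loader:  # batches of 4 images
            with torch.no_grad():
                teacher_emb = teacher_model(images)  # frozen teacher features
            student_emb = pruned_model(images)
            # Distillation: align the pruned student's embeddings to the teacher's.
            loss = F.mse_loss(student_emb, teacher_emb)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step(validate(pruned_model))  # plateau check on validation score
```

The same loop would cover both the bottleneck-aligning and embedding-aligning stages, since the paper states their training settings are identical; only the layer whose features are matched would change.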