Energy-Inspired Self-Supervised Pretraining for Vision Models
Authors: Ze Wang, Jiang Wang, Zicheng Liu, Qiang Qiu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We support our findings with extensive experiments, and show the proposed method delivers comparable and even better performance with remarkably fewer epochs of training compared to the state-of-the-art self-supervised vision model pretraining methods. |
| Researcher Affiliation | Collaboration | Purdue University; Microsoft Corporation. {zewang, qqiu}@purdue.edu; {jiangwang, zliu}@microsoft.com |
| Pseudocode | Yes | We further provide PyTorch-style pseudo code in Appendix Section A.3 to facilitate reproducing our results. |
| Open Source Code | No | The paper provides PyTorch-style pseudocode but does not include an explicit statement about releasing open-source code for the methodology or a link to a code repository. |
| Open Datasets | Yes | We demonstrate the effectiveness of the proposed method with extensive experiments on ImageNet-1K. |
| Dataset Splits | Yes | All ImageNet results are evaluated on the validation set with a single center crop of 224×224 for each image. |
| Hardware Specification | No | The paper mentions training on 'a single 8-GPU machine' and lists '8 GPUs' in efficiency comparisons, but does not specify the exact GPU model, CPU model, or other detailed hardware specifications. |
| Software Dependencies | No | The paper states 'All experiments are implemented using PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for PyTorch or other software dependencies such as Python or CUDA, which would be needed for a fully reproducible software environment description. |
| Experiment Setup | Yes | We present the training details for both self-supervised training and finetuning in Table A. All experiments are implemented using PyTorch (Paszke et al., 2019). We use the default API for automatic mixed-precision training. |