LDReg: Local Dimensionality Regularized Self-Supervised Learning

Authors: Hanxun Huang, Ricardo J. G. B. Campello, Sarah Monazam Erfani, Xingjun Ma, Michael E. Houle, James Bailey

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of LDReg in terms of representation quality, such as training a linear classifier on top of frozen representations. We use SimCLR (Chen et al., 2020a), SimCLR-Tuned (Garrido et al., 2023b), BYOL (Grill et al., 2020), and MAE (He et al., 2022) as baselines. We perform our evaluation with ResNet-50 (He et al., 2016) (for SimCLR, SimCLR-Tuned, and BYOL) and ViT-B (Dosovitskiy et al., 2021) (for SimCLR and MAE) on ImageNet (Deng et al., 2009).
Researcher Affiliation | Academia | (1) School of Computing and Information Systems, The University of Melbourne, Australia; (2) Department of Mathematics and Computer Science, University of Southern Denmark, Denmark; (3) School of Computer Science, Fudan University, China; (4) Department of Computer Science, New Jersey Institute of Technology, USA
Pseudocode | Yes | Appendix J (Pseudocode). Algorithm 1: Method of moments for LID estimation using PyTorch pseudocode. Algorithm 2: LDReg using PyTorch pseudocode.
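The report only names Algorithm 1 without reproducing it. For orientation, the standard method-of-moments LID estimator works from the k nearest-neighbor distances of a query point: under a local power-law model with exponent LID, the expected ratio of a neighbor distance r to the farthest neighbor distance w is E[r/w] = LID/(LID + 1), and solving for LID gives the estimate below. This is a minimal plain-Python sketch, not the paper's PyTorch pseudocode; the function name and interface are assumptions.

```python
def lid_mom(knn_distances):
    """Method-of-moments estimate of Local Intrinsic Dimensionality (LID).

    knn_distances: distances from a query point to its k nearest
    neighbors. Matching E[r / w] = LID / (LID + 1) to the empirical
    first moment and solving for LID yields mu / (w - mu).
    """
    r = [float(d) for d in knn_distances]
    w = max(r)                # k-th (farthest) neighbor distance
    mu = sum(r) / len(r)      # first moment of the neighbor distances
    return mu / (w - mu)      # MoM estimate of LID

# Sanity check: distances laid out as quantiles of a power law with
# exponent d should give an estimate close to d.
k = 128  # the paper's default neighborhood size
print(lid_mom([i / k for i in range(1, k + 1)]))           # ~1 (d = 1)
print(lid_mom([(i / k) ** 0.5 for i in range(1, k + 1)]))  # ~2 (d = 2)
```

In practice the neighbor distances would come from a k-NN query over the representation batch; the estimator itself only needs their mean and maximum.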
Open Source Code | Yes | We provide source code for reproducing the experiments in this paper, which can be accessed here: https://github.com/HanxunH/LDReg.
Open Datasets | Yes | We perform our evaluation with ResNet-50 (He et al., 2016) ... on ImageNet (Deng et al., 2009). ... Food-101 (Bossard et al., 2014), CIFAR (Krizhevsky & Hinton, 2009), Birdsnap (Berg et al., 2014), Stanford Cars (Krause et al., 2013), and DTD (Cimpoi et al., 2014). ... COCO dataset (Lin et al., 2014).
Dataset Splits | Yes | We perform our evaluation with ResNet-50 (He et al., 2016) (for SimCLR, SimCLR-Tuned, and BYOL) and ViT-B (Dosovitskiy et al., 2021) (for SimCLR and MAE) on ImageNet (Deng et al., 2009).
Hardware Specification | Yes | We conducted our experiments on Nvidia A100 GPUs with PyTorch implementation, with each experiment distributed across 4 GPUs.
Software Dependencies | No | The paper mentions 'PyTorch implementation', 'detectron2', and the 'VISSL library' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | Detailed hyperparameter settings can be found in Tables 5-11. We use 100 epochs of pretraining and a batch size of 2048 as defaults. For LDReg regularization, we use k = 128 as the default neighborhood size. For ResNet-50, we use β = 0.01 for SimCLR and SimCLR-Tuned, β = 0.005 for BYOL. For ViT-B, we use β = 0.001 for SimCLR, and β = 5 × 10⁻⁶ for MAE.
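The quoted defaults can be collected into a single configuration mapping for quick reference. This is only a sketch of how one might organize them; the key names are hypothetical and are not taken from the released code.

```python
# LDReg defaults as quoted from the paper: beta is the regularization
# coefficient (per backbone and per SSL method) and k the neighborhood
# size for LID estimation. Key names here are illustrative only.
LDREG_DEFAULTS = {
    "pretrain_epochs": 100,
    "batch_size": 2048,
    "lid_neighborhood_k": 128,
    "beta": {
        "resnet50": {"simclr": 0.01, "simclr_tuned": 0.01, "byol": 0.005},
        "vit_b": {"simclr": 0.001, "mae": 5e-6},
    },
}

# Example lookup: the regularization strength for MAE on ViT-B.
print(LDREG_DEFAULTS["beta"]["vit_b"]["mae"])
```

Note how much smaller β is for MAE than for the contrastive methods, which the per-method nesting makes easy to see at a glance.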