Self-supervised learning of Split Invariant Equivariant representations

Authors: Quentin Garrido, Laurent Najman, Yann LeCun

ICML 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We demonstrate significant performance gains over existing methods on equivariance-related tasks from both a qualitative and quantitative point of view." |
| Researcher Affiliation | Collaboration | 1. Meta AI (FAIR) 2. Univ Gustave Eiffel, CNRS, LIGM, F-77454 Marne-la-Vallée, France 3. Courant Institute, New York University 4. Center for Data Science, New York University |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and data are available at https://github.com/garridoq/SIE. |
| Open Datasets | Yes | Taking inspiration from 3DIdent (Zimmermann et al., 2021; von Kügelgen et al., 2021), the authors use renderings of 3D objects from the subset of ShapeNetCore (Chang et al., 2015) originating from 3D Warehouse (Trimble Inc.), giving a total of 52,472 objects spread across 55 classes. |
| Dataset Splits | Yes | "We split the dataset into a training and validation part, containing respectively 80% and 20% of the objects." |
| Hardware Specification | Yes | All experiments are done using 4 NVIDIA V100 GPUs and take around 24 hours. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify its version or the versions of any other software libraries or frameworks used. |
| Experiment Setup | Yes | All methods are trained for 2000 epochs using a ResNet-18 encoder and an MLP projection head, with a batch size of 1024 and the Adam optimizer (learning rate 10^-3, β1 = 0.9, β2 = 0.999). |
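The reported optimizer settings (Adam with learning rate 10^-3, β1 = 0.9, β2 = 0.999) can be made concrete with a minimal, self-contained sketch of a single-parameter Adam update. This is not the authors' training code: the quadratic objective and the number of steps are illustrative assumptions, used only to show how the stated hyperparameters enter the update rule.

```python
# Hedged sketch of Adam with the paper's reported hyperparameters.
# The objective f(theta) = theta**2 is an assumption for illustration,
# not the authors' loss.
lr, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-8

def adam_step(theta, grad, m, v, t):
    # First and second moment estimates with bias correction.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    grad = 2 * theta  # gradient of f(theta) = theta**2
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Because the bias-corrected moments normalize the gradient magnitude, each early step moves the parameter by roughly the learning rate, which is why Adam's behavior is relatively insensitive to the gradient scale.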
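The 80%/20% object-level split described under "Dataset Splits" can be sketched as follows. The seed and shuffling scheme are assumptions (the paper does not state them); the key point is that splitting by object ID keeps all renderings of one object on the same side of the split.

```python
# Hedged sketch of an object-level 80/20 train/validation split over the
# 52,472 ShapeNetCore objects. Seed and shuffle order are assumptions.
import random

num_objects = 52472
object_ids = list(range(num_objects))
rng = random.Random(0)  # assumed seed for reproducibility
rng.shuffle(object_ids)

cut = int(0.8 * num_objects)
train_ids = object_ids[:cut]
val_ids = object_ids[cut:]
```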