The Surprising Effectiveness of Equivariant Models in Domains with Latent Symmetry

Authors: Dian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S. Wong, Robin Walters, Robert Platt

ICLR 2023

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "We show empirically that this is indeed the case and that an inaccurate equivariant model is often better than a completely unstructured model. For example, suppose we want to model a function with the object-wise rotation symmetry expressed in Figure 1a and b. Notice that whereas it is difficult to encode the object-wise symmetry, it is easy to encode an image-wise symmetry because it involves simple image rotations. Although the image-wise symmetry model is imprecise in this situation, our experiments indicate that this imprecise model is still a much better choice than a completely unstructured model." (See the equivariant-model sketch after the table.)
Researcher Affiliation | Academia | "Dian Wang, Jung Yeon Park, Neel Sortur, Lawson L.S. Wong, Robin Walters, Robert Platt. Northeastern University. {wang.dian,park.jungy,sortur.n,l.wong,r.walters,r.platt}@northeastern.edu"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Supplementary video and code are available at https://pointw.github.io/extrinsic_page/."
Open Datasets | Yes | "We use the environments provided by the BulletArm benchmark (Wang et al., 2022b) implemented in the PyBullet simulator (Coumans & Bai, 2016)."
Dataset Splits | Yes | "In all training, we perform a three-way data split with N training data, 200 holdout validation data, and 200 holdout test data." (See the data-split sketch after the table.)
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies | Yes | "We implement the training in PyTorch (Paszke et al., 2017) using a cross-entropy loss. ... We use the Adam optimizer (Kingma & Ba, 2014)... We use the environments provided by the BulletArm benchmark (Wang et al., 2022b) implemented in the PyBullet simulator (Coumans & Bai, 2016)."
Experiment Setup | Yes | "The pixel size of the image is 152x152 (and will be cropped to 128x128 during training). We implement the training in PyTorch (Paszke et al., 2017) using a cross-entropy loss. ... We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^-4. The batch size is 64." (See the training-step sketch after the table.)
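
The Research Type row quotes the paper's central claim: encoding an easy image-wise rotation symmetry is better than using no structure at all, even when the true symmetry is object-wise. As a hedged illustration only (not the authors' architecture; the library choice, group order, and all layer sizes are assumptions), the sketch below builds a small C8-equivariant network with the e2cnn library:

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# Image-wise symmetry: the group C8 of 45-degree planar rotations
# acting on the whole image, the "easy to encode" symmetry in the quote.
r2_act = gspaces.Rot2dOnR2(N=8)

# One trivial input channel (e.g. a depth image) and 16 hidden fields in the
# regular representation. These sizes are placeholders, not the paper's model.
in_type = enn.FieldType(r2_act, [r2_act.trivial_repr])
hid_type = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])

model = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=5, padding=2),
    enn.ReLU(hid_type),
    enn.GroupPooling(hid_type),  # pool over C8: features become invariant
                                 # to the fiber group action
)

# A 128x128 single-channel image wrapped as a geometric tensor.
x = enn.GeometricTensor(torch.randn(1, 1, 128, 128), in_type)
out = model(x)
```

Rotating the input by a multiple of 45 degrees rotates the output feature map correspondingly, which is exactly the image-wise equivariance constraint the quoted passage describes.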
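The Dataset Splits row fixes a three-way split with N training samples and 200-sample holdout validation and test sets. A minimal sketch under stated assumptions (N, the class count, and the stand-in data are placeholders; the paper varies N across experiments):

```python
import torch
from torch.utils.data import TensorDataset, random_split

N = 1000         # placeholder training-set size; the paper varies N per experiment
num_classes = 8  # placeholder

# Hypothetical stand-in for the paper's offline dataset of observations/labels.
images = torch.randn(N + 400, 1, 152, 152)
labels = torch.randint(num_classes, (N + 400,))
dataset = TensorDataset(images, labels)

# N training samples, 200 holdout validation, 200 holdout test, as quoted.
train_set, val_set, test_set = random_split(dataset, [N, 200, 200])
```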
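The Experiment Setup row pins down the remaining training details: 152x152 inputs cropped to 128x128 during training, a cross-entropy loss, Adam with learning rate 10^-4, and batch size 64. A minimal training-step sketch consistent with those numbers, reusing train_set from the split sketch above (the network here is a placeholder, not the paper's equivariant model):

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import transforms

lr, batch_size = 1e-4, 64          # quoted from the paper
crop = transforms.RandomCrop(128)  # 152x152 inputs cropped to 128x128

# Placeholder classifier standing in for the paper's (equivariant) network.
model = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 8),
)
optimizer = optim.Adam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()

loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
for images, labels in loader:
    logits = model(crop(images))   # crop applied during training, as quoted
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```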