SALD: Sign Agnostic Learning with Derivatives
Authors: Matan Atzmon, Yaron Lipman
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We tested SALD on the task of shape space learning from raw 3D data. We experimented with two different datasets: i) ShapeNet dataset (Chang et al., 2015), containing synthetic 3D meshes; and ii) D-Faust dataset (Bogo et al., 2017), containing raw 3D scans. |
| Researcher Affiliation | Academia | Matan Atzmon & Yaron Lipman, Weizmann Institute of Science, {matan.atzmon,yaron.lipman}@weizmann.ac.il |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | No explicit statement about open-source code release or a link to a repository was found in the paper. |
| Open Datasets | Yes | We experimented with two different datasets: i) ShapeNet dataset (Chang et al., 2015), containing synthetic 3D meshes; and ii) D-Faust dataset (Bogo et al., 2017), containing raw 3D scans. |
| Dataset Splits | Yes | We follow the evaluation protocol as in DeepSDF (Park et al., 2019): using the same train/test splits, we train and evaluate our method on 5 different categories. Note that comparison versus IGR is omitted as IGR requires consistently oriented normals for shape space learning, which is not available for ShapeNet, where many models have inconsistent triangle orientation. |
| Hardware Specification | Yes | It took around 1.5 days to complete 3000 epochs with 4 Nvidia V100 32GB GPUs. |
| Software Dependencies | Yes | Computing the unsigned distance to X is done using the CGAL library (The CGAL Project, 2020). (A stand-in sketch of this precomputation is given after the table.) |
| Experiment Setup | Yes | We trained our networks using the ADAM (Kingma & Ba, 2014) optimizer, setting the batch size to 64. On each training step the SALD loss is evaluated on a random draw of 922 points out of the precomputed 500K samples. For the VAE, we set a fixed learning rate of 0.0005, whereas for the AD we scheduled the learning rate to start from 0.0005 and decrease by a factor of 0.5 every 500 epochs. All models were trained for 3000 epochs. (These hyperparameters are collected in the training sketch after the table.) |
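
The Software Dependencies row notes that unsigned distances to the raw input X are precomputed with CGAL. The paper does not show this step, so the snippet below is only a minimal stand-in sketch: it approximates the unsigned distance with a SciPy KD-tree over densely sampled surface points rather than CGAL's exact point-to-triangle queries, and the names `approx_unsigned_distance` and `surface_samples` are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def approx_unsigned_distance(surface_samples: np.ndarray,
                             query_points: np.ndarray) -> np.ndarray:
    """Approximate unsigned distance from query_points to the raw input X,
    represented here by dense surface samples.

    The paper precomputes exact unsigned distances with CGAL; a KD-tree over
    surface samples is only a simple approximation of that quantity.
    """
    tree = cKDTree(surface_samples)          # build once per shape
    distances, _ = tree.query(query_points)  # nearest-sample distance
    return distances

if __name__ == "__main__":
    # Random placeholder geometry, not paper data.
    rng = np.random.default_rng(0)
    surface_samples = rng.normal(size=(500_000, 3))  # stand-in for X
    query_points = rng.normal(size=(1024, 3))
    d = approx_unsigned_distance(surface_samples, query_points)
    print(d.shape)  # (1024,)
```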
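
The Experiment Setup row fixes the optimizer, batch size, learning-rate schedule, and epoch count. The sketch below wires those quoted hyperparameters into a generic PyTorch auto-decoder loop; the `ImplicitDecoder` architecture, the number of training shapes (1000), and the simplified `sign_agnostic_loss` (which omits the derivative term of the full SALD loss) are placeholder assumptions, since the paper releases no code.

```python
import torch
from torch import nn, optim

# Placeholder decoder; the actual SALD architecture is described in the
# paper but not reproduced here.
class ImplicitDecoder(nn.Module):
    def __init__(self, latent_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, latent):
        return self.net(torch.cat([points, latent], dim=-1)).squeeze(-1)

def sign_agnostic_loss(pred, unsigned_dist):
    # Zeroth-order sign agnostic term | |f(x)| - h(x) |; the derivative
    # term of the full SALD loss is omitted in this sketch.
    return (pred.abs() - unsigned_dist).abs().mean()

latent_dim, batch_shapes, samples_per_step = 256, 64, 922
num_shapes = 1000                                 # assumed, not from the paper
model = ImplicitDecoder(latent_dim)
latents = nn.Embedding(num_shapes, latent_dim)    # one code per training shape

# Auto-decoder (AD) schedule from the quote: lr 5e-4, halved every 500 epochs.
optimizer = optim.Adam(list(model.parameters()) + list(latents.parameters()),
                       lr=5e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.5)

for epoch in range(3000):
    # One dummy step per epoch; random tensors stand in for a real batch of
    # 64 shapes and a random draw of 922 of the 500K precomputed samples.
    shape_idx = torch.randint(0, num_shapes, (batch_shapes,))
    points = torch.randn(batch_shapes, samples_per_step, 3)
    unsigned = torch.rand(batch_shapes, samples_per_step)
    z = latents(shape_idx).unsqueeze(1).expand(-1, samples_per_step, -1)

    optimizer.zero_grad()
    loss = sign_agnostic_loss(model(points, z), unsigned)
    loss.backward()
    optimizer.step()
    scheduler.step()
```

For the VAE variant quoted in the table, the learning rate would instead stay fixed at 0.0005, i.e. the `StepLR` scheduler would be dropped.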