Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization

Authors: Biao Zhang, Peter Wonka

ICLR 2022

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. We validate our model using the problem of 3D shape reconstruction from a single image and improve upon the state of the art. We compare our method with a list of state-of-the-art methods quantitatively in Table 2. We improve the most important metric, F-score, from 51.75% to 59.66% compared to the previous state of the art, OccNet (Mescheder et al., 2019). We also improve upon OccNet in the two other metrics. The experiments are evaluated on a single-image 3D reconstruction benchmark and improve over the state of the art.
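For context on the metric behind this comparison, here is a minimal sketch of the point-cloud F-score commonly used in single-image 3D reconstruction benchmarks. The distance threshold tau and the nearest-neighbor implementation are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a point-cloud F-score: precision and recall of
# nearest-neighbor distances under a threshold tau (tau is assumed).
import numpy as np
from scipy.spatial import cKDTree

def fscore(pred_pts, gt_pts, tau=0.01):
    """F-score between two (N, 3) point sets at distance threshold tau."""
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # drives precision
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # drives recall
    precision = float(np.mean(d_pred_to_gt < tau))
    recall = float(np.mean(d_gt_to_pred < tau))
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```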
Researcher Affiliation: Academia. Biao Zhang & Peter Wonka, KAUST, {biao.zhang, peter.wonka}@kaust.edu.sa.
Pseudocode: Yes. We describe the detailed training process in Algorithm 1. This algorithm can be viewed along with Fig. 2. We also show, in Algorithm 2, the full process of generating a triangular mesh from an input image.
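Algorithms 1 and 2 themselves are not reproduced in this report. As a rough, hypothetical illustration of the bi-level optimization named in the paper's title, the sketch below fits a closed-form ridge regression (the inner problem) on network-generated training points and backpropagates an outer reconstruction loss through the solve. All module names, shapes, and the loss are assumptions, not the authors' code.

```python
# Hypothetical bi-level training step: a generator network proposes
# labeled training points, an inner ridge regression is solved in
# closed form, and the outer loss backpropagates through the solve.
import torch
import torch.nn.functional as F

def ridge_fit(X, y, lam=0.005):
    """Closed-form ridge regression w = (X^T X + lam I)^{-1} X^T y.
    Differentiable in X and y, so gradients flow to the generator."""
    d = X.shape[1]
    A = X.T @ X + lam * torch.eye(d, device=X.device)
    return torch.linalg.solve(A, X.T @ y)

def outer_loss(generator, image, query_pts, query_occ, lam=0.005):
    X_gen, y_gen = generator(image)        # generated training data (assumed API)
    w = ridge_fit(X_gen, y_gen, lam)       # inner optimization
    pred = query_pts @ w                   # fitted classifier at query points
    return F.mse_loss(pred, query_occ)     # outer reconstruction loss (assumed)
```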
Open Source Code: No. The paper does not include an explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets: Yes. We perform single-image 3D reconstruction on the ShapeNet (Chang et al., 2015) dataset. The rendered RGB images and data split are taken from (Choy et al., 2016).
Dataset Splits: Yes. The rendered RGB images and data split are taken from (Choy et al., 2016). At training time, 1024 points are drawn from the bounding box and 1024 near-surface points are sampled. This is the sampling strategy proposed by CvxNet.
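A minimal sketch of that sampling strategy, assuming a watertight mesh handled with the trimesh library; the near-surface noise scale sigma is an illustrative assumption, not a value quoted in this report.

```python
# Sketch of CvxNet-style sampling: 1024 points uniform in the bounding
# box plus 1024 near-surface points, labeled by occupancy.
import numpy as np
import trimesh

def sample_training_points(mesh, n_uniform=1024, n_surface=1024, sigma=0.01):
    lo, hi = mesh.bounds                               # axis-aligned bounding box
    uniform = np.random.uniform(lo, hi, size=(n_uniform, 3))
    surface, _ = trimesh.sample.sample_surface(mesh, n_surface)
    near_surface = surface + sigma * np.random.randn(n_surface, 3)
    pts = np.concatenate([uniform, near_surface], axis=0)
    occ = mesh.contains(pts).astype(np.float32)        # occupancy labels
    return pts, occ
```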
Hardware Specification: No. The paper does not explicitly describe the specific hardware (e.g., GPU models or CPU types) used to run the experiments; it only mentions general computing operations without hardware details.
Software Dependencies: No. The paper mentions software components such as EfficientNet-B1 and Adam but does not provide specific version numbers for these or other libraries/frameworks, which are necessary for reproducible software dependencies.
Experiment Setup: Yes. We use λ = 0.005 for ridge regression and C = 1 for SVM in all experiments. The training batch size is 32. We use Adam (Kingma & Ba, 2014) with learning rate 2e-4 as our optimizer. The learning rate is decayed by a factor of 0.1 after 500 epochs.
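These reported hyperparameters map directly onto a standard PyTorch configuration. The sketch below is illustrative, with a placeholder model; it is not the authors' code.

```python
# Reported setup: Adam, lr 2e-4, batch size 32, lr decayed by 0.1
# after 500 epochs. The model is a placeholder.
import torch

model = torch.nn.Linear(128, 1)  # placeholder for the actual network
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[500], gamma=0.1)

for epoch in range(1000):
    # ... iterate minibatches of size 32, compute loss, loss.backward() ...
    optimizer.step()   # per batch in a real training loop
    scheduler.step()   # per epoch: applies the 0.1 decay at epoch 500
```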