Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Manifold Patch-Based Representations of Man-Made Shapes

Authors: Dmitriy Smirnov, Mikhail Bessmeltsev, Justin Solomon

ICLR 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate its benefits by applying it to the task of sketch-based modeling. Given a raster image, our system infers a set of parametric surfaces that realize the input in 3D... We develop a testbed for sketch-based modeling, demonstrate shape interpolation, and provide comparison to related work. From Section 4 (Experimental Results): We train each network for 24 hours on a Tesla V100 GPU... At each iteration, we sample 7,000 points from the predicted and target shapes. We perform an ablation study of our method on an airplane model, demonstrating the effect of training without each term in our loss function...
Researcher Affiliation | Academia | Dmitriy Smirnov, MIT, EMAIL; Mikhail Bessmeltsev, Université de Montréal, EMAIL; Justin Solomon, MIT, EMAIL
Pseudocode | No | The paper describes algorithms in text and equations but does not present them in pseudocode blocks or explicitly labeled algorithm sections.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is released or provide a link to a code repository.
Open Datasets | Yes | We train on the airplane, bathtub, guitar, bottle, car, mug, gun, and knife categories of ShapeNet Core (v2) (Chang et al., 2015).
Dataset Splits | Yes | We pick a random 10%-90% test-train split for each category and evaluate in Fig. 5 as well as A.5. (A hedged split sketch appears after this table.)
Hardware Specification | Yes | We train each network for 24 hours on a Tesla V100 GPU, using Adam (Kingma & Ba, 2014) and batch size 8 with learning rate 0.0001.
Software Dependencies | No | The paper mentions the Adam optimizer and a ResNet-18 architecture but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, CUDA, or other library versions).
Experiment Setup | Yes | We train each network for 24 hours on a Tesla V100 GPU, using Adam (Kingma & Ba, 2014) and batch size 8 with learning rate 0.0001. At each iteration, we sample 7,000 points from the predicted and target shapes. For models scaled to fit in a unit sphere, we use α_normal = 0.008, α_flat = 2, and α_coll = 0.00001 for all experiments, and α_template = 0.0001 and α_sym = 1 for experiments that use those regularizers. (A hedged training-setup sketch follows this table.)
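
As referenced in the Dataset Splits row, here is a minimal sketch of how a random 10%-90% test-train split per ShapeNet category could be reproduced. All names (CATEGORIES, split_category, shape_ids) are hypothetical; the paper does not publish its splitting code.

```python
# Hypothetical per-category 10%-90% test-train split; not the authors' code.
import random

# ShapeNet Core (v2) categories named in the paper
CATEGORIES = ["airplane", "bathtub", "guitar", "bottle",
              "car", "mug", "gun", "knife"]

def split_category(shape_ids, test_fraction=0.1, seed=0):
    """Randomly hold out `test_fraction` of one category's shapes for testing."""
    rng = random.Random(seed)
    ids = list(shape_ids)
    rng.shuffle(ids)
    n_test = int(round(len(ids) * test_fraction))
    return ids[n_test:], ids[:n_test]  # (train_ids, test_ids)

# Usage with dummy IDs:
train_ids, test_ids = split_category([f"airplane_{i:04d}" for i in range(100)])
```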
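To make the Experiment Setup row concrete, here is a hedged PyTorch sketch of the reported configuration: Adam with learning rate 0.0001, batch size 8, and 7,000 points sampled per iteration, with the quoted α weights collected in one place. The network, data, and Chamfer loss below are stand-ins chosen for illustration; the paper releases no code, and its actual loss combines several regularizers that are only listed, not implemented, here.

```python
# Hedged sketch of the reported training setup; all components are stand-ins.
import torch
import torch.nn as nn

# Loss-term weights quoted in the paper (models scaled to a unit sphere).
# The regularizer implementations themselves are omitted from this sketch.
ALPHAS = {"normal": 0.008, "flat": 2.0, "coll": 1e-5,
          "template": 1e-4, "sym": 1.0}

class PatchNetwork(nn.Module):
    """Illustrative stand-in for the paper's encoder (a ResNet-18 is mentioned);
    maps a raster sketch to a small set of 3D points."""
    def __init__(self, n_points=16):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, n_points * 3))
        self.n_points = n_points

    def forward(self, x):
        return self.net(x).view(x.shape[0], self.n_points, 3)

def chamfer(a, b):
    """Symmetric Chamfer distance between point batches a: (B,N,3), b: (B,M,3)."""
    d = torch.cdist(a, b)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

model = PatchNetwork()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported optimizer and LR

for step in range(10):                      # the paper trains for 24 h on a V100
    sketches = torch.rand(8, 1, 64, 64)     # batch size 8; dummy raster inputs
    target_pts = torch.rand(8, 7000, 3)     # 7,000 points sampled per iteration
    pred_pts = model(sketches)              # stand-in for sampling the patches
    loss = chamfer(pred_pts, target_pts)    # plus ALPHAS-weighted regularizers
    opt.zero_grad()
    loss.backward()
    opt.step()
```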