Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Higher-Order Function Networks for Learning Composable 3D Object Representations
Authors: Eric Mitchell, Selim Engin, Volkan Isler, Daniel D Lee
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset. We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters. |
| Researcher Affiliation | Collaboration | Stanford University; Samsung AI Center New York; University of Minnesota |
| Pseudocode | No | The paper describes the model architecture and procedures in text and diagrams but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | See https://saic-ny.github.io/hof for code and additional information. |
| Open Datasets | Yes | We demonstrate the effectiveness of HOF on the task of 3D reconstruction from an RGB image using a subset of the ShapeNet dataset (Chang et al., 2015). The dataset can be downloaded from https://github.com/xcyan/nips16_PTN. In our second experiment, we use a broader dataset based on ShapeNet, with train and test splits taken from Tatarchenko et al. (2019). The dataset can be downloaded from https://github.com/lmb-freiburg/what3d. |
| Dataset Splits | Yes | The dataset contains 31773 ground truth point cloud models for training/validation and 7926 for testing. |
| Hardware Specification | Yes | All GPU experiments were performed on NVIDIA GTX 1080 Ti GPUs. The CPU running times were computed on one of 12 cores of an Intel 7920X processor. |
| Software Dependencies | No | The paper mentions software components like the "Adam optimizer" and "ReLU activation function" but does not specify their version numbers or the versions of broader frameworks/libraries (e.g., PyTorch, TensorFlow). |
| Experiment Setup | Yes | We use the Adam optimizer with learning rate 1e-5 and batch size 1, training for 4 epochs for all experiments (1 epoch ≈ 725k parameter updates). A minimal sketch of this configuration appears below the table. |
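The Experiment Setup row reduces to a handful of hyperparameters. The sketch below shows how they might be wired together in PyTorch, assuming a PyTorch implementation as in the authors' released code; the `model`, dataset, and loss here are hypothetical stand-ins, not the paper's HOF network or its point-cloud reconstruction objective.

```python
import torch

# Hyperparameters as reported in the paper's experiment setup.
LEARNING_RATE = 1e-5
BATCH_SIZE = 1
NUM_EPOCHS = 4

# Hypothetical stand-ins: the real HOF architecture and ShapeNet loaders
# are available at the authors' repo (https://saic-ny.github.io/hof).
model = torch.nn.Linear(3, 3)  # placeholder module, not the HOF network
train_dataset = torch.utils.data.TensorDataset(
    torch.randn(8, 3), torch.randn(8, 3)
)

loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=BATCH_SIZE, shuffle=True
)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
loss_fn = torch.nn.MSELoss()  # placeholder; the paper uses a point-cloud metric

for epoch in range(NUM_EPOCHS):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

With batch size 1, each parameter update consumes a single training example, which is consistent with the paper's figure of roughly 725k updates per epoch over the training set.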