Decodable and Sample Invariant Continuous Object Encoder
Authors: Dehao Yuan, Furong Huang, Cornelia Fermuller, Yiannis Aloimonos
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply HDFE to function-to-function mapping, where vanilla HDFE achieves competitive performance with the state-of-the-art algorithm. We apply HDFE to point cloud surface normal estimation, where a simple replacement from PointNet to HDFE leads to 12% and 15% error reductions in two benchmarks. In addition, by integrating HDFE into the PointNet-based SOTA network, we improve the SOTA baseline by 2.5% and 1.7% on the same benchmarks. |
| Researcher Affiliation | Academia | Dehao Yuan, Furong Huang, Cornelia Fermüller & Yiannis Aloimonos Department of Computer Science University of Maryland College Park, MD 20740, USA {dhyuan, furongh, fermulcm, jyaloimo}@umd.edu |
| Pseudocode | Yes | Algorithm 1 Iterative Refinement, Algorithm 2 Gradient Descent for Decoding Function Encoding, Algorithm 3 One-Shot Refinement (a decoding sketch follows the table) |
| Open Source Code | No | The paper does not provide a direct link to a code repository or explicitly state that source code for their methodology is released or available in supplementary materials. |
| Open Datasets | Yes | We use 1d Burgers Equation (Su & Gardner, 1969) and 2d Darcy Flow (Tek, 1957) for evaluating our method. [...] We use the root mean squared angle error (RMSE) as the metric, evaluated on the PCPNet (Guerrero et al., 2018) and Famous Shape (Li et al., 2023) datasets. |
| Dataset Splits | No | The paper describes the training and testing data, including specific training sample counts in some sections, but it does not specify a separate validation split, its percentages, or its sample counts. |
| Hardware Specification | No | The paper mentions 'NVIDIA Titan-X GPU' in Appendix I.3 in the context of encoding time, but it does not provide specific hardware details (like model numbers or general setup) for running the main experiments described in the paper. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and names architectures like 'Deep Complex Network', 'Complex Linear', and 'Complex ReLU', but it does not specify version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | The network is trained with the Adam optimizer with a learning rate of 0.001 for 20,000 iterations. The α value in equation 6 is 15, 25, 42, 45 for N = 4000, 8000, 16000, 24000, respectively, and the β value is 2.5. (A minimal training-loop sketch follows the table.) |
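The pseudocode row above lists Algorithm 2, Gradient Descent for Decoding Function Encoding. Since the paper releases no code, the following is only a minimal sketch of that decoding idea, assuming a fractional-power-style encoder that binds input and output encodings; every name, shape, and the encoder itself are illustrative assumptions, not the authors' implementation.

```python
import torch

# Sketch of gradient-descent decoding in the spirit of Algorithm 2.
# The encoder is an ASSUMED fractional-power-style encoding; the paper
# releases no code, so all names and shapes here are illustrative.

dim = 4096
torch.manual_seed(0)
theta_x = torch.randn(dim)  # fixed random frequencies for the input space
theta_y = torch.randn(dim)  # fixed random frequencies for the output space

def encode_pair(x, y):
    # Bind input and output encodings via an elementwise complex product:
    # exp(i*theta_x*x) * exp(i*theta_y*y) = exp(i*(theta_x*x + theta_y*y)).
    return torch.exp(1j * (theta_x * x + theta_y * y))

def encode_function(xs, ys):
    # Superpose the bound sample pairs and normalize to unit length.
    F = encode_pair(xs[:, None], ys[:, None]).sum(dim=0)
    return F / torch.linalg.vector_norm(F)

def decode(F, x_query, steps=300, lr=0.05):
    # Recover y ~ f(x_query) by ascending the similarity between the
    # encoding of (x_query, y) and the function encoding F.
    y = torch.zeros((), requires_grad=True)
    opt = torch.optim.Adam([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sim = torch.real(torch.vdot(encode_pair(x_query, y), F))
        (-sim).backward()  # gradient ascent on the similarity
        opt.step()
    return y.detach()

# Toy usage: encode samples of f(x) = sin(x), then decode at a query point.
xs = torch.linspace(0, 3, 200)
F = encode_function(xs, torch.sin(xs))
print(decode(F, torch.tensor(1.0)))  # typically lands near sin(1.0) ~ 0.84
```

Note the design choice this illustrates: because the encoding is a normalized superposition of bound sample pairs, the similarity landscape over y peaks near the function value, which is what makes gradient-based decoding viable.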
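For the experiment-setup row, the sketch below shows only how the reported optimization settings (Adam, learning rate 0.001, 20,000 iterations) would plug into a training loop. The model, loss, and synthetic data are placeholders; the paper specifies none of these components here.

```python
import torch

# Sketch of the reported optimization settings: Adam, lr = 0.001,
# 20,000 iterations. Model, loss, and data are PLACEHOLDERS; the paper
# does not release code or specify these components.

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 512),  # e.g., from an HDFE embedding (assumed size)
    torch.nn.ReLU(),
    torch.nn.Linear(512, 3),     # e.g., a surface normal (assumed target)
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = torch.nn.MSELoss()

for it in range(20_000):
    inputs = torch.randn(32, 4096)   # placeholder batch
    targets = torch.randn(32, 3)     # placeholder targets
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```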