Differentiable Spline Approximations
Authors: Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show applications of our approach in three stylized applications: image segmentation, 3D point cloud reconstruction, and finite element analysis for the solution of partial differential equations. (§3, Experiments) We have implemented the DSA framework (and its different applications provided below)... |
| Researcher Affiliation | Academia | Minsu Cho1 Aditya Balu2 Ameya Joshi1 Anjana Deva Prasad2 Biswajit Khara2 Soumik Sarkar2 Baskar Ganapathysubramanian2 Adarsh Krishnamurthy2 Chinmay Hegde1 New York University1, Iowa State University2 {mc8065, ameya.joshi, chinmay.h}@nyu.edu {baditya, anjana, bkhara, soumiks, baskarg, adarsh}@iastate.edu |
| Pseudocode | Yes | Algorithm 1 Backward pass for NURBS Jacobian (for one curve point, C(u)) |
| Open Source Code | Yes | We also open-source the code at https://github.com/idealab-isu/DSA. |
| Open Datasets | Yes | We train two models (MDSA, Mbaseline) on two different segmentation tasks: the Weizmann horse dataset [Borenstein and Ullman, 2004] and the Broad Bioimage Benchmark Collection dataset [Ljosa et al., 2012] (publicly available under Creative Commons License). The Spline Dataset, which is a subset of surfaces extracted from the ABC dataset (available for public use under this license). |
| Dataset Splits | No | The paper mentions splitting datasets into train (85%) and test (15%) for the Weizmann horse and Broad Bioimage Benchmark Collection datasets, but does not explicitly state a separate validation split or its details. |
| Hardware Specification | Yes | All the experiments were performed using a local cluster with 6 compute nodes and each node having 2 GPUs (Tesla V100s with 32GB GPU memory). |
| Software Dependencies | No | The paper mentions 'extending autograd functions in Pytorch' and 'CUDA for GPU support', but does not specify version numbers for PyTorch, CUDA, or other key software components. |
| Experiment Setup | Yes | We use the same architecture and hyper-parameters for both models (see Appendix for details.) All training was done using a single GPU. |
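The paper's backward pass for the NURBS Jacobian is described as extending autograd functions in PyTorch. A minimal sketch of that pattern is below; the class name, shapes, and the simplified (non-rational, B-spline-like) evaluation are illustrative assumptions, not taken from the DSA codebase. The key idea is that when a curve point is a basis-weighted sum of control points, the backward pass can return the analytic Jacobian (the transposed basis matrix) instead of relying on traced autograd.

```python
import torch

class SplineEval(torch.autograd.Function):
    """Hypothetical custom autograd Function illustrating an analytic
    backward pass for spline evaluation (names/shapes are assumptions)."""

    @staticmethod
    def forward(ctx, ctrl_pts, basis):
        # Curve points C(u) as basis-weighted sums of control points:
        # (m, n) basis matrix @ (n, d) control points -> (m, d) curve points.
        ctx.save_for_backward(basis)
        return basis @ ctrl_pts

    @staticmethod
    def backward(ctx, grad_out):
        (basis,) = ctx.saved_tensors
        # Analytic Jacobian of C(u) w.r.t. the control points is the basis
        # matrix itself, so the vector-Jacobian product is basis^T @ grad.
        return basis.t() @ grad_out, None

torch.manual_seed(0)
ctrl = torch.randn(4, 2, requires_grad=True)   # 4 control points in 2D
basis = torch.rand(10, 4)                      # 10 parameter samples on the curve
pts = SplineEval.apply(ctrl, basis)
pts.sum().backward()
```

For a full NURBS evaluation the forward pass would also divide by the weighted basis sum, and the backward pass would carry the corresponding quotient-rule terms, as the paper's Algorithm 1 does.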
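The reported 85%/15% train/test split (with no separate validation set) can be reproduced deterministically in PyTorch; the toy dataset here is a stand-in, not the Weizmann horse or Broad Bioimage data.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in dataset of 100 samples; replace with the actual image dataset.
full = TensorDataset(torch.arange(100).float())

# 85% train / 15% test, seeded for reproducibility.
n_train = int(0.85 * len(full))
train_set, test_set = random_split(
    full,
    [n_train, len(full) - n_train],
    generator=torch.Generator().manual_seed(0),
)
```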