Accurate Interpolation for Scattered Data through Hierarchical Residual Refinement
Authors: Shizhe Ding, Boyang Xia, Dongbo Bu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that HINT outperforms existing interpolation algorithms significantly in terms of interpolation accuracy across a wide variety of datasets, which underscores its potential for practical scenarios. We evaluate the interpolation performance of HINT on four representative datasets and compare it with existing state-of-the-art approaches. |
| Researcher Affiliation | Academia | Shizhe Ding (1,2), Boyang Xia (1,2), Dongbo Bu (1,2,3); (1) Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Central China Institute for Artificial Intelligence Technologies, Zhengzhou, China |
| Pseudocode | No | The paper describes its method verbally and with architectural diagrams (Figure 1, Figure 2), but it does not include any structured pseudocode or algorithm blocks. (A hedged sketch of the hierarchical residual-refinement idea appears after this table.) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing open-source code or include a link to a code repository for the described methodology. |
| Open Datasets | Yes | The four datasets are Mathit-2D and Perlin for synthetic function interpolation, TFRD for temperature field reconstruction, and PTV for particle tracking velocimetry. ... Mathit-2D [4] ... Perlin ... [23] ... TFRD [2] ... PTV [4] |
| Dataset Splits | No | The paper describes how observed and target points are formed for each interpolation task, which implicitly serve as training inputs and test data (see the illustrative sketch after this table). However, it does not explicitly mention a separate validation split for hyperparameter tuning. |
| Hardware Specification | No | The paper states that 'The numerical calculations in this study were supported by ICT Computer-X center' but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) that would be needed to replicate the experiment. |
| Experiment Setup | Yes | We carefully select and tune several hyperparameters of HINT, including the number of interpolation blocks L, the number of attention layers in each block, the local parameter K(0), and the minimum local parameter Kmin. For the Mathit-2D dataset, we set L = 2, with 6 attention layers in the main block and 2 attention layers in the residue block. As for Perlin, PTV, and TFRD datasets, we set L = 4 with 2 attention layers in each block. ... Regarding the local parameter K(0), we set it to the number of observed points in the interpolation task for Mathit-2D, Perlin, and TFRD datasets... For the PTV dataset, we set K(0) to 128... Additionally, we set the minimum local parameter Kmin to 8 for all datasets. (These settings are collected into an illustrative config sketch after this table.) |
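
Since the paper provides no pseudocode and links no code, the following is a minimal, hypothetical sketch of the hierarchical residual-refinement idea named in the title: a main interpolation block produces an initial estimate at the target points, and each subsequent residue block predicts a correction that is added to the running estimate. All class names, dimensions, and layer counts are assumptions (a uniform layer count per block is used for brevity), and the paper's local-neighborhood mechanism controlled by K(0) and Kmin is omitted; this is not the authors' implementation.

```python
# Hypothetical sketch of hierarchical residual refinement; NOT the authors'
# code (none is released). Module names and sizes are illustrative.
import torch
import torch.nn as nn

class InterpolationBlock(nn.Module):
    """One block: a small Transformer encoder over observed + target tokens,
    with a linear head that emits a scalar value (or residual) per token."""
    def __init__(self, dim=64, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens):
        return self.head(self.encoder(tokens))

class HierarchicalResidualInterpolator(nn.Module):
    """Main block predicts initial target values; each residue block then
    predicts a correction added to the running estimate."""
    def __init__(self, coord_dim=2, dim=64, n_blocks=4, layers_per_block=2):
        super().__init__()
        self.embed = nn.Linear(coord_dim + 1, dim)   # (coords, value/estimate)
        self.blocks = nn.ModuleList(
            [InterpolationBlock(dim, layers_per_block) for _ in range(n_blocks)]
        )

    def forward(self, obs_xy, obs_val, tgt_xy):
        estimate = tgt_xy.new_zeros(tgt_xy.shape[:-1] + (1,))
        for block in self.blocks:
            obs_tok = self.embed(torch.cat([obs_xy, obs_val], dim=-1))
            tgt_tok = self.embed(torch.cat([tgt_xy, estimate], dim=-1))
            tokens = torch.cat([obs_tok, tgt_tok], dim=1)
            delta = block(tokens)[:, obs_xy.shape[1]:]   # target positions only
            estimate = estimate + delta                  # residual refinement
        return estimate

# Tiny smoke test: 32 observed and 68 target points in 2-D.
model = HierarchicalResidualInterpolator()
pred = model(torch.rand(1, 32, 2), torch.rand(1, 32, 1), torch.rand(1, 68, 2))
assert pred.shape == (1, 68, 1)
```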
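
The dataset-splits row notes that observed and target points implicitly define the inputs and prediction targets of each task. Below is a minimal sketch of that formation, assuming a simple random partition of each scattered-data sample; the helper name and split sizes are hypothetical, and the paper specifies no validation split.

```python
# Illustrative only: partition one scattered-data sample into observed
# (input) and target (to-predict) point sets. Function name and sizes
# are assumptions, not the paper's protocol.
import numpy as np

rng = np.random.default_rng(0)

def make_task(points, values, n_observed):
    """Randomly split a sample's points into observed and target subsets."""
    idx = rng.permutation(len(points))
    obs, tgt = idx[:n_observed], idx[n_observed:]
    return (points[obs], values[obs]), (points[tgt], values[tgt])

# Example: 100 scattered 2-D points with scalar values.
pts = rng.random((100, 2))
vals = np.sin(6 * pts[:, 0]) * np.cos(6 * pts[:, 1])
(obs_xy, obs_v), (tgt_xy, tgt_v) = make_task(pts, vals, n_observed=32)
```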
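
Finally, the hyperparameters reported in the experiment-setup row, gathered into one mapping. The values are taken directly from the quoted text; the dictionary layout and key names are assumptions.

```python
# Reported HINT hyperparameters per dataset (dict layout is illustrative).
# "n_observed" stands for K(0) = the number of observed points in the task.
HINT_CONFIGS = {
    "Mathit-2D": dict(L=2, layers_per_block=(6, 2),          # main, residue
                      K0="n_observed", K_min=8),
    "Perlin":    dict(L=4, layers_per_block=(2, 2, 2, 2),
                      K0="n_observed", K_min=8),
    "TFRD":      dict(L=4, layers_per_block=(2, 2, 2, 2),
                      K0="n_observed", K_min=8),
    "PTV":       dict(L=4, layers_per_block=(2, 2, 2, 2),
                      K0=128, K_min=8),
}
```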