Benchmarking Deep Inverse Models Over Time, and the Neural-Adjoint Method
Authors: Simiao Ren, Willie Padilla, Jordan Malof
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using this metric, we compare several state-of-the-art inverse modeling approaches on four benchmark tasks: two existing tasks, a new 2-dimensional sinusoid task, and a challenging modern task of meta-material design. |
| Researcher Affiliation | Academia | Dept. of Electrical and Computer Engineering, Duke University, Durham, NC 27705 |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | We release code for all inverse models, as well as (fast) simulation software for each benchmark problem, so that other researchers can easily repeat our experiments. https://github.com/BensonRen/BDIMNNA |
| Open Datasets | Yes | Inspired by the recent benchmark study [2], we include two popular existing tasks: ballistics targeting (D1), and robotic arm control (D3). For these two tasks we use the same experimental designs as [2], including their simulator (i.e., forward model) parameters, simulator sampling procedures, and their training/testing splits. |
| Dataset Splits | Yes | For these two tasks we use the same experimental designs as [2], including their simulator (i.e., forward model) parameters, simulator sampling procedures, and their training/testing splits. All details can be found in [2] and our supplement. |
| Hardware Specification | No | The paper mentions 'common hardware' but does not provide specific details like GPU or CPU models. |
| Software Dependencies | No | The paper mentions using 'modern deep learning software packages' but does not list specific software names with version numbers. |
| Experiment Setup | No | All models utilized the same training and testing data, batch size, and stopping criteria (for training). In those cases where model hyperparameters were not available from [2], we budgeted approximately one day of computation time (on common hardware) to optimize hyperparameters, while again constraining model sizes. Full implementation details can be found in the supplementary material. |
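
As a point of reference for the 2-dimensional sinusoid benchmark and the fixed training/testing splits cited in the table above, the sketch below shows what such a forward simulator and split might look like. The functional form, sampling range, dataset size, and 80/20 ratio are illustrative assumptions, not the paper's exact specification (see [2] and the authors' supplement for those).

```python
import numpy as np

# Minimal sketch of a 2D sinusoid forward model y = f(x), standing in for the
# paper's "2-dimensional sinusoid task". The functional form and input range
# below are assumptions chosen for illustration only.
def forward_model(x):
    # x has shape (N, 2); return one scalar response per design vector
    return np.sin(3 * np.pi * x[:, 0]) + np.cos(3 * np.pi * x[:, 1])

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(10_000, 2))  # sample the design space
y = forward_model(x)

# Fixed train/test split; the 80/20 ratio here is an assumption, not the
# split used in the benchmark study.
split = int(0.8 * len(x))
x_train, y_train = x[:split], y[:split]
x_test, y_test = x[split:], y[split:]
```

Generating data from a cheap, exactly known simulator in this way is what lets every inverse model in the comparison be trained and evaluated on identical data with identical splits, which is the point the Dataset Splits row above is making.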