Autoinverse: Uncertainty Aware Inversion of Neural Networks
Authors: Navid Ansari, Hans-Peter Seidel, Nima Vahidi Ferdowsi, Vahid Babaei
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify our proposed method by addressing a set of real-world problems in control, fabrication, and design. We evaluate the performance of Autoinverse by experimenting with existing neural inverse methods and their uncertainty-aware counterparts. |
| Researcher Affiliation | Academia | Navid Ansari, Max Planck Institute for Informatics, Saarbrücken, Germany (nansari@mpi-inf.mpg.de); Hans-Peter Seidel, Max Planck Institute for Informatics, Saarbrücken, Germany (hpseidel@mpi-sb.mpg.de); Nima Vahidi Ferdowsi, Max Planck Institute for Informatics, Saarbrücken, Germany (nvahidi@mpi-inf.mpg.de); Vahid Babaei, Max Planck Institute for Informatics, Saarbrücken, Germany (vbabaei@mpi-inf.mpg.de) |
| Pseudocode | No | The paper describes methods using equations and prose, but it does not include formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are available at: https://gitlab.mpi-klsb.mpg.de/nansari/autoinverse |
| Open Datasets | Yes | Our code and data are available at: https://gitlab.mpi-klsb.mpg.de/nansari/autoinverse. The training data consists of 10,000 pairs of samples generated by randomly sampling the NFP (Multi-joint robot). All networks in the ensemble NFP are trained on 40,000 printed patches [1] (Spectral printer). The training data consists of 50,000 samples queried by randomly sampling the actuation with expansion ratios between -0.2 and 0.2 [33] (Soft robot). See the data-generation sketch after the table. |
| Dataset Splits | No | The paper states 'Typically, we use 10% of the target performance for tuning our inverse methods,' implying a tuning/validation phase, but it does not provide explicit train/validation/test dataset splits (percentages or counts) for the underlying data samples themselves. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models, memory, or other computational specifications used for running experiments. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with their version numbers required to reproduce the experiments. |
| Experiment Setup | Yes | We introduce α and β as hyperparameters to adjust the relative significance of aleatoric and epistemic uncertainties, respectively. We tune these parameters for 3 different sets of values for {α, β}: {{0.1, 1}, {1, 10}, {10, 100}}. All methods except MINI have around 3 million parameters. We inject Gaussian noise N(0, 0.1) into the spectrum of the samples with more than 0.4 LC density. See the objective sketch after the table. |
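
The dataset rows describe simple random-sampling protocols over a forward process. Below is a minimal sketch of the soft-robot variant; `forward_simulator` and the input dimensionality of 8 are hypothetical placeholders standing in for the paper's neural forward process (NFP), not the authors' released code (which is at the GitLab link above).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def forward_simulator(x):
    # Hypothetical stand-in for the paper's neural forward process (NFP):
    # actuation in, measured response out.
    return np.tanh(x).sum(keepdims=True)

n_samples = 50_000   # soft robot: 50,000 samples (quoted above)
n_actuators = 8      # hypothetical input dimensionality

# Randomly sample actuations with expansion ratios in [-0.2, 0.2],
# then query the forward process to obtain training targets.
X = rng.uniform(low=-0.2, high=0.2, size=(n_samples, n_actuators))
Y = np.stack([forward_simulator(x) for x in X])
```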
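
The α and β hyperparameters in the setup row read as weights on aleatoric and epistemic penalty terms added to the target-matching loss during inversion. The sketch below assumes a deep-ensemble surrogate whose members each predict a mean and a log-variance (a common uncertainty-aware formulation); the layer sizes, target, and optimizer settings are illustrative assumptions, not the paper's implementation.

```python
import torch

class Member(torch.nn.Module):
    """Hypothetical ensemble member: maps a design x to (mean, log-variance)."""
    def __init__(self, d_in=8, d_out=4):
        super().__init__()
        self.d_out = d_out
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d_in, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 2 * d_out))

    def forward(self, x):
        out = self.net(x)
        return out[..., :self.d_out], out[..., self.d_out:]

ensemble = [Member() for _ in range(5)]
y_target = torch.zeros(4)       # desired performance (placeholder)
alpha, beta = 1.0, 10.0         # one of the quoted {alpha, beta} settings

x = torch.zeros(8, requires_grad=True)    # design variable being inverted
opt = torch.optim.Adam([x], lr=1e-2)

for _ in range(1_000):
    means, log_vars = zip(*[m(x) for m in ensemble])
    means = torch.stack(means)                      # (n_members, d_out)
    mu = means.mean(dim=0)                          # ensemble prediction
    aleatoric = torch.stack(log_vars).exp().mean()  # avg predicted variance
    epistemic = means.var(dim=0).mean()             # member disagreement
    loss = (mu - y_target).pow(2).mean() + alpha * aleatoric + beta * epistemic
    opt.zero_grad(); loss.backward(); opt.step()
```

Sweeping (alpha, beta) over the three quoted pairs, {0.1, 1}, {1, 10}, and {10, 100}, reproduces the tuning grid described in the row above.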