RID-Noise: Towards Robust Inverse Design under Noisy Environments

Authors: Jia-Qi Yang, Ke-Bin Fan, Hao Ma, De-Chuan Zhan

AAAI 2022, pp. 4654-4661 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Further experiments on several real-world benchmark tasks with noises confirm that our method is more effective than other state-of-the-art inverse design methods. To evaluate the proposed model, we conduct a set of experiments to answer the following questions: Q1: How does RID-Noise perform on real-world inverse design problems compared to state-of-the-art methods?"
Researcher Affiliation | Academia | (1) State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; (2) Research Institute of Superconductor Electronics (RISE), School of Electronic Science and Engineering, Nanjing University, Nanjing, China
Pseudocode | Yes | Algorithm 1: Estimating the RID-Noise Weights
Open Source Code | Yes | Code and supplementary material are publicly available at https://github.com/ThyrixYang/rid-noise-aaai22
Open Datasets | Yes | We select three benchmark tasks from previous research on inverse design. Kinematics: an articulated arm moves vertically along a rail and rotates at three joints; the inverse design problem is to find the angle parameters that reach a given point (Ardizzone et al. 2019). Ballistics: a ball is thrown forward and lands on the ground; the inverse design problem is to find the angle, velocity, and position given the landing position (Kruse et al. 2021); a simplified forward model for this task is sketched after the table. Meta-Material: the goal is to design the radii and heights of four cylinders of a meta-material so that it produces a desired electromagnetic reflection spectrum (Ren, Padilla, and Malof 2020).
Dataset Splits | Yes | The r(xi) can be estimated for every (xi, yi) in the whole dataset with cross-validation. Algorithm 1 (Estimating the RID-Noise Weights): split dataset D evenly into k datasets {D1, D2, ..., Dk}; for i in {1, ..., k}, the training dataset is Dt = {Dj : j ≠ i} and the validation dataset is Dv = Di. A runnable cross-validation sketch follows the table.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or other specific libraries and their versions).
Experiment Setup | Yes | We use the Adam optimizer with the learning rate tuned within {1e-2, 1e-3, 1e-4} and weight decay tuned within {1e-4, 1e-5, 1e-6, 1e-7} for all methods. The network structures are tuned for each method, for example: the MLP layer number and layer width in the Tandem, cVAE, cGAN, and NA methods; the type of coupling blocks in INN-based methods such as INN-LL, INN-MMD, and cINN; and the clamp value of the coupling blocks of INN-based methods. There are also hyper-parameters specific to one method, such as the weights of bi-directional training in INN-MMD. A minimal sweep sketch appears after the table.
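The Ballistics task quoted above can be made concrete with a forward model. The sketch below assumes a drag-free projectile launched from (x0, y0) with angle theta and speed v, observing only the horizontal landing position; the benchmark's actual simulator (Kruse et al. 2021) may differ (e.g., by modeling air resistance), and the function name and parameterization are illustrative only.

```python
# Illustrative (not the benchmark's exact) forward model for Ballistics:
# a drag-free projectile; the observable is the landing x-position.
import numpy as np

def ballistics_forward(x0, y0, theta, v, g=9.81):
    """Horizontal landing position of a projectile launched from (x0, y0 >= 0)."""
    vx, vy = v * np.cos(theta), v * np.sin(theta)
    # Solve y0 + vy*t - 0.5*g*t^2 = 0 for the positive landing time.
    t_land = (vy + np.sqrt(vy**2 + 2.0 * g * y0)) / g
    return x0 + vx * t_land
```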
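Algorithm 1 and the split description in the Dataset Splits row outline how r(xi) is estimated with k-fold cross-validation. Below is a minimal Python sketch of that procedure; the MLP surrogate, the squared-error residual, and the exp(-r/tau) residual-to-weight map are illustrative assumptions standing in for the paper's exact choices.

```python
# Hedged sketch of Algorithm 1: estimate a residual r(x_i) for every sample
# via k-fold cross-validation, then map residuals to training weights.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def estimate_rid_noise_weights(X, Y, k=5, tau=1.0):
    residuals = np.zeros(len(X))
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        # Train on the other k-1 folds, evaluate on the held-out fold Di.
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        model.fit(X[train_idx], Y[train_idx])
        pred = model.predict(X[val_idx]).reshape(len(val_idx), -1)
        target = Y[val_idx].reshape(len(val_idx), -1)
        residuals[val_idx] = ((pred - target) ** 2).mean(axis=1)
    # Hypothetical residual-to-weight map: noisier samples get smaller weights.
    return np.exp(-residuals / tau)
```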
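The hyper-parameter search described in the Experiment Setup row can be sketched as a grid sweep with Adam. Only the learning-rate and weight-decay grids come from the paper; the model builder, the one-epoch training loop, and the val_score selection criterion are placeholders.

```python
# Minimal sketch of the reported sweep: Adam with lr in {1e-2, 1e-3, 1e-4}
# and weight decay in {1e-4, ..., 1e-7}; everything else is a placeholder.
import itertools
import torch
import torch.nn as nn

def sweep(build_model, train_loader, val_score):
    best, best_cfg = -float("inf"), None
    for lr, wd in itertools.product([1e-2, 1e-3, 1e-4],
                                    [1e-4, 1e-5, 1e-6, 1e-7]):
        model = build_model()
        opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
        for x, y in train_loader:  # one pass shown for brevity
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        score = val_score(model)   # hypothetical validation metric
        if score > best:
            best, best_cfg = score, (lr, wd)
    return best_cfg
```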