Learning nonlinear level sets for dimensionality reduction in function approximation
Authors: Guannan Zhang, Jiaxin Zhang, Jacob Hinkle
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We developed a Nonlinear Level-set Learning (NLL) method for dimensionality reduction in high-dimensional function approximation with small data. ... The NLL approach is demonstrated by applying it to three 2D functions and two 20D functions for showing the improved approximation accuracy with the use of nonlinear transformation, as well as to an 8D composite material design problem for optimizing the buckling-resistance performance of composite shells of rocket inter-stages. ... We evaluated our method using three 2D functions in 4.1 for visualizing the nonlinear capability, two 20D functions in 4.2 for comparing our method with brute-force neural networks, SIR and AS methods, as well as a composite material design problem in 4.3 for demonstrating the potential impact of our method on real-world engineering problems. ... The results for f4 and f5 are shown in Table 1 and 2, respectively. |
| Researcher Affiliation | Academia | Guannan Zhang, Computer Science and Mathematics Division, Oak Ridge National Laboratory, zhangg@ornl.gov; Jiaxin Zhang, National Center for Computational Sciences, Oak Ridge National Laboratory, zhangj@ornl.gov; Jacob Hinkle, Computational Science and Engineering Division, Oak Ridge National Laboratory, hinklejd@ornl.gov |
| Pseudocode | No | The paper presents a mathematical formulation of the RevNet model (Eq. 3) but does not include any clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Source code for the proposed NLL method is available in the supplemental material. |
| Open Datasets | No | The paper describes generating its own training data through simulations ('running expensive simulations', 'simplified FEM model') and does not provide access information (link, DOI, citation) to any publicly available or open datasets. |
| Dataset Splits | Yes | the training set included 121 uniformly distributed samples in Ω, and the validation set included 2000 uniformly distributed samples in Ω. ... We used various sizes of training data: 100, 500, 10,000, and we used another 10,000 samples as validation data. (See the sampling sketch below the table.) |
| Hardware Specification | Yes | The RevNet in Eq. (3) with the new loss function in Eq. (11) was implemented in PyTorch 1.1 and tested on a 2014 iMac desktop with a 4 GHz Intel Core i7 CPU and 32 GB DDR3 memory. |
| Software Dependencies | Yes | The RevNet in Eq. (3) with the new loss function in Eq. (11) was implemented in PyTorch 1.1... The implementation of both networks was based on the neural network toolbox in MATLAB 2017a. |
| Experiment Setup | Yes | Specifically, u and v in Eq. (3) were 1D variables (as the total dimension is 2); the number of layers was N = 10, i.e., 10 blocks of the form in Eq. (3) were connected; K_{n,1}, K_{n,2} were 2×1 matrices; b_{n,1}, b_{n,2} were 2D vectors; the activation function was tanh(); the time step h was set to 0.25; the stochastic gradient descent method was used to train the RevNet with the learning rate being 0.01; no regularization was applied to the network parameters; the weights in Eq. (9) were set to ω = (0, 1); λ = 1 in the loss function in Eq. (11). (See the RevNet sketch below the table.) |
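
The Experiment Setup row pins down the RevNet configuration used for the 2D tests. The sketch below is a minimal reconstruction, not the authors' code: this report does not reproduce Eq. (3) itself, so the coupled reversible-block form (and the names `RevNetBlock`, `dim`, `hidden`) is an assumption based on the quoted shapes (2×1 weight matrices, 2D bias vectors) and hyperparameters (N = 10 blocks, tanh activation, time step h = 0.25, plain SGD at learning rate 0.01, no regularization).

```python
import torch
import torch.nn as nn

class RevNetBlock(nn.Module):
    # One block of the coupled reversible form assumed for Eq. (3):
    #   u_{n+1} = u_n + h * K1^T tanh(K1 v_n + b1)
    #   v_{n+1} = v_n - h * K2^T tanh(K2 u_{n+1} + b2)
    # For the 2D tests: u, v are 1D; K_{n,1}, K_{n,2} are 2x1; b_{n,1}, b_{n,2} are 2D.
    def __init__(self, dim=1, hidden=2, h=0.25):
        super().__init__()
        self.h = h
        self.K1 = nn.Parameter(0.1 * torch.randn(hidden, dim))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.K2 = nn.Parameter(0.1 * torch.randn(hidden, dim))
        self.b2 = nn.Parameter(torch.zeros(hidden))

    def forward(self, u, v):
        # u, v: tensors of shape (batch, dim)
        u = u + self.h * torch.tanh(v @ self.K1.T + self.b1) @ self.K1
        v = v - self.h * torch.tanh(u @ self.K2.T + self.b2) @ self.K2
        return u, v

# N = 10 blocks connected in sequence; plain SGD at learning rate 0.01,
# with no weight decay, matching "no regularization was applied".
net = nn.ModuleList([RevNetBlock() for _ in range(10)])
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
```

The loss in Eq. (11) (with its ω and λ weights) is not reproduced in this report, so it is omitted here rather than guessed at.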
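
For the Dataset Splits row: 121 uniformly distributed 2D training samples is consistent with an 11 × 11 grid, but the quoted excerpt does not say grid versus random draws, and it does not give the bounds of Ω. Both the grid reading and the [-1, 1]^2 domain below are therefore assumptions, shown only to make the split sizes concrete.

```python
import numpy as np

# Assumed domain: Omega = [-1, 1]^2 (the excerpt does not state the bounds).
lo, hi = -1.0, 1.0

# Training set: 121 uniformly distributed samples, read here as an 11 x 11 grid.
axis = np.linspace(lo, hi, 11)
gx, gy = np.meshgrid(axis, axis)
x_train = np.column_stack([gx.ravel(), gy.ravel()])  # shape (121, 2)

# Validation set: 2000 samples drawn uniformly at random in Omega.
rng = np.random.default_rng(0)
x_val = rng.uniform(lo, hi, size=(2000, 2))          # shape (2000, 2)
```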