Recursive Regression with Neural Networks: Approximating the HJI PDE Solution

Authors: Vicenç Rubies Royo, Claire Tomlin

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section we present a few 2-dimensional experiments to demonstrate the validity of our claim and the effectiveness of the algorithm. To measure the performance of the algorithm, we compare the difference between our computed approximation and the true analytical solution." (An error-measurement snippet appears after this table.)
Researcher Affiliation | Academia | Vicenç Rubies Royo, Claire Tomlin; Department of Electrical Engineering and Computer Sciences, UC Berkeley, Berkeley, California, USA; vrubies@berkeley.edu, tomlin@berkeley.edu
Pseudocode | Yes | Algorithm 1: Recursive Regression via SGD with Momentum (a hedged training-loop sketch appears after this table).
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | No | The paper does not provide concrete access information for a publicly available or open dataset; instead, it describes self-generating the training data.
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning. It follows a self-generated data approach, sampling points for regression and for error computation.
Hardware Specification | No | The paper does not specify the hardware used for its experiments (e.g., exact GPU/CPU models or processor types); it mentions only the use of multiple threads and different machines.
Software Dependencies | No | The paper does not name the ancillary software (libraries or solvers with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | "For this experiment, a feedforward neural network with a single hidden layer of 10 units and sigmoid activation functions was used. The number of points sampled was chosen to be N = 500, picked uniformly over the set S := {(x1, x2) | x1, x2 ∈ [−5, 5]} and over t ∈ [−T, 0]. The batches were picked to be of size K = 10, with momentum decay γ = 0.95 and learning rate η = 0.1. The interval to renew the regression points was chosen to be 1000 iterations, and the program was halted at 500,000 iterations." (These values are used in the training-loop sketch below.)
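The following is a minimal NumPy sketch of the training loop named in the Pseudocode row (recursive regression via SGD with momentum), wired up with the hyperparameters quoted in the Experiment Setup row. It is an illustration under stated assumptions, not the authors' implementation: the target-generation rule `hji_backup` is a hypothetical placeholder (the paper derives targets from a discretized HJI update whose exact form is not reproduced in this excerpt), and the horizon `T` and weight initialization scale are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters quoted in the Experiment Setup row; T is assumed, since the
# excerpt does not state the horizon's numerical value.
T = 1.0
N, K = 500, 10            # regression points per renewal, minibatch size
eta, gamma = 0.1, 0.95    # learning rate, momentum decay
RENEW, MAX_ITERS = 1000, 500_000

# Feedforward network: one hidden layer of 10 sigmoid units, scalar output
# V(x1, x2, t). Initialization scale is an assumption.
W1 = rng.normal(scale=0.1, size=(10, 3)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.1, size=10);      b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(P):
    H = sigmoid(P @ W1.T + b1)   # hidden activations, shape (batch, 10)
    return H, H @ W2 + b2        # (activations, network output)

def sample_points():
    X = rng.uniform(-5.0, 5.0, size=(N, 2))   # (x1, x2) uniform over S
    t = rng.uniform(-T, 0.0, size=(N, 1))     # t uniform over [-T, 0]
    return np.hstack([X, t])

def hji_backup(P):
    # Hypothetical placeholder for the paper's target rule. The "recursive"
    # part is that targets come from the current network itself; the real
    # rule additionally applies a discretized HJI update not shown here.
    _, V = forward(P)
    return V

P = sample_points()
y = hji_backup(P)
vW1, vb1 = np.zeros_like(W1), np.zeros_like(b1)
vW2, vb2 = np.zeros_like(W2), 0.0

for it in range(1, MAX_ITERS + 1):
    if it % RENEW == 0:   # renew points (and targets) every 1000 iterations
        P = sample_points()
        y = hji_backup(P)
    idx = rng.choice(N, size=K, replace=False)
    Pb, yb = P[idx], y[idx]
    H, V = forward(Pb)

    # Gradients of the mean squared regression loss 0.5 * mean((V - y)^2).
    gV = (V - yb) / K
    gW2, gb2 = gV @ H, gV.sum()
    gH = np.outer(gV, W2) * H * (1.0 - H)   # backprop through the sigmoid layer
    gW1, gb1 = gH.T @ Pb, gH.sum(axis=0)

    # SGD with momentum.
    vW1 = gamma * vW1 - eta * gW1; W1 += vW1
    vb1 = gamma * vb1 - eta * gb1; b1 += vb1
    vW2 = gamma * vW2 - eta * gW2; W2 += vW2
    vb2 = gamma * vb2 - eta * gb2; b2 += vb2
```

Renewing the regression targets from the updated network at each interval is what makes the regression recursive; everything else is standard minibatch SGD with momentum.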
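And a short sketch of the error measurement mentioned in the Research Type row: comparing the computed approximation against the true analytical solution on a grid over S. Both `V_approx` and `V_true` are hypothetical callables standing in for the trained network and the paper's closed-form solution, respectively.

```python
import numpy as np

def approximation_error(V_approx, V_true, t=0.0, n=101):
    # Evaluate both functions on an n x n grid over S = [-5, 5]^2 at time t
    # and report the mean and max absolute difference.
    xs = np.linspace(-5.0, 5.0, n)
    X1, X2 = np.meshgrid(xs, xs)
    pts = np.stack([X1.ravel(), X2.ravel(), np.full(X1.size, t)], axis=1)
    diff = np.abs(V_approx(pts) - V_true(pts))
    return diff.mean(), diff.max()
```

With the training sketch above, `V_approx` could be `lambda P: forward(P)[1]`.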