Nature-Inspired Local Propagation

Authors: Alessandro Betti, Marco Gori

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we report the application of the Hamiltonian Sign Flip (HSF) strategy to the classic Linear Quadratic Tracking (LQT) problem, using a recurrent neural network based on a fully connected digraph. The purpose of the reported experiments is to validate the HSF policy, which is crucial for exploiting the power of the local propagation presented in the paper, since the proposed policy enables on-line processing.
Researcher Affiliation | Academia | Alessandro Betti, IMT School for Advanced Studies, Lucca, Italy (alessandro.betti@imtlucca.it); Marco Gori, DIISM, University of Siena, Siena, Italy (marco.gori@unisi.it)
Pseudocode | Yes | Algorithm 1, Hamiltonian Sign Flip. In red, the changes of sign due to HSF. The temporal locality of the method is evident from the loop on time t, while the spatial locality depends on the structure of each update rule for the states and costates. (What we propose is also valid for unevenly spaced data.)
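The row above names Algorithm 1 without reproducing it. As a rough illustration of the kind of time-local loop it describes, the sketch below runs Euler updates for a state and a costate and flips the sign applied to the costate increment at each step. The dynamics, the flip schedule, and all function names are hypothetical placeholders, not the paper's actual update rules:

```python
import numpy as np

def local_propagation(x0, p0, target, tau=0.01, steps=1000):
    """Hypothetical sketch of a time-local state/costate loop with a
    sign flip on the costate increment (placeholder dynamics, NOT the
    update rules of Algorithm 1 in the paper)."""
    x, p = x0, p0
    sign = 1.0
    xs = []
    for t in range(steps):            # temporal locality: one step at a time
        dx = -x + target(t * tau)     # placeholder state dynamics
        dp = -(x - target(t * tau))   # placeholder costate dynamics
        sign = -sign                  # hypothetical sign-flip schedule
        x = x + tau * dx
        p = p + tau * sign * dp       # flipped sign applied to the costate
        xs.append(x)
    return np.array(xs)
```

The point of the sketch is only structural: every quantity needed at step t is available at step t, so the loop can process data on-line.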
Open Source Code | No | The paper does not include an explicit statement in its main text or appendices providing a link to open-source code for the described methodology. While the NeurIPS checklist indicates code is provided, this information is not found within the paper's content.
Open Datasets | No | In this experiment we used a sinusoidal target... It is composed of patching intervals with cosine functions with constants. This indicates a generated signal, not a publicly available dataset with concrete access information or citation.
Dataset Splits | No | The paper describes temporal horizons and discretizations of signals (e.g., 'temporal horizon [0, T]', 'fixed temporal resolution τ'), but it does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined split citations) that would be needed for reproducibility.
Hardware Specification | No | The paper states in the NeurIPS checklist that 'Computing requirements of the experiment are so modest that any modern laptop can sustain it' but does not provide specific hardware details (e.g., exact CPU/GPU models, memory, or cloud instance types) within its main content or appendices.
Software Dependencies | No | The paper refers to 'Euler's discretization' as the numerical method used, but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x) that would be needed for replication.
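Since Euler's discretization is the only numerical method the paper names, a minimal self-contained reminder of what it computes may be useful. The test problem x' = -x is an arbitrary choice for illustration, not taken from the paper:

```python
def euler(f, x0, t0, t1, tau):
    """Forward Euler: x_{k+1} = x_k + tau * f(t_k, x_k)."""
    x, t = x0, t0
    while t < t1 - 1e-12:
        x = x + tau * f(t, x)
        t += tau
    return x

# Example: x' = -x with x(0) = 1, so x(1) is close to e^{-1}.
approx = euler(lambda t, x: -x, 1.0, 0.0, 1.0, 0.001)
```

With a fixed temporal resolution tau, as quoted in the Dataset Splits row, this is the entire numerical machinery required.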
Experiment Setup | Yes | In Figs. 1 and 2 we can appreciate the effect of the increment of the accuracy term. Figure 1: recurrent net with 5 neurons, q = 10 (accuracy term), rw = 1 (weight regularization term), r = 0.1 (derivative of the weight term). Figure 2: recurrent net with 5 neurons, q = 1000 (accuracy term), rw = 1 (weight regularization term), r = 0.1 (derivative of the weight term). Figure 3: tracking a highly unpredictable signal with 5 neurons, q = 100 (accuracy), rw = 1 (weight regularization), r = 0.1 (derivative of the weight term). Algorithm 1 provides a detailed explanation of the Hamiltonian Sign Flip method... Appendix E for both architectural and algorithmic details.
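The hyperparameters quoted above can be collected into per-figure configurations. The values follow the figure captions; the dictionary layout itself is an assumption for illustration:

```python
# Experiment settings as quoted from Figures 1-3 (dict layout is ours):
# q = accuracy term, rw = weight regularization, r = derivative of the
# weight term; all runs use a recurrent net with 5 neurons.
EXPERIMENTS = {
    "fig1": {"neurons": 5, "q": 10,   "rw": 1, "r": 0.1},
    "fig2": {"neurons": 5, "q": 1000, "rw": 1, "r": 0.1},
    "fig3": {"neurons": 5, "q": 100,  "rw": 1, "r": 0.1},
}

# Figures 1 and 2 differ only in the accuracy term q, which is the
# comparison the quoted text highlights.
delta_q = EXPERIMENTS["fig2"]["q"] - EXPERIMENTS["fig1"]["q"]
```

Laying the settings out this way makes the single varying factor (q) explicit, which is the point of the Fig. 1 vs. Fig. 2 comparison.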