Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Extending Path-Dependent NJ-ODEs to Noisy Observations and a Dependent Observation Framework

Authors: William Andersson, Jakob Heiss, Florian Krach, Josef Teichmann

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental In this work we discuss two extensions to lift these restrictions and provide theoretical guarantees as well as empirical examples for them. In Section 6 we show empirically that the PD-NJ-ODE performs well in these generalised settings.
Researcher Affiliation Academia William Andersson, EMAIL, Department of Computer Science, ETH Zurich, Switzerland; Jakob Heiss, EMAIL, Department of Mathematics, ETH Zurich, Switzerland; Florian Krach, EMAIL, Department of Mathematics, ETH Zurich, Switzerland; Josef Teichmann, EMAIL, Department of Mathematics, ETH Zurich, Switzerland
Pseudocode Yes In Algorithm 1 we present the forward pass of the PD-NJ-ODE, which can be used for training as well as for evaluating the model. Algorithms 2 and 3 present the standard loss function and the loss function for noisy observations, respectively. The training for the model follows the standard neural network training approach via stochastic gradient descent (SGD) and is presented in Algorithm 4.
Open Source Code Yes The code with all new experiments and those from Krach et al. (2022) is available at https://github.com/FlorianKrach/PD-NJODE.
Open Datasets Yes For example the Physionet dataset (Goldberger et al., 2000) is exactly such a dataset... Details on the standard Physionet dataset are given in Herrera et al. (2021, Appendix F.5.3).
Dataset Splits Yes We sample 20 000 paths of which 80% are used as training set and the remaining 20% as test set.
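The 80%/20% split quoted above can be sketched as follows; this is an illustrative reconstruction (with an assumed random seed and index-based paths), not the authors' actual sampling code, which lives in the linked repository.

```python
import numpy as np

# Sketch of the reported split: 20 000 sampled paths, 80% train / 20% test.
# The RNG seed and index representation are assumptions for illustration.
rng = np.random.default_rng(0)
n_paths = 20_000
indices = rng.permutation(n_paths)

n_train = int(0.8 * n_paths)  # 16 000 training paths, 4 000 test paths
train_idx, test_idx = indices[:n_train], indices[n_train:]
```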
Hardware Specification No The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It makes no mention of hardware related to the experimental setup in the provided sections.
Software Dependencies No The paper mentions the 'iisignature' package and 'PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup Yes Architecture. We use the PD-NJ-ODE with the following architecture. The latent dimension is d_H = 100, the readout network is a linear map and the other 2 neural networks have the same structure of 1 hidden layer with ReLU activation function and 100 nodes. The signature is used up to truncation level 3, the encoder is recurrent and the decoder uses a residual connection. Training. We use the Adam optimizer with the standard choices β = (0.9, 0.999), weight decay of 0.0005 and learning rate 0.001. Moreover, a dropout rate of 0.1 is used for every layer and training is performed with a mini-batch size of 200 for 200 epochs.
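The reported hyperparameters translate directly into a PyTorch setup. The sketch below mirrors only the generic building blocks (a 1-hidden-layer ReLU network with 100 nodes and dropout 0.1, a linear readout, and the Adam settings); the output dimension and variable names are assumptions, and the actual PD-NJ-ODE components (signature features, recurrent encoder, residual decoder) are in the linked repository.

```python
import torch
import torch.nn as nn

latent_dim = 100  # d_H = 100, as reported

# "1 hidden layer with ReLU activation function and 100 nodes",
# with the reported dropout rate of 0.1.
hidden_net = nn.Sequential(
    nn.Linear(latent_dim, 100),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(100, latent_dim),
)
readout = nn.Linear(latent_dim, 1)  # linear readout map (output dim assumed)

# Adam with the reported standard betas, weight decay, and learning rate.
params = list(hidden_net.parameters()) + list(readout.parameters())
optimizer = torch.optim.Adam(
    params, lr=0.001, betas=(0.9, 0.999), weight_decay=0.0005
)
```

With a mini-batch size of 200, one forward pass maps a (200, 100) latent batch to a (200, 1) readout; training then loops over this for the reported 200 epochs.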