Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Neural Persistence Dynamics

Authors: Sebastian Zeng, Florian Graf, Martin Uray, Stefan Huber, Roland Kwitt

NeurIPS 2024 | Venue PDF | LLM Run Details

Reproducibility variables, assigned results, and supporting LLM responses:
Research Type: Experimental
  "Various (ablation) experiments not only demonstrate the relevance of each model component but provide compelling empirical evidence that our proposed model Neural Persistence Dynamics substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks."
Researcher Affiliation: Collaboration
  University of Salzburg, Austria; Josef Ressel Centre for Intelligent and Secure Industrial Automation, University of Applied Sciences, Salzburg, Austria
Pseudocode: No
  The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes
  "Our publicly available reference implementation can be found at https://github.com/plus-rkwitt/neural_persistence_dynamics."
Open Datasets: Yes
  "Our publicly available reference implementation can be found at https://github.com/plus-rkwitt/neural_persistence_dynamics. ... For reproducibility, we will release the simulation data publicly."
Dataset Splits: Yes
  "We randomly partition each dataset into five training/testing splits of size 80/20."
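The quoted split protocol (five independent random 80/20 train/test partitions) can be sketched as follows; the dataset size and seed below are illustrative assumptions, not values from the paper.

```python
import random

def make_splits(n_samples, n_splits=5, train_frac=0.8, seed=0):
    """Partition sample indices into n_splits independent random
    train/test splits (default 80/20, as in the quoted protocol)."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        cut = int(train_frac * n_samples)
        splits.append((idx[:cut], idx[cut:]))
    return splits

# Example: five 80/20 splits of a hypothetical 1000-sample dataset.
splits = make_splits(1000)
```

Each split reshuffles all indices, so the five test sets may overlap; this matches repeated random subsampling rather than 5-fold cross-validation.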
Hardware Specification: Yes
  "All experiments were run on an Ubuntu Linux system (22.04), running kernel 5.15.0-100-generic, with 34 Intel Core i9-10980XE CPU @ 3.00GHz cores, 128 GB of main memory, and two NVIDIA GeForce RTX 3090 GPUs."
Software Dependencies: No
  The paper mentions software components like the mTAN architecture, Euler method, ADAM, and Ripser++, but does not provide specific version numbers for these dependencies.
Experiment Setup: Yes
  "Each model is trained for 150 epochs using ADAM [31] (with a weight decay of 0.001), starting at a learning rate of 0.001 (decaying according to a cosine annealing schedule) and MSE as a reconstruction (i.e., to evaluate the first term in Eq. (4)) and regression loss."
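As a minimal sketch of the quoted schedule: under the cosine annealing convention used by, e.g., PyTorch's CosineAnnealingLR, the learning rate decays from its initial value 0.001 as lr_t = lr_min + 0.5 (lr_0 - lr_min)(1 + cos(pi t / T)) over T = 150 epochs. The minimum learning rate of 0 is an assumption, not stated in the quote.

```python
import math

def cosine_annealing_lr(epoch, lr_init=1e-3, lr_min=0.0, total_epochs=150):
    """Learning rate at a given epoch under a cosine annealing schedule
    (PyTorch CosineAnnealingLR convention; lr_min=0 is an assumption)."""
    return lr_min + 0.5 * (lr_init - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )

# Starts at 0.001, passes 0.0005 at the midpoint, reaches lr_min at epoch 150.
schedule = [cosine_annealing_lr(e) for e in range(151)]
```

The schedule is monotonically decreasing over the 150 epochs, so no warm restarts are involved.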