Differentiability and Optimization of Multiparameter Persistent Homology

Authors: Luis Scoccola, Siddharth Setlur, David Loiseaux, Mathieu Carrière, Steve Oudot

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We complement the theory with numerical experiments supporting the idea that optimizing multiparameter homological descriptors can lead to improved performances compared to optimizing one-parameter descriptors, even when using the simplest and most efficiently computable multiparameter descriptors.
Researcher Affiliation | Academia | (1) Mathematical Institute, University of Oxford, UK; (2) Department of Mathematics, ETH Zürich, Switzerland; (3) DataShape, Centre Inria d'Université Côte d'Azur, France; (4) GeomeriX, Inria Saclay and École Polytechnique, Paris, France.
Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | We use the implementation in (Loiseaux & Schreiber, 2024). Reference: Loiseaux, D. and Schreiber, H. multipers: Multiparameter persistence for machine learning. https://github.com/DavidLapous/multipers, 2024.
Open Datasets | Yes | The point cloud data consist of two interlaced circles with background noise embedded in R^9 (Figure 7), similar to the data used in (Carrière et al., 2021). The paper also lists the graph datasets ENZYMES, IMDB-B, IMDB-M, and MUTAG in Table 1. (A hedged sketch of the point-cloud construction is given after the table.)
Dataset Splits | No | The paper mentions that 'performance is computed over 10 train/test folds' for graph classification, which defines train/test splits, but it does not explicitly detail a separate validation split or its size for any experiment. (A minimal 10-fold evaluation sketch is given after the table.)
Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments.
Software Dependencies | No | The paper mentions the 'Adam optimizer', 'ReLU activation functions', and specific neural network architectures (GCN, GIN, Graph ResNet, Graph DenseNet), but it does not provide version numbers for any software dependencies such as programming languages, libraries, or frameworks. (A hedged GCN sketch using the quoted hyperparameters is given after the table.)
Experiment Setup | Yes | We use simple autoencoder architectures with both encoders and decoders made of three layers of 32 neurons, with the first two layers followed by ReLU activation and batch normalization. ...Optimization is performed with Adam optimizer, learning rate 0.01, and 1000 epochs. All graph architectures have four layers with 256 neurons... GNNs are trained for 200 epochs with Adam optimizer with learning rate 0.001. (A hedged autoencoder sketch matching this setup is given after the table.)
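
The quoted description of the point cloud ('two interlaced circles with background noise embedded in R^9') does not fix the point counts, the noise level, or the embedding, so the following NumPy sketch is only one plausible reconstruction; every numerical choice below is an assumption rather than a value taken from the paper.

```python
# Hypothetical reconstruction of the synthetic point cloud quoted above: two
# linked (interlaced) circles plus uniform background noise, pushed into R^9
# by a random orthonormal embedding. All sizes, radii, and noise bounds are
# assumptions; the paper only states the overall shape of the data.
import numpy as np

rng = np.random.default_rng(0)

def interlaced_circles(n_per_circle=100, n_noise=50, ambient_dim=9):
    t = rng.uniform(0.0, 2.0 * np.pi, size=n_per_circle)
    s = rng.uniform(0.0, 2.0 * np.pi, size=n_per_circle)
    # First circle in the (x, y)-plane, second in the (x, z)-plane shifted
    # along x so that the two circles are interlaced.
    c1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
    c2 = np.stack([1.0 + np.cos(s), np.zeros_like(s), np.sin(s)], axis=1)
    noise = rng.uniform(-1.5, 2.5, size=(n_noise, 3))  # background noise
    cloud3d = np.concatenate([c1, c2, noise], axis=0)
    # Embed the 3D cloud into the ambient space via orthonormal columns.
    embed, _ = np.linalg.qr(rng.standard_normal((ambient_dim, 3)))
    return cloud3d @ embed.T

X = interlaced_circles()
print(X.shape)  # (250, 9)
```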
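
The Experiment Setup row pins down the autoencoder widths and the optimization hyperparameters but not the framework, the latent dimension, or the loss. A minimal sketch, assuming PyTorch, full-batch training, and a plain mean-squared reconstruction loss; the topological loss terms that the paper actually optimizes are omitted.

```python
# Hedged sketch of the quoted autoencoder: encoder and decoder of three dense
# layers of 32 neurons, the first two followed by ReLU and batch normalization,
# trained with Adam (learning rate 0.01) for 1000 epochs. The latent width,
# the MSE loss, and full-batch training are assumptions.
import torch
import torch.nn as nn

def _block(in_dim, out_dim, activate):
    layers = [nn.Linear(in_dim, out_dim)]
    if activate:  # first two layers: ReLU followed by batch normalization
        layers += [nn.ReLU(), nn.BatchNorm1d(out_dim)]
    return layers

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=9, hidden=32, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            *_block(in_dim, hidden, True), *_block(hidden, hidden, True),
            *_block(hidden, latent, False),
        )
        self.decoder = nn.Sequential(
            *_block(latent, hidden, True), *_block(hidden, hidden, True),
            *_block(hidden, in_dim, False),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, X, epochs=1000, lr=0.01):
    # Plain reconstruction training; the topological terms studied in the
    # paper are not included in this sketch.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), X).backward()
        opt.step()
```

Here `in_dim=9` matches the R^9 point cloud above, and the latent width of 32 is an assumption read off the phrase 'three layers of 32 neurons'.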
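
For the graph experiments, the quoted setup gives four layers of 256 neurons, ReLU activations, and Adam with learning rate 0.001 for 200 epochs, and names GCN among the architectures. A sketch of such a GCN classifier, assuming PyTorch Geometric, mean pooling, and a cross-entropy loss, none of which are fixed by the paper:

```python
# Hedged sketch of a GCN graph classifier matching the quoted hyperparameters
# (four layers of width 256, ReLU, Adam with lr 0.001, 200 epochs). PyTorch
# Geometric, the mean-pool readout, and the loss are assumptions.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class GCNClassifier(torch.nn.Module):
    def __init__(self, in_dim, num_classes, hidden=256, num_layers=4):
        super().__init__()
        dims = [in_dim] + [hidden] * num_layers
        self.convs = torch.nn.ModuleList(
            [GCNConv(dims[i], dims[i + 1]) for i in range(num_layers)]
        )
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        return self.head(global_mean_pool(x, batch))

def train(model, loader, epochs=200, lr=1e-3):
    # `loader` is assumed to be a torch_geometric DataLoader over labeled graphs.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for data in loader:
            opt.zero_grad()
            out = model(data.x, data.edge_index, data.batch)
            F.cross_entropy(out, data.y).backward()
            opt.step()
```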
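
Finally, a minimal sketch of the quoted evaluation protocol, 'performance is computed over 10 train/test folds', assuming scikit-learn and stratified folds; `train_and_score` is a hypothetical callback standing in for fitting and scoring any of the models above.

```python
# Minimal sketch of 10-fold train/test evaluation. Stratification, shuffling,
# and score aggregation are assumptions; the paper only states the fold count.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate_over_folds(X, y, train_and_score, n_folds=10, seed=0):
    # X and y are assumed to be NumPy arrays of inputs and class labels.
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```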