Dist2Cycle: A Simplicial Neural Network for Homology Localization

Authors: Alexandros D. Keros, Vidit Nanda, Kartic Subr

AAAI 2022

Reproducibility assessment: each entry below lists a variable, the assessed result, and the supporting excerpt (LLM response).
Research Type: Experimental
"We report mean squared error (MSE) between the predicted and reference relative distances, with distances based on ShortLoop (Dey, Sun, and Wang 2010) acting as ground truth. We experimentally compare our method against a combinatorial baseline, hom_emb (Chen and Meilă 2021)."
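
For concreteness, here is a minimal sketch of the reported evaluation metric, MSE between predicted and reference relative distances. The tensor names `pred` and `ref` are hypothetical; the paper does not specify an implementation.

```python
import torch

def mse(pred: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    # Mean squared error between predicted and reference relative distances.
    return torch.mean((pred - ref) ** 2)
```
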
Researcher Affiliation: Academia
"Alexandros D. Keros (1), Vidit Nanda (2), Kartic Subr (1); (1) The University of Edinburgh, (2) University of Oxford. a.d.keros@sms.ed.ac.uk, nanda@maths.ox.ac.uk, ksubr@ed.ac.uk"

Pseudocode: No
"The paper describes the model using equations and text but does not provide a structured pseudocode or algorithm block."

Open Source Code: Yes
"Code & models: https://github.com/alexdkeros/Dist2Cycle"

Open Datasets: No
"Our TORI datasets consist of Alpha complexes (Edelsbrunner 2010) that originate from considering snapshots of filtrations (Edelsbrunner and Harer 2010) on points sampled from tori manifolds of diverse topological characteristics, in 2 and 3 dimensions. We seek to capture richness of homological information, controllability in terms of scalability in the number of simplices and homology cycles, as well as ease of visualization. We first sampled 400 point clouds from randomly generated configurations of tori and pinched tori, with number of holes ranging from 1 to 5, to which Gaussian noise is added. We then constructed Alpha filtrations on the collection of point clouds..."
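
The excerpt above can be approximated with the Gudhi library the authors cite. Below is a minimal sketch for a single noisy 2-torus sample; the radii, noise level, and sample count are hypothetical placeholders, and the actual dataset generation (tori configurations, pinched tori, filtration snapshots) is more involved.

```python
import numpy as np
import gudhi

# Sample a noisy point cloud from a 2-torus embedded in R^3.
# R (major radius), r (minor radius), sigma (noise) are assumed values.
rng = np.random.default_rng(0)
n, R, r, sigma = 400, 2.0, 0.5, 0.05
theta, phi = rng.uniform(0, 2 * np.pi, (2, n))
points = np.stack([
    (R + r * np.cos(phi)) * np.cos(theta),
    (R + r * np.cos(phi)) * np.sin(theta),
    r * np.sin(phi),
], axis=1) + rng.normal(0, sigma, (n, 3))

# Build the Alpha complex; a filtration snapshot corresponds to
# restricting the simplex tree to simplices below a chosen alpha value.
alpha = gudhi.AlphaComplex(points=points)
st = alpha.create_simplex_tree()
print(st.num_simplices(), st.dimension())
```
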
Dataset Splits: No
"The dataset is split into training (80%) and testing (20%) sets and the models were trained for 1000 epochs, with a mini-batch size of 5 complexes using an Intel Xeon E5-2630 v.4 processor, a TITAN-X 64GB GPU and 64GB of RAM, using CUDA 10.1."
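
A minimal sketch of the stated 80/20 split using torch.utils.data.random_split; the dataset stand-in and seed are hypothetical, and the paper does not state how the split was produced.

```python
import torch
from torch.utils.data import random_split

# `complexes` is a hypothetical stand-in for the TORI dataset of complexes.
complexes = list(range(400))  # placeholder items
n_train = int(0.8 * len(complexes))
train_set, test_set = random_split(
    complexes, [n_train, len(complexes) - n_train],
    generator=torch.Generator().manual_seed(0))
```
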
Hardware Specification: Yes
"The dataset is split into training (80%) and testing (20%) sets and the models were trained for 1000 epochs, with a mini-batch size of 5 complexes using an Intel Xeon E5-2630 v.4 processor, a TITAN-X 64GB GPU and 64GB of RAM, using CUDA 10.1."

Software Dependencies: No
"The GNN model was implemented using the dgl library (Wang et al. 2019) with the Torch backend (Paszke et al. 2017). All simplicial and homology computations were handled by the Gudhi library (The GUDHI Project 2021). We apply a Laplacian smoothing post-processing step. Let x be the output of the model, i.e., the inferred distances for each 1-simplex, and \hat{L} = D^{-1/2}(D - A)D^{-1/2} the normalized graph Laplacian of the 1-skeleton of the complex K, i.e., the underlying graph spanned by the 0- and 1-simplices of K. The signal at the simplices is smoothed as x' = x - \hat{L}x."
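
The smoothing step quoted above is straightforward to implement. A minimal sketch, assuming a dense symmetric adjacency matrix A for the graph carrying the signal; the function name is mine, not the authors'.

```python
import numpy as np

def laplacian_smooth(x: np.ndarray, A: np.ndarray) -> np.ndarray:
    # One smoothing step: x' = x - L_hat @ x, where
    # L_hat = D^{-1/2} (D - A) D^{-1/2} is the symmetrically
    # normalized graph Laplacian built from adjacency matrix A.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L_hat = d_inv_sqrt[:, None] * (np.diag(d) - A) * d_inv_sqrt[None, :]
    return x - L_hat @ x
```
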
Experiment Setup: Yes
"We used a GNN with 12 graph convolutional layers (for 2D as well as 3D), as described by Eq. (8), and 128 hidden units. We chose LeakyReLU activations (φ in Eq. (5)) with negative slope r = 0.02 for the layers, and a hyperbolic tangent (Tanh) for the output. Neighbor activations are aggregated via a summation (L in Eq. (5)). Learnable weights undergo Kaiming uniform initialization (He et al. 2015). Finally, node features are the result of concatenating the Betti numbers describing the homology of the link at each simplex with its 5-dimensional spectral embedding. The dataset is split into training (80%) and testing (20%) sets and the models were trained for 1000 epochs, with a mini-batch size of 5 complexes, using an Intel Xeon E5-2630 v.4 processor, a TITAN-X 64GB GPU and 64GB of RAM, using CUDA 10.1."
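
The stated hyperparameters (12 layers, 128 hidden units, LeakyReLU with slope 0.02, sum aggregation, Kaiming uniform initialization, Tanh output) can be assembled into a skeleton model in dgl/torch. This is a sketch only: Eqs. (5) and (8) are not quoted in this summary, so the convolution below is a generic sum-aggregation stand-in, not the authors' actual layer.

```python
import torch
import torch.nn as nn
import dgl.function as fn

class SumConv(nn.Module):
    # Generic sum-aggregation graph convolution: a hypothetical
    # stand-in for the layer of Eq. (8).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        nn.init.kaiming_uniform_(self.lin.weight)  # Kaiming uniform init
        self.act = nn.LeakyReLU(negative_slope=0.02)

    def forward(self, g, h):
        with g.local_scope():
            g.ndata['h'] = h
            # Aggregate neighbor activations by summation (L in Eq. (5)).
            g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_sum'))
            return self.act(self.lin(g.ndata['h_sum']))

class Dist2CycleSketch(nn.Module):
    def __init__(self, in_dim, hidden=128, n_layers=12):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.layers = nn.ModuleList(
            SumConv(d_in, d_out) for d_in, d_out in zip(dims, dims[1:]))
        self.readout = nn.Linear(hidden, 1)

    def forward(self, g, h):
        for layer in self.layers:
            h = layer(g, h)
        return torch.tanh(self.readout(h))  # Tanh output, as stated
```
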