Neural Snowflakes: Universal Latent Graph Inference via Trainable Latent Geometries
Authors: Haitz Sáez de Ocáriz Borde, Anastasis Kratsios
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct synthetic experiments to demonstrate the superior metric learning capabilities of neural snowflakes when compared to more familiar spaces like Euclidean space. Additionally, we carry out latent graph inference experiments on graph benchmarks. |
| Researcher Affiliation | Academia | Haitz Sáez de Ocáriz Borde, University of Oxford, Oxford, UK (chri6704@ox.ac.uk); Anastasis Kratsios, Department of Mathematics, McMaster University and the Vector Institute, Ontario, Canada (kratsioa@mcmaster.ca) |
| Pseudocode | No | The paper describes the neural snowflake architecture and provides mathematical equations (e.g., equation 3 for the iterative representation of f(t)), but it does not include explicitly labeled 'Pseudocode' or 'Algorithm' blocks or other structured, code-like procedures (a hedged sketch of such a metric module is given after the table). |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We first present results from latent graph inference on the well-known Cora and CiteSeer homophilic graph benchmarks. ... We also include results for the Tadpole and Aerothermodynamics datasets used in Sáez de Ocáriz Borde et al. (2023c). |
| Dataset Splits | No | The paper mentions 'training and testing splits' only in a general sense and relies on the 'well-known Cora and CiteSeer homophilic graph benchmarks', which typically come with standard splits, but it does not provide the specific percentages, sample counts, or validation-split details needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions model components such as GCNs but does not provide version numbers for any software dependencies (e.g., libraries, frameworks, or programming languages). |
| Experiment Setup | Yes | We use a consistent latent space dimensionality of 8 and perform the Gumbel top-k trick for edge sampling with a k value of 7. The models all share the same latent space dimensionality, differing solely in their geometric characteristics. ... All other parameters, comprising network settings and training hyperparameters, remain unaltered. For all DGM modules, GCNs are employed as the underlying GNN diffusion layers. (A sketch of the Gumbel top-k edge-sampling step follows the table.) |
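Since the paper releases no code (per the Open Source Code row above), the sketch below is a minimal, hypothetical illustration of the neural-snowflake idea referenced in the Pseudocode row: pairwise Euclidean distances in the latent space are warped by a learned monotone function built from positive weights, loosely in the spirit of the iterative representation of f(t) in the paper's equation 3. The class name `SnowflakeMetric`, the layer sizes, and the tanh activation are illustrative assumptions, not the authors' architecture, and the sketch does not enforce the paper's exact conditions on f.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SnowflakeMetric(nn.Module):
    """Hypothetical sketch of a trainable warped metric (not the paper's code).

    Euclidean pairwise distances are passed through a learned scalar function
    f with f(0) = 0 that is monotone increasing, obtained by composing linear
    maps with positive weights and a monotone activation.
    """

    def __init__(self, hidden_dim: int = 16, num_layers: int = 2):
        super().__init__()
        # Unconstrained parameters; softplus at use time keeps the effective
        # weights positive, which preserves monotonicity of the warp.
        self.weights = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1 if l == 0 else hidden_dim, hidden_dim))
             for l in range(num_layers)]
        )
        self.readout = nn.Parameter(0.1 * torch.randn(hidden_dim, 1))

    def warp(self, t: torch.Tensor) -> torch.Tensor:
        # t: (..., 1) nonnegative distances -> (..., 1) warped distances.
        h = t
        for w in self.weights:
            h = torch.tanh(h @ F.softplus(w))  # positive weights, monotone act.
        return h @ F.softplus(self.readout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, d) latent embeddings -> (n, n) warped pairwise distances.
        d = torch.cdist(x, x)
        return self.warp(d.unsqueeze(-1)).squeeze(-1)
```

With the latent dimensionality of 8 reported in the Experiment Setup row, `SnowflakeMetric()(torch.randn(100, 8))` would yield a 100 x 100 matrix of learned distances.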
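The Experiment Setup row reports edge sampling via the Gumbel top-k trick with k = 7. As a rough illustration of that step, the sketch below perturbs distance-based logits with Gumbel noise and keeps the k best neighbours per node; the function name, the temperature parameter `tau`, and the `edge_index` layout are assumptions made for this sketch rather than details taken from the paper.

```python
import torch


def gumbel_top_k_edges(dist: torch.Tensor, k: int = 7, tau: float = 1.0) -> torch.Tensor:
    """Sample k latent-graph neighbours per node via the Gumbel top-k trick.

    `dist` is an (n, n) matrix of latent distances (e.g. from a learned
    metric); smaller distance -> higher logit. k = 7 matches the value in
    the Experiment Setup row; the temperature `tau` is an illustrative
    extra knob, not a value taken from the paper.
    """
    logits = -dist / tau
    # Adding Gumbel(0, 1) noise and taking the top k is equivalent to
    # sampling k edges without replacement from the softmax over logits.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits).clamp_min(1e-10)))
    perturbed = logits + gumbel
    perturbed.fill_diagonal_(float("-inf"))  # exclude self-loops
    neighbours = perturbed.topk(k, dim=-1).indices                   # (n, k)
    src = torch.arange(dist.size(0), device=dist.device).repeat_interleave(k)
    return torch.stack([src, neighbours.reshape(-1)])                # (2, n*k)
```

A DGM-style module would typically recompute this `edge_index` from the current latent embeddings at every forward pass and hand it to the GCN diffusion layers.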