Set2Graph: Learning Graphs From Sets
Authors: Hadar Serviansky, Nimrod Segol, Jonathan Shlomi, Kyle Cranmer, Eilam Gross, Haggai Maron, Yaron Lipman
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Testing these models on different machine learning tasks, mainly an application to particle physics, we find them favorable to existing baselines. We demonstrate our model produces state-of-the-art results on this task compared to relevant baselines. We also experimented with another set-to-2-edges problem of Delaunay triangulation, and a set-to-3-edges problem of 3D convex hull, in which we also achieve superior performance to the baselines. (A hedged data-generation sketch for the Delaunay task follows the table.) |
| Researcher Affiliation | Collaboration | Weizmann Institute of Science; New York University; NVIDIA Research |
| Pseudocode | No | The paper describes the model components and their mathematical formulations but does not include any pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about the release of source code for the described methodology. |
| Open Datasets | No | The paper describes the generation of custom datasets (simulated particle physics data, Delaunay triangulations, and convex hull point sets) and indicates their use in experiments, but it does not provide specific links, DOIs, formal citations, or explicit statements of public availability for these datasets. |
| Dataset Splits | Yes | The generated sets are small, ranging from 2 to 14 elements each, with around 0.9M sets divided into train/val/test using the ratios 0.6/0.2/0.2. We generated 20k point-set samples as a training set, 2k for validation, and another 2k as a test set. |
| Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper discusses various models and architectures (e.g., Deep Sets, MLP, GNN) but does not provide specific version numbers for any software dependencies, libraries, or frameworks used for implementation or experimentation. |
| Experiment Setup | Yes | For F2, φ is implemented using Deep Sets [44] with 5 layers and output dimension d1 ∈ {5, 80}; ψ is implemented with an MLP, m, with {2, 3} layers and input dimension d2 defined by d1 and β. All models are trained to minimize the F1 score. Training was stopped after 100 epochs. We performed each experiment 11 times with different random initializations. (A hedged model sketch follows the table.) |
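
The Delaunay set-to-2-edges task mentioned in the Research Type row can be framed as supervised edge prediction over random planar point sets. The paper does not specify the sampling distribution or the labeling code, so the uniform sampling and SciPy-based labeling below are assumptions about how such targets could be generated, not the authors' data pipeline.

```python
# Hedged sketch: building (point set, edge-label) pairs for the Delaunay set-to-2-edges task.
# The uniform [0, 1]^2 sampling and SciPy-based labeling are assumptions for illustration.
import numpy as np
from scipy.spatial import Delaunay


def delaunay_edge_labels(points: np.ndarray) -> np.ndarray:
    """Return a symmetric {0,1} matrix whose 1-entries are Delaunay edges of `points`."""
    n = points.shape[0]
    adjacency = np.zeros((n, n), dtype=np.float32)
    tri = Delaunay(points)
    for simplex in tri.simplices:          # each simplex is one triangle (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = simplex[a], simplex[b]
                adjacency[i, j] = adjacency[j, i] = 1.0
    return adjacency


# Example: one training pair with 50 planar points sampled uniformly in the unit square.
pts = np.random.rand(50, 2)
labels = delaunay_edge_labels(pts)
```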
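
The Experiment Setup row describes the set-to-2-edges model as a composition of φ (a 5-layer Deep Sets network with output dimension d1), a broadcasting step β over element pairs, and a per-pair MLP ψ. Below is a minimal PyTorch sketch of that composition; the hidden widths, the plain concatenation used for β, and the sigmoid edge output are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch of the psi ∘ beta ∘ phi composition described in the setup row.
# Widths, the concatenation-based broadcasting, and the sigmoid output are assumptions.
import torch
import torch.nn as nn


class DeepSetsLayer(nn.Module):
    """One equivariant Deep Sets layer: per-element linear plus a set-mean linear."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.elem = nn.Linear(d_in, d_out)
        self.pool = nn.Linear(d_in, d_out)

    def forward(self, x):                       # x: (batch, n, d_in)
        return torch.relu(self.elem(x) + self.pool(x.mean(dim=1, keepdim=True)))


class Set2EdgesSketch(nn.Module):
    def __init__(self, d_in: int = 2, d_hidden: int = 64, d1: int = 80):
        super().__init__()
        # phi: set-to-set network with 5 Deep Sets layers and output dimension d1.
        dims = [d_in, d_hidden, d_hidden, d_hidden, d_hidden, d1]
        self.phi = nn.Sequential(*[DeepSetsLayer(a, b) for a, b in zip(dims[:-1], dims[1:])])
        # psi: per-pair MLP applied to the broadcast features of both endpoints.
        self.psi = nn.Sequential(nn.Linear(2 * d1, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 1))

    def forward(self, x):                       # x: (batch, n, d_in)
        h = self.phi(x)                         # (batch, n, d1)
        n = h.shape[1]
        # beta: broadcast node features to all ordered pairs (i, j) by concatenation.
        pairs = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),
                           h.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        return torch.sigmoid(self.psi(pairs)).squeeze(-1)   # (batch, n, n) edge probabilities
```

A forward pass on the Delaunay pairs above would map a (batch, n, 2) point tensor to a (batch, n, n) matrix of edge probabilities, which can then be trained against the symmetric label matrix with a binary cross-entropy loss.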