On Universal Equivariant Set Networks
Authors: Nimrod Segol, Yaron Lipman
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Lastly, we provide numerical experiments validating the theoretical results and comparing different permutation equivariant models. |
| Researcher Affiliation | Academia | Nimrod Segol & Yaron Lipman, Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel. {nimrod.segol,yaron.lipman}@weizmann.ac.il |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code can be found at https://github.com/NimrodSegol/On-Universal-Equivariant-Set-Networks |
| Open Datasets | Yes | We used the ModelNet dataset (Wu et al., 2015). |
| Dataset Splits | Yes | We drew 10k training examples and 1k test examples i.i.d. from a N(1/2, 1) distribution (per entry of X). (See the data-generation sketch after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | Yes | We implemented the experiments in PyTorch (Paszke et al., 2017) with the Adam (Kingma & Ba, 2014) optimizer for learning. |
| Experiment Setup | Yes | For classification we used the cross entropy loss and trained for 150 epochs with learning rate 0.001, learning rate decay of 0.5 every 100 epochs, and batch size 32. For the quadratic function regression we trained for 150 epochs with learning rate 0.001, learning rate decay 0.1 every 50 epochs, and batch size 64; for the regression to the leading eigenvector we trained for 50 epochs with learning rate 0.001 and batch size 32. To regress to the output of a single graph convolution layer we trained for 200 epochs with learning rate 0.001 and batch size 32. (A training-loop sketch follows the table.) |
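The synthetic split quoted in the Dataset Splits row is simple enough to reconstruct. Below is a minimal sketch, assuming a set size `n` and feature dimension `d` that I chose for illustration; the report does not quote the paper's actual shapes.

```python
import torch

# Assumed shapes; the report does not quote the paper's set size or
# per-element feature dimension.
n, d = 100, 3

def draw_examples(num_examples: int) -> torch.Tensor:
    # A standard normal shifted by 1/2 has N(1/2, 1) entries,
    # matching "i.i.d. from a N(1/2, 1) distribution (per entry of X)".
    return torch.randn(num_examples, n, d) + 0.5

X_train = draw_examples(10_000)  # 10k training examples
X_test = draw_examples(1_000)    # 1k test examples
```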
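The Experiment Setup row pins down the classification hyperparameters exactly: Adam, cross entropy loss, 150 epochs, learning rate 0.001 halved every 100 epochs, batch size 32. The sketch below wires those values into a standard PyTorch loop; the model and data are placeholders, not the authors' equivariant set network.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

n, d, num_classes = 100, 3, 40  # assumed shapes, not from the report

# Placeholder model; the paper uses a permutation equivariant set network.
model = nn.Sequential(nn.Flatten(), nn.Linear(n * d, num_classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Learning rate decay of 0.5 every 100 epochs, as quoted in the setup.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

# Dummy data purely to make the loop runnable.
X = torch.randn(256, n, d)
y = torch.randint(0, num_classes, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

for epoch in range(150):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The regression experiments quoted in the same row would change only the loss, epoch count, decay schedule, and batch size to the corresponding quoted values.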