On the Representation Power of Set Pooling Networks
Authors: Christian Bueno, Alan Hylton
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we empirically validate our theoretical findings through experiments on diverse set learning tasks, including set classification, set anomaly detection, and point cloud classification. |
| Researcher Affiliation | Academia | The authors are listed as "Anonymous Authors" in the reviewed PDF; no institutional affiliations are therefore provided from which to classify the affiliation type. |
| Pseudocode | No | The paper describes theoretical constructions and experimental procedures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described. |
| Open Datasets | Yes | We use the following datasets: MNIST digit subsets, KDD99, and ModelNet40. |
| Dataset Splits | Yes | We use a standard train-validation-test split (80/10/10) for all datasets. |
| Hardware Specification | Yes | All experiments are run on a single NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify version numbers for the software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | Models are trained using the Adam optimizer with a learning rate of 0.001 and a batch size of 64. Training runs for 100 epochs. |
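The 80/10/10 train/validation/test split reported above can be sketched as a small helper; this is an illustrative reconstruction, not the authors' code, and the fixed seed and `range(1000)` toy data are assumptions for the example.

```python
import random

def train_val_test_split(items, fracs=(0.8, 0.1, 0.1), seed=0):
    """Shuffle a copy of `items` and cut it into train/val/test parts."""
    items = list(items)  # copy so the caller's list is untouched
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(fracs[0] * n)
    n_val = int(fracs[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Toy usage: 1000 items -> 800 / 100 / 100 under an 80/10/10 split.
train, val, test = train_val_test_split(range(1000))
```

In practice the split would be applied to MNIST subsets, KDD99, or ModelNet40 sample indices rather than a synthetic range.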
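The reported training setup (Adam, learning rate 0.001, batch size 64, 100 epochs) can be sketched with a from-scratch Adam update on a toy one-parameter objective; the dataset, the squared-error loss, and the scalar parameter are illustrative assumptions, not details from the paper.

```python
import math
import random

# Hyperparameters as reported in the experiment setup row above.
LR, BETA1, BETA2, EPS = 1e-3, 0.9, 0.999, 1e-8
BATCH_SIZE, EPOCHS = 64, 100

random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(640)]  # toy 1-D dataset

w = 0.0      # single scalar parameter: estimate of the data mean
m = v = 0.0  # Adam first/second moment accumulators
t = 0        # Adam timestep for bias correction

def batches(xs, size):
    for i in range(0, len(xs), size):
        yield xs[i:i + size]

for epoch in range(EPOCHS):
    for batch in batches(data, BATCH_SIZE):
        # Gradient of the mean squared error 0.5*(w - x)^2 over the batch.
        g = sum(w - x for x in batch) / len(batch)
        t += 1
        m = BETA1 * m + (1 - BETA1) * g
        v = BETA2 * v + (1 - BETA2) * g * g
        m_hat = m / (1 - BETA1 ** t)
        v_hat = v / (1 - BETA2 ** t)
        w -= LR * m_hat / (math.sqrt(v_hat) + EPS)
```

With these settings each Adam step has magnitude close to the learning rate, so the parameter drifts toward the data mean over the 100 epochs without necessarily converging on this toy problem.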