Powerset Convolutional Neural Networks
Authors: Chris Wendler, Markus Püschel, Dan Alistarh
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Prototypical experiments with several set function classification tasks on synthetic datasets and on datasets derived from real-world hypergraphs demonstrate the potential of our new powerset CNNs. |
| Researcher Affiliation | Academia | Chris Wendler, Department of Computer Science, ETH Zurich, Switzerland (chris.wendler@inf.ethz.ch); Dan Alistarh, IST Austria (dan.alistarh@ist.ac.at); Markus Püschel, Department of Computer Science, ETH Zurich, Switzerland (pueschel@inf.ethz.ch) |
| Pseudocode | No | The paper describes methods and processes but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Sample implementations are provided at https://github.com/chrislybaer/Powerset-CNN. |
| Open Datasets | Yes | Reference [5] provides 19 real-world hypergraph datasets. Each dataset is a hypergraph evolving over time. An example is the DBLP coauthorship hypergraph in which vertices are authors and hyperedges are publications. ... A. R. Benson, R. Abebe, M. T. Schaub, A. Jadbabaie, and J. Kleinberg. Simplicial closure and higher-order link prediction. Proc. National Academy of Sciences, 115(48):E11221–E11230, 2018. |
| Dataset Splits | No | The paper states, 'We use 80% of the samples for training, and the remaining 20% for testing.' It does not explicitly mention a validation split or percentage; only training and test portions are specified. |
| Hardware Specification | Yes | All our experiments were run on a server with an Intel(R) Xeon(R) CPU @ 2.00GHz with four NVIDIA Tesla T4 GPUs. |
| Software Dependencies | No | The paper mentions 'We implemented the powerset convolutional and pooling layers in Tensorflow [1]' but does not provide a specific version number for TensorFlow or any other software dependency. |
| Experiment Setup | Yes | For all models we use 32 output channels per convolutional layer and ReLU [32] non-linearities. We train all models for 100 epochs (passes through the training data) using the Adam optimizer [24] with initial learning rate 0.001 and an exponential learning rate decay factor of 0.95. The learning rate decays after every epoch. We use batches of size 128 and the cross entropy loss. |
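
The open-source-code and software-dependencies rows above refer to powerset convolutional and pooling layers implemented in TensorFlow. As an illustration of the underlying operation, the following minimal NumPy sketch computes one simple variant of a powerset convolution, (h * s)(A) = Σ_{B ⊆ A} h(B) · s(A \ B), over set functions stored as length-2^n vectors indexed by subset bitmasks. The choice of this particular variant, the function name, and the data layout are assumptions made for illustration; the paper defines a family of such convolutions, and the released code at https://github.com/chrislybaer/Powerset-CNN should be consulted for the exact layers used.

```python
import numpy as np

def powerset_convolution(h, s, n):
    r"""One simple variant of powerset convolution (illustrative assumption).

    h, s : arrays of length 2**n holding set functions; index A is a bitmask
           encoding a subset of the n-element ground set.
    Returns the set function (h * s)(A) = sum_{B \subseteq A} h(B) * s(A \ B).
    """
    out = np.zeros(2 ** n)
    for A in range(2 ** n):
        B = A
        while True:                      # enumerate all subsets B of A
            out[A] += h[B] * s[A & ~B]   # A & ~B encodes the set difference A \ B
            if B == 0:
                break
            B = (B - 1) & A
    return out

# Tiny usage example on a ground set of 3 elements (8 subsets).
n = 3
rng = np.random.default_rng(0)
h = rng.normal(size=2 ** n)   # filter, itself a set function
s = rng.normal(size=2 ** n)   # input signal (set function)
print(powerset_convolution(h, s, n))
```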
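
The dataset-splits and experiment-setup rows together specify an 80/20 train/test split, 32 output channels per convolutional layer with ReLU, the Adam optimizer at initial learning rate 0.001 with a per-epoch exponential decay factor of 0.95, batch size 128, 100 epochs, and the cross-entropy loss. The sketch below wires those quoted hyperparameters into a generic Keras training loop on synthetic data; the model body is a dense placeholder rather than the paper's powerset convolutional and pooling layers, so it illustrates the training configuration only.

```python
import numpy as np
import tensorflow as tf

# Hypothetical synthetic data standing in for the paper's set function
# classification tasks: each sample is a length-2**n set function vector.
n, num_samples, num_classes = 5, 1000, 2
rng = np.random.default_rng(0)
X = rng.normal(size=(num_samples, 2 ** n)).astype("float32")
y = rng.integers(0, num_classes, size=num_samples)

# 80% of the samples for training, the remaining 20% for testing.
split = int(0.8 * num_samples)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Placeholder model: 32 output channels and ReLU as quoted, but with dense
# layers instead of the paper's powerset convolutional/pooling layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_classes),
])

# Adam with initial learning rate 0.001, decayed by a factor of 0.95
# after every epoch (one epoch = steps_per_epoch optimizer steps).
steps_per_epoch = len(X_train) // 128
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=steps_per_epoch,
    decay_rate=0.95,
    staircase=True,
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# 100 epochs, batch size 128, cross-entropy loss.
model.fit(X_train, y_train, epochs=100, batch_size=128, verbose=0)
print(model.evaluate(X_test, y_test, verbose=0))
```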