Porcupine Neural Networks: Approximating Neural Network Landscapes

Authors: Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Moreover, our theoretical and empirical results suggest that an unconstrained neural network can be approximated using a polynomially-large PNN."
Researcher Affiliation | Academia | Soheil Feizi, Department of Computer Science, University of Maryland, College Park (sfeizi@cs.umd.edu); Hamid Javadi, Department of Electrical and Computer Engineering, Rice University (hrhakim@rice.edu); Jesse Zhang, Department of Electrical Engineering, Stanford University (jessez@stanford.edu); David Tse, Department of Electrical Engineering, Stanford University (dntse@stanford.edu)
Pseudocode | No | The paper does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "We provide code for PNN experiments in the following link: https://github.com/jessemzhang/porcupine_neural_networks"
Open Datasets | Yes | "Next, we evaluate PNNs on MNIST. We first trained a dense network on a subset of the MNIST handwritten digits dataset. Of the 10 types of 28x28 MNIST images, we only looked at images of 1s and 2s, assigning them the labels of y = 1 and y = 2, respectively. This resulted in n = 11,649 training samples and 2,167 test samples." (A data-preparation sketch follows the table.)
Dataset Splits | No | The paper specifies training and test samples for both synthetic and MNIST datasets but does not explicitly mention or detail a validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, or memory) used for running its experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "We train the PNN via stochastic gradient descent using batches of size 100, 100 training epochs, no momentum, and a learning rate of 10^-3 which decays at a rate of 0.95 every 390 epochs." (A training-configuration sketch follows the table.)
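
The Open Datasets row describes a two-class MNIST subset (digits 1 and 2, relabeled y = 1 and y = 2). The following is a minimal sketch of that filtering step, not the authors' code: the loader (`tensorflow.keras.datasets.mnist`), the flattening/rescaling, and the variable names are assumptions, and the exact sample counts depend on which MNIST train/test split is used.

```python
# Illustrative sketch (not the authors' code): build a two-class MNIST subset
# containing only images of the digits 1 and 2, keeping their original labels
# as y = 1 and y = 2. Loader and preprocessing choices are assumptions.
import numpy as np
from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

def keep_ones_and_twos(x, y):
    """Keep 28x28 images labeled 1 or 2; flatten to 784-dim and rescale to [0, 1]."""
    mask = (y == 1) | (y == 2)
    x = x[mask].reshape(-1, 28 * 28).astype(np.float32) / 255.0
    return x, y[mask].astype(np.float32)  # labels remain 1.0 and 2.0

x_tr, y_tr = keep_ones_and_twos(x_train, y_train)
x_te, y_te = keep_ones_and_twos(x_test, y_test)
print(x_tr.shape[0], "training samples;", x_te.shape[0], "test samples")
```

The counts printed here will differ slightly from the paper's n = 11,649 training samples if a different MNIST train/validation split is used than the one the authors relied on; the test-set count should be comparable.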
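The Experiment Setup row maps directly onto a standard optimizer configuration. Below is a minimal sketch assuming TensorFlow/Keras: the quoted text is ambiguous about the decay interval's unit, so the schedule here assumes a staircase decay every 390 optimizer steps, and the hidden-layer width and squared-error loss are placeholders rather than values taken from the paper.

```python
# Illustrative sketch (not the authors' code): SGD with batch size 100,
# 100 epochs, no momentum, learning rate 10^-3 decaying by 0.95.
# The decay interval of 390 is assumed to be measured in optimizer steps.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,  # learning rate of 10^-3
    decay_steps=390,             # assumed decay interval (steps)
    decay_rate=0.95,             # multiplicative decay of 0.95
    staircase=True,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.0)  # no momentum

# Hypothetical one-hidden-layer network standing in for the PNN; the paper's
# PNN additionally constrains each hidden unit's weight vector to a fixed line,
# which this plain Dense layer does not enforce.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=optimizer, loss="mse")
# model.fit(x_tr, y_tr, batch_size=100, epochs=100)  # using the subset built above
```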