Deep Predictive Coding Network for Object Recognition

Authors: Haiguang Wen, Kuan Han, Junxing Shi, Yizhen Zhang, Eugenio Culurciello, Zhongming Liu

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | With benchmark datasets (CIFAR-10/100, SVHN, and MNIST), PCN was found to always outperform its feedforward-only counterpart (a model without any mechanism for recurrent dynamics), and its performance tended to improve given more cycles of computation over time.
Researcher Affiliation | Academia | School of Electrical and Computer Engineering, Purdue University; Weldon School of Biomedical Engineering, Purdue University.
Pseudocode | Yes | Algorithm 1: Deep Predictive Coding Network
Open Source Code | No | The paper does not provide an explicit statement or link indicating the availability of its source code.
Open Datasets | Yes | We trained and tested PCN for image classification with benchmark datasets: CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), and MNIST (LeCun et al., 1998).
Dataset Splits | Yes | The hyper-parameters for learning were set based on validation with 10,000 images in the training set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | We used PyTorch to implement, train, and test the models described above. (No version number for PyTorch is provided.)
Experiment Setup | Yes | We used mini-batch gradient descent to train PCN (or CNN) with a weight decay of 0.0005 and a momentum of 0.9. The learning rate was initialized as 0.01 and was divided by 10 when the error reached a plateau, after training for 80, 140, and 200 epochs. We stopped after 250 epochs. ... We used the Adam (Kingma & Ba, 2014) optimization with a weight decay of 0.0005 and an initial learning rate of 0.001 for a 20-10-10 epoch schedule. The exponential decay rates for the first and second moment estimates were 0.9 and 0.999, respectively.
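
The Pseudocode row above points to Algorithm 1 of the paper, whose recurrent dynamics (top-down predictions corrected by bottom-up prediction errors over repeated cycles) are only described qualitatively here. The fragment below is a minimal, hypothetical PyTorch sketch of such a predictive-coding block; the linear feedforward/feedback mappings, the update rate `beta`, and the `cycles` count are illustrative assumptions and do not reproduce the authors' Algorithm 1.

```python
import torch.nn as nn
import torch.nn.functional as F


class PCBlock(nn.Module):
    """Hypothetical predictive-coding block (illustrative sketch, not Algorithm 1)."""

    def __init__(self, in_dim=256, out_dim=128, cycles=4, beta=0.5):
        super().__init__()
        self.ff = nn.Linear(in_dim, out_dim)   # feedforward: input / error -> higher layer
        self.fb = nn.Linear(out_dim, in_dim)   # feedback: higher layer -> prediction of input
        self.cycles = cycles                   # number of recurrent computation cycles (assumed)
        self.beta = beta                       # update rate for error correction (assumed)

    def forward(self, x):
        r = F.relu(self.ff(x))                 # initial feedforward pass
        for _ in range(self.cycles):
            pred = self.fb(r)                  # top-down prediction of the lower layer
            err = x - pred                     # bottom-up prediction error
            r = F.relu(r + self.beta * self.ff(err))  # refine the representation with the error
        return r
```

Stacking several such blocks under a classifier head would mirror the behavior summarized in the Research Type row, where more cycles of computation tend to improve accuracy.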
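
The Dataset Splits row indicates that hyper-parameters were validated on 10,000 images drawn from the training set. The snippet below sketches one way to carve out such a split for CIFAR-10 with torchvision; the use of `random_split`, the fixed seed, and the minimal transform are assumptions rather than details stated in the paper.

```python
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # assumed minimal preprocessing

full_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

# Hold out 10,000 of the 50,000 training images for validation (split method assumed).
train_set, val_set = random_split(
    full_train, [40_000, 10_000], generator=torch.Generator().manual_seed(0)
)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=128, shuffle=False)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)
```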
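
The Experiment Setup row quotes two optimization recipes. The sketch below shows how those quoted hyper-parameters could be mapped onto PyTorch optimizers and a step learning-rate schedule; the placeholder model, the MultiStepLR milestones chosen for the Adam "20-10-10 epoch schedule", and the reuse of `train_loader` from the previous snippet are assumptions for illustration, not the authors' code.

```python
from torch import nn, optim
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder model; the actual PCN architecture is not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# SGD recipe quoted above: lr 0.01, momentum 0.9, weight decay 5e-4,
# learning rate divided by 10 after 80, 140, and 200 epochs, stopping at 250.
sgd = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
sgd_scheduler = MultiStepLR(sgd, milestones=[80, 140, 200], gamma=0.1)

# Adam recipe quoted above: lr 0.001, weight decay 5e-4, betas (0.9, 0.999);
# the milestones below are one possible reading of the "20-10-10 epoch schedule" (assumed).
adam = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999), weight_decay=5e-4)
adam_scheduler = MultiStepLR(adam, milestones=[20, 30], gamma=0.1)

for epoch in range(250):
    for images, labels in train_loader:   # train_loader as defined in the split sketch above
        sgd.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        sgd.step()
    sgd_scheduler.step()                  # step decay on the quoted epoch schedule
```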