Learning on Arbitrary Graph Topologies via Predictive Coding

Authors: Tommaso Salvatori, Luca Pinchetti, Beren Millidge, Yuhang Song, Tianyi Bao, Rafal Bogacz, Thomas Lukasiewicz

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons. This enables the model to be queried on stimuli with different structures, such as partial images, images with labels, or images without labels. We conclude by investigating how the topology of the graph influences the final performance, and comparing against simple baselines trained with BP.
Researcher Affiliation | Academia | Tommaso Salvatori (1), Luca Pinchetti (1), Beren Millidge (2), Yuhang Song (1,2), Tianyi Bao (1), Rafal Bogacz (2), Thomas Lukasiewicz (3,1). Affiliations: (1) Department of Computer Science, University of Oxford, UK; (2) MRC Brain Network Dynamics Unit, University of Oxford, UK; (3) Institute of Logic and Computation, TU Wien, Austria.
Pseudocode | No | The paper describes the inference and weight update rules using mathematical equations (Eq. 3 and 4) and explanatory text, but it does not provide them as formal pseudocode blocks or algorithms. (A hedged sketch of these rules is given after this table.)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include links to a code repository.
Open Datasets | Yes | We use MNIST and Fashion MNIST [30], fixing the first d nodes to the data point, and show how to perform the tasks of generation, denoising, reconstruction (without and with labels), and classification by querying the PC graph as described in Section 2. (An illustrative query sketch follows the table.)
Dataset Splits | No | The paper mentions 'training point', 'training set', and 'test image' (e.g., 'When presented with a training point s taken from a training set...', 'We provide the PC graph with half of a test image...'), implying distinct training and test sets. However, it does not give specific split details (percentages, sample counts, or an explicit validation set and its size) needed for reproducibility.
Hardware Specification | No | The paper mentions 'GPU computing support by Scan Computers International Ltd.' and 'the EPSRC-funded Tier 2 facility JADE (EP/P020275/1)'. However, it does not specify concrete hardware details such as exact GPU models, CPU types, or memory amounts used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8').
Experiment Setup | No | The paper states: 'Further details about other hyperparameters are given in the supplementary material.' and 'We performed a search across learning rates γ and α, and on the number of iterations per batch T. More details are given in the supplementary material, as well as a long discussion on how different parameters influence the final performance of the architecture.' This indicates that specific experimental setup details, such as hyperparameter values, are deferred to the supplementary material rather than given in the main text.
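
Since the paper gives the inference and weight-update rules only as equations (Eq. 3 and 4), the following is a minimal NumPy sketch of how a PC graph might be implemented, not the authors' code. It assumes a fully connected graph with node values x, weights W where W[j, i] carries the prediction from node j to node i, a tanh activation (the paper's f is generic), and illustrative hyperparameter names `gamma`, `alpha`, and `T`:

```python
import numpy as np

def f(x):
    """Activation function (the paper uses a generic f; tanh is an assumption)."""
    return np.tanh(x)

def df(x):
    """Derivative of the activation."""
    return 1.0 - np.tanh(x) ** 2

def errors(x, W):
    """Prediction errors eps_i = x_i - sum_j W[j, i] * f(x_j)."""
    return x - f(x) @ W

def infer(x, W, fixed, gamma=0.1, T=50):
    """Relax the unclamped node values by gradient descent on the energy
    E = 0.5 * sum_i eps_i**2; `fixed` marks nodes clamped to the stimulus."""
    for _ in range(T):
        eps = errors(x, W)
        # dE/dx_i = eps_i - f'(x_i) * sum_k W[i, k] * eps_k
        grad = eps - df(x) * (eps @ W.T)
        x = np.where(fixed, x, x - gamma * grad)
    return x

def update_weights(x, W, alpha=0.01):
    """One weight update after inference: dE/dW[j, i] = -f(x_j) * eps_i,
    so gradient descent gives W[j, i] += alpha * f(x_j) * eps_i."""
    eps = errors(x, W)
    return W + alpha * np.outer(f(x), eps)
```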
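
And a hypothetical usage example of the querying procedure quoted under Open Datasets: the first d nodes are fixed to the data point and the remaining free nodes are relaxed. The network sizes, the random weight initialisation, and the stand-in image below are all illustrative:

```python
# Toy classification query: clamp the first d "sensory" nodes to an image,
# relax the remaining nodes, then read the label nodes off the relaxed state.
rng = np.random.default_rng(0)
n, d = 794, 784                      # assumed sizes: 784 pixels + 10 label nodes
W = rng.normal(0.0, 0.05, size=(n, n))
np.fill_diagonal(W, 0.0)             # assumption: no self-connections

image = rng.random(d)                # stand-in for a flattened MNIST digit
x = np.zeros(n)
x[:d] = image
fixed = np.zeros(n, dtype=bool)
fixed[:d] = True                     # sensory nodes stay clamped during inference

x = infer(x, W, fixed)
predicted_label = int(np.argmax(x[d:]))  # read the 10 label nodes
```

On this reading, the other tasks the paper lists (generation, denoising, reconstruction) would differ only in which nodes are clamped, e.g., clamping half of the pixel nodes to reconstruct a partial test image.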