New Efficient Multi-Spike Learning for Fast Processing and Robust Learning

Authors: Shenglan Li, Qiang Yu

AAAI 2020, pp. 4650–4657

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, experiments are conducted to evaluate the performance of our methods. Firstly, we give the details of our default settings. Next, we test the effects of different initial setups, followed by experiments on multi-classification and feature extraction. Finally, we examine the performance of our learning rules on some real-world datasets." |
| Researcher Affiliation | Academia | "Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China" |
| Pseudocode | Yes | "Algorithm 1 Event-driven computation scheme" (a minimal sketch of such a scheme follows the table) |
| Open Source Code | No | No explicit statement about releasing source code or a link to a code repository was found. |
| Open Datasets | Yes | "In this part, three datasets are selected from UCI repositories (Asuncion and Newman 2007) and are used in this experiment. [...] Here, a more complex dataset, MNIST, is used to evaluate the performance of our EML rule. The MNIST dataset contains a large number of handwritten digits from 0 to 9, where each example has an image size of 28 × 28 pixels (Larochelle et al. 2007)." |
| Dataset Splits | No | "60% samples are used as training while the rest as the test." (a reproducible split sketch follows the table) |
| Hardware Specification | Yes | "All experiments were conducted on a platform of Intel E5-2620@2.10GHz" |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks) were mentioned. |
| Experiment Setup | Yes | "The neuron is connected with N afferents, and each one fires at a Poisson rate of r_in = 4 Hz over a time window T. We set N = 500 and T = 500 ms. The initial weights are drawn from a random Gaussian distribution with both mean and standard deviation set to 0.01. Additionally, we set ϑ = 1 and λ = 0.0001. Parameter setups different from the default would be stated otherwise." (a setup sketch follows the table) |
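
The "Pseudocode" row confirms only that the paper presents Algorithm 1, an event-driven computation scheme; the algorithm itself is not reproduced in the excerpts. The Python sketch below illustrates the general idea under stated assumptions: a double-exponential PSP kernel with time constants TAU_M and TAU_S (assumed values, not given in the quotes), a subtractive reset, and threshold checks at event times only. The threshold ϑ = 1 matches the quoted setup. This is a hedged illustration, not the authors' algorithm.

```python
import numpy as np

TAU_M, TAU_S = 20.0, 5.0  # kernel time constants in ms (assumed values)
THETA = 1.0               # firing threshold, matching the paper's θ = 1

# Normalise the double-exponential PSP kernel to a unit peak.
_t_peak = TAU_M * TAU_S / (TAU_M - TAU_S) * np.log(TAU_M / TAU_S)
V0 = 1.0 / (np.exp(-_t_peak / TAU_M) - np.exp(-_t_peak / TAU_S))

def event_driven_run(events, theta=THETA):
    """Drive a multi-spike neuron with a time-sorted list of
    (time_ms, weight) input events.

    The potential is V(t) = m(t) - s(t), where m and s are exponential
    traces with time constants TAU_M and TAU_S. Between events the traces
    only decay, so the state is updated lazily at event times; that lazy
    update is what makes the scheme event-driven.
    """
    m = s = 0.0
    t_last = 0.0
    out_spikes = []
    for t, w in events:
        dt = t - t_last
        m *= np.exp(-dt / TAU_M)  # decay both traces since the last event
        s *= np.exp(-dt / TAU_S)
        m += w * V0               # add the new spike's kernel contribution
        s += w * V0
        t_last = t
        # Threshold is checked at event times only, and each output spike
        # subtracts theta from the potential (both are simplifications).
        while m - s >= theta:
            out_spikes.append(t)
            m -= theta
    return out_spikes
```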
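The "Experiment Setup" row does pin down the default input statistics and initialization, so those values translate directly into code. In the sketch below, N, T, the 4 Hz Poisson rate, the Gaussian weight initialization, and λ all come from the quoted text; the RNG seed is arbitrary because the paper states none, and λ appears only as a named constant since the regularized update rule itself is not quoted.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary; none is stated

N = 500         # number of afferents (paper default)
T = 500.0       # time window in ms (paper default)
RATE_HZ = 4.0   # Poisson firing rate per afferent (paper default)
LAMBDA = 1e-4   # the paper's λ; unused here, as the update rule is not quoted

def poisson_train(rate_hz, t_max_ms, rng):
    """Sample one homogeneous Poisson spike train on [0, t_max_ms)."""
    n_spikes = rng.poisson(rate_hz * t_max_ms / 1000.0)
    return np.sort(rng.uniform(0.0, t_max_ms, size=n_spikes))

spike_trains = [poisson_train(RATE_HZ, T, rng) for _ in range(N)]

# Initial weights: Gaussian with mean and standard deviation both 0.01.
weights = rng.normal(loc=0.01, scale=0.01, size=N)

# Merge all afferents into one time-sorted (time, weight) stream, the input
# format expected by the event-driven sketch above.
events = sorted(
    (t, weights[i]) for i, train in enumerate(spike_trains) for t in train
)
output_spikes = event_driven_run(events)  # from the previous sketch
```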
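Finally, the "Dataset Splits" row gives only a 60%/40% train/test ratio for the UCI experiments. A split along those lines might look like the following; the fixed seed and the unstratified shuffle are assumptions, since neither is described in the paper.

```python
import numpy as np

def split_60_40(n_samples, seed=0):
    """Return a shuffled 60%/40% train/test index partition.

    Only the 60/40 ratio comes from the paper; the fixed seed and the
    unstratified shuffle are assumptions added for reproducibility.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(0.6 * n_samples)
    return idx[:cut], idx[cut:]

# Example: a hypothetical 150-sample UCI dataset.
train_idx, test_idx = split_60_40(150)
```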