Equivalence of ReLU and Single-Spike Neural Networks
Ana Stanojevic a,b, Stanislaw Wozniak a, Guillaume Bellec b, Giovanni Cherubini a, Angeliki Pantazi a, Wulfram Gerstner b
a IBM Research Europe – Zurich, Säumerstrasse 4, Rüschlikon, Switzerland
b Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne (EPFL)
Proceedings of Neuromorphic Materials, Devices, Circuits and Systems (NeuMatDeCaS)
València, Spain, 23rd-25th January 2023
Organizers: Rohit Abraham John, Irem Boybat, Jason Eshraghian and Simone Fabiano
Contributed talk, Ana Stanojevic, presentation 051
DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.051
Publication date: 9th January 2023

Artificial neural networks (ANNs) with rectified linear units (ReLUs) are the standard for solving many artificial intelligence (AI) tasks, and pretrained weights are often available. Spiking neural networks (SNNs) offer the potential for energy-efficient neuromorphic implementations; however, training such networks is challenging. In this work we show that fully-connected ReLU networks and single-spike SNNs are equivalent. Assume we are given the pretrained weights and biases of a fully-connected ReLU network with L layers. Each real-valued input of the ANN is converted to a spike time and fed to a spiking network whose neurons implement a specific non-leaky integrate-and-fire dynamics with linear postsynaptic potentials and a positive integration bias. A neuron spikes when its membrane potential reaches a predefined threshold; a very long refractory period then keeps the neuron silent, so each neuron fires at most once. All neurons that have not spiked before a given observation time are forced to spike at that time. The SNN parameters are set by our mapping function such that the activation of each neuron in the ReLU network can be recovered from the spike time of the corresponding neuron in the SNN. Owing to this theoretical equivalence, energy-efficient SNN classification of the MNIST dataset achieves exactly the same performance as the pretrained ReLU network, without any additional training or fine-tuning.
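As a concrete illustration of such a mapping, the sketch below simulates a single non-leaky integrate-and-fire neuron with linear (ramp) postsynaptic potentials and a positive integration bias, and compares the activation decoded from its spike time with the output of the corresponding ReLU unit. The coding interval, the slope normalization A, the threshold formula, and the read-out time are illustrative assumptions chosen so that the spike time is an affine function of the weighted input sum; they are not necessarily the exact parameter choices of the paper.

```python
import numpy as np

# Minimal sketch of the ReLU <-> single-spike equivalence for one neuron.
# Assumptions (not the paper's exact parameterization): inputs are coded in
# [t_min, t_max] as t_j = t_max - a_j, the total potential slope after all
# input spikes is normalized to A, and the read-out (forced-spike) time
# t_read is late enough that the threshold crossing follows all inputs.

rng = np.random.default_rng(0)
n_in = 5
W = rng.normal(0.0, 0.5, size=n_in)      # pretrained weights of one ReLU unit
b = rng.normal(0.0, 0.1)                 # pretrained bias
a_in = rng.uniform(0.0, 1.0, size=n_in)  # input activations in [0, 1]

# Reference ReLU unit.
a_relu = max(0.0, W @ a_in + b)

# Temporal coding: earlier spike <=> larger activation.
t_min, t_max = 0.0, 1.0
t_in = t_max - a_in

# Positive integration-bias slope alpha gives total slope A = W.sum() + alpha.
A = 1.0 + max(0.0, W.sum())
alpha = A - W.sum()
t_read = t_max + (np.abs(W).sum() + abs(b) + 1.0) / A  # safe observation time
theta = A * (t_read - t_min) - W.sum() * (t_max - t_min) - b  # threshold

# Non-leaky integrate-and-fire dynamics with linear postsynaptic potentials:
# V(t) = sum_j W_j * max(0, t - t_j) + alpha * max(0, t - t_min).
ts = np.arange(t_min, t_read, 1e-5)
V = (W * np.maximum(0.0, ts[:, None] - t_in)).sum(axis=1) \
    + alpha * np.maximum(0.0, ts - t_min)

crossed = np.nonzero(V >= theta)[0]
t_spike = ts[crossed[0]] if crossed.size else t_read  # forced spike at t_read

# Decode: the activation is (up to the slope A) the time by which the spike
# precedes the read-out; a forced spike at t_read encodes zero, i.e. ReLU.
a_snn = A * (t_read - t_spike)
print(f"ReLU: {a_relu:.5f}  SNN: {a_snn:.5f}")  # agree up to the time step
```

Stacking such layers, with each layer's read-out interval serving as the coding interval of the next, would reproduce the forward pass of the full L-layer ReLU network in spike times.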
