DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.051
Publication date: 9th January 2023
Artificial neural networks (ANNs) with rectified linear units (ReLUs) are standard in solving many artificial intelligence (AI) tasks, and pretrained weights are often available. Spiking neural networks (SNNs) offer the potential for energy-efficient neuromorphic implementation; however, training such networks is challenging. In this work we show that fully-connected ReLU networks and single-spike SNNs are equivalent. Assume we have the pretrained weights and biases of a fully-connected ReLU network with L layers. Each real value arriving at the input of the ANN is converted to a spike time, which is sent to a spiking neural network implementing a specific non-leaky integrate-and-fire dynamics with linear postsynaptic potential and positive integration bias. Each neuron spikes when its membrane potential reaches a predefined threshold, after which a very long refractory period is assumed, ensuring that the neuron stays silent. All neurons that do not spike before some observation time are forced to spike. The SNN parameters are set according to our mapping function such that the activation of a neuron in the ReLU network can be recovered from the spike timing of the corresponding neuron in the SNN. Owing to this theoretical equivalence between the two networks, it is possible to perform energy-efficient SNN classification of the MNIST dataset with exactly the same performance as the pretrained ReLU network, without any additional training or fine-tuning.
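To illustrate the kind of ANN-to-SNN mapping the abstract describes, the sketch below simulates a single non-leaky integrate-and-fire neuron with a linear postsynaptic potential and a positive integration bias, and checks that the activation decoded from its spike time reproduces a ReLU unit. The specific choices here are illustrative assumptions, not the paper's exact parameterization: inputs `x` in [0, 1] are encoded as spike times `t_j = T - x_j`, the bias is treated as an extra synapse firing at `T - 1`, the integration bias is set to `B = 1 - sum(w)` (requiring `sum(w) + b < 1` so that `B > 0`), and the threshold is chosen so that the neuron fires at `t_out = 2T - ReLU(w.x + b)`, with a forced spike at the observation time `2T` for silent neurons.

```python
import numpy as np

T = 4.0  # input window; observation time is 2*T (illustrative choice)

def snn_neuron_spike_time(w, b, x):
    """Event-driven simulation of one non-leaky IF neuron with linear PSP.

    Inputs are encoded as spike times t_j = T - x_j; the bias b is an
    extra synapse with weight b firing at t = T - 1. Parameters are set
    (hypothetically) so that t_out = 2*T - ReLU(w @ x + b), assuming
    sum(w) + b < 1 and pre-activations bounded by T.
    """
    w = np.append(w, b)               # fold the bias in as an extra input
    t_in = np.append(T - x, T - 1.0)  # input spike times
    W_sum = w.sum()
    B = 1.0 - W_sum                   # positive integration bias (assumed)
    theta = T * (2.0 - W_sum)         # firing threshold

    # Integrate the piecewise-linear potential between sorted input events.
    t, V, slope = 0.0, 0.0, B
    for j in np.argsort(t_in):
        tj = t_in[j]
        if slope > 0 and V + slope * (tj - t) >= theta:
            return t + (theta - V) / slope   # threshold crossed mid-segment
        V += slope * (tj - t)
        t = tj
        slope += w[j]                        # synapse j starts contributing
    # Final segment after all inputs arrived (slope = B + W_sum = 1 > 0).
    if V + slope * (2 * T - t) >= theta:
        return t + (theta - V) / slope
    return 2 * T                             # forced spike at observation time

# Compare decoded spike times against the ReLU activation.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=5)            # small weights keep B positive
b = 0.1
for _ in range(5):
    x = rng.uniform(size=5)
    a_relu = max(0.0, float(w @ x + b))
    a_snn = 2 * T - snn_neuron_spike_time(w, b, x)   # spike-time decoding
    assert abs(a_relu - a_snn) < 1e-9
print("spike-time decoding matches ReLU")
```

In this construction, a larger ReLU activation corresponds to an earlier output spike, and a neuron whose pre-activation is non-positive never reaches threshold on its own, so the forced spike at the observation time decodes exactly to zero, matching the rectification. Stacking such layers, with each layer's spike times serving as the next layer's input encoding, is what yields the network-level equivalence claimed in the abstract.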