Proceedings of Materials for Sustainable Development Conference (MAT-SUS) (NFM22)
DOI: https://doi.org/10.29363/nanoge.nfm.2022.120
Publication date: 11th July 2022
Our brain relies on spiking neural networks for rapid and ultra-low-power information processing. Building artificial intelligence that leverages spiking networks with comparable efficiency requires instantiating vast spiking network models on neuromorphic hardware accelerators. However, direct end-to-end training of spiking neural networks remains challenging due to the non-differentiability of spiking neuron models.
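The non-differentiability can be made concrete with a minimal sketch: treating the spiking nonlinearity as a Heaviside step of the membrane potential (a common simplification; the threshold value and the finite-difference probe below are illustrative choices, not taken from the talk), its derivative vanishes almost everywhere, so plain gradient descent receives no learning signal.

```python
import numpy as np

def spike(v, theta=1.0):
    # Heaviside step nonlinearity: a spike is emitted once the
    # membrane potential v crosses the firing threshold theta.
    return (v >= theta).astype(float)

# Finite-difference "derivative" of the spike function: it is zero
# almost everywhere and ill-defined exactly at the threshold, which
# is why backpropagation through spikes fails out of the box.
v = np.linspace(0.0, 2.0, 11)
eps = 1e-6
grad = (spike(v + eps) - spike(v - eps)) / (2 * eps)
print(grad)  # zero everywhere except at the threshold crossing
```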
Surrogate gradients have emerged as a widespread solution to this problem. In my talk, I will briefly introduce the notion of surrogate gradient learning, showcase its robustness, and illustrate its self-calibration capabilities on analog neuromorphic hardware. I will further discuss the importance of network initialization on deep spiking neural network training and introduce effective bio-inspired initialization strategies. Finally, I will sketch how biologically plausible online learning rules naturally emerge through local approximations of surrogate gradients that exploit block-sparse Jacobians. This step is essential for learning from long temporal sequences and paves the way for exciting future on-chip online learning applications.
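The core idea of surrogate gradient learning can be sketched as follows: keep the hard spike in the forward pass but replace its ill-defined derivative with a smooth surrogate in the backward pass. The fast-sigmoid surrogate below (in the style of SuperSpike) and the parameter values are illustrative assumptions, not the specific formulation used in the talk.

```python
import numpy as np

def spike(v, theta=1.0):
    # Forward pass: hard Heaviside spike, as in the neuron model.
    return (v >= theta).astype(float)

def surrogate_grad(v, theta=1.0, beta=10.0):
    # Backward pass: fast-sigmoid surrogate derivative. It is smooth,
    # peaked at the threshold, and nonzero everywhere, so gradients
    # can flow through subthreshold neurons; beta sets its sharpness.
    return 1.0 / (1.0 + beta * np.abs(v - theta)) ** 2

v = np.array([0.2, 0.9, 1.1])
print(spike(v))           # [0. 0. 1.]
print(surrogate_grad(v))  # nonzero for every neuron, peaked near theta
```

In an autodiff framework the same idea is typically implemented as a custom backward rule for the spike function, so the rest of the network trains with ordinary backpropagation.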