DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.049
Publication date: 9th January 2023
Neuromorphic computing is widely regarded as a constrained approach that improves AI efficiency at the cost of proficiency, e.g. accuracy. This view separates neuromorphic use cases from mainstream AI and relegates them to niches where proficiency is not crucial. On the contrary, our recent work [1-5] shows that neuromorphic mechanisms do not constrain mainstream AI but rather expand and improve it on conventional measures of proficiency, while also improving efficiency. Specifically, we show that neuromorphic algorithms can outperform the state of the art of the broader, conventional machine learning field, beyond neuromorphic niches, in classification accuracy, robustness, inference speed, learning speed, and task reward. We demonstrate this in tasks such as keyword spotting, ImageNet classification, playing Atari games, and robot control, many of which are usually out of reach for neuromorphic models. We test these algorithms in settings of online adaptation; supervised, unsupervised, self-supervised, and reinforcement learning; meta-learning; and deep learning without backpropagation. This is achieved by exploiting biological mechanisms beyond spikes alone: the mechanisms we employ include spikes, Hebbian plasticity, short-term plasticity, and efference copies. The algorithms remain suitable for efficient neuromorphic hardware, and include a new method for decreasing the power consumption of weighting operations in analog synapses. Based on these results, we propose that exploiting the breadth of neuromorphic mechanisms, beyond spikes, in suitable applications is a path towards making neuromorphic computing the mainstream type of AI computing.
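To make two of the named mechanisms concrete, the sketch below is a minimal, generic illustration of a Hebbian weight update and a short-term-plasticity (depression/facilitation) synapse. It is not the specific algorithms of [1-5]; all function names, parameter values, and the Tsodyks-Markram-style STP formulation are assumptions chosen only for illustration.

```python
# Minimal, generic sketch (not the authors' published algorithms) of a Hebbian
# weight update and a short-term plasticity (STP) synapse. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)

def hebbian_update(W, pre, post, lr=0.01, decay=0.001):
    """One Hebbian step: dW = lr * (post x pre), with weight decay for stability."""
    return W + lr * np.outer(post, pre) - decay * W

def stp_step(x, u, spike, dt=1e-3, tau_d=0.2, tau_f=0.6, U=0.2):
    """Advance STP state (resources x, release probability u) by one time step."""
    x += dt * (1.0 - x) / tau_d      # recovery of synaptic resources
    u += dt * (U - u) / tau_f        # decay of facilitation toward baseline
    efficacy = 0.0
    if spike:                        # on a presynaptic spike:
        efficacy = u * x             #   effective transmitted fraction of resources
        x -= efficacy                #   deplete resources (depression)
        u += U * (1.0 - u)           #   increase release probability (facilitation)
    return x, u, efficacy

# Toy usage: 10 presynaptic and 5 postsynaptic rate units, plus one STP synapse.
W = rng.normal(scale=0.1, size=(5, 10))
pre = rng.random(10)
post = W @ pre
W = hebbian_update(W, pre, post)

x, u = 1.0, 0.2
for t in range(1000):
    spike = rng.random() < 0.05      # ~50 Hz Poisson presynaptic spiking at dt = 1 ms
    x, u, eff = stp_step(x, u, spike)
```

Both updates are local in space and time, which is what keeps such mechanisms compatible with efficient neuromorphic hardware.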