DOI: https://doi.org/10.29363/nanoge.matnec.2022.031
Publication date: 23rd February 2022
With the expansion of edge devices in our daily lives, the need for efficient and adaptive embedded systems is growing fast. However, current Von Neumann architectures and AI models are reaching their technical and conceptual limits, and fail to address the challenges of embedded intelligent systems. This is why we follow a brain-inspired computing approach based on the co-development of neuromorphic hardware and algorithms [1]. In particular, we focus on the local mechanisms of structural and synaptic plasticity that lead to the emergence of the global structure and function of the brain. Locality in synaptic plasticity means that all the information needed for the synapse update is locally available in time and space, i.e. based on the activity of the pre- and post-synaptic neurons that the synapse connects [2]. When implemented in parallel and distributed neuromorphic hardware, locality in time satisfies the real-time constraint of online learning and reduces the memory overhead, while locality in space satisfies the energy-efficiency constraint of on-chip learning. In addition, local synaptic plasticity rules are inherently unsupervised, opening the possibility of continual adaptation. Nevertheless, the main drawback of local unsupervised learning is its limited performance on complex pattern recognition problems, because there is no explicit optimization of a global objective function as in gradient back-propagation. Therefore, we propose multimodal association as a biologically plausible and practical solution to improve the performance of the self-organizing system since, in contrast with labeled data, multiple sensory modalities (e.g. sight, sound, touch) are freely available in the real-world environment [3].
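To make the notion of locality concrete, the sketch below shows a minimal rate-based Hebbian-style weight update that uses only the current pre- and post-synaptic activity, with no global error signal or stored activity history. It is an illustrative assumption, not the specific plasticity rule of [1-3]; the function name `local_hebbian_update` and the toy activity stream are hypothetical.

```python
import numpy as np

def local_hebbian_update(w, pre, post, lr=0.01):
    """Illustrative local plasticity rule (assumed, not from [1-3]).

    w    : current synaptic weight
    pre  : activity of the pre-synaptic neuron (local in space)
    post : activity of the post-synaptic neuron (local in space)
    lr   : learning rate

    The update depends only on the instantaneous pre/post activity
    (local in time) and on the two neurons the synapse connects
    (local in space), so it can run online and on-chip.
    """
    # Hebbian term with a decay toward the pre-synaptic activity,
    # which keeps the weight bounded without any global signal
    return w + lr * post * (pre - w)

# Toy usage: one synapse adapting online to a stream of activity
rng = np.random.default_rng(0)
w = 0.0
for _ in range(1000):
    pre = rng.random()        # pre-synaptic activity in [0, 1]
    post = float(pre > 0.5)   # simple post-synaptic response
    w = local_hebbian_update(w, pre, post)
print(f"learned weight: {w:.3f}")
```

Because each update reads and writes only the state attached to one synapse and its two neurons, such a rule maps naturally onto parallel and distributed neuromorphic hardware, in contrast with gradient back-propagation, which requires transporting a global error signal across the network.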