DOI: https://doi.org/10.29363/nanoge.neuronics.2024.023
Publication date: 18th December 2023
The growing demand for data-driven applications in every layer of society is pushing the traditional von Neumann architecture, with its disjoint memory and processing units, to its limits: it suffers from large memory latency and limited data bandwidth, problems that are further intensified by the scaling limits of silicon transistors.
A new roadmap for beyond-CMOS computing is under discussion, and this is driving interest in exploiting the unique characteristics of emerging devices for information processing and memory [1]. One of the challenges is to identify emerging devices that can implement computing functions and architectures more efficiently than CMOS and Boolean logic.
Neuro-inspired chips that mimic the behavior and efficiency of biological networks of neurons and synapses are receiving particular attention, as most AI-related problems rely on neural-network mathematical models. Reducing data movement and parallelizing vector-matrix multiplications are key to establishing more energy-efficient AI hardware. Device technologies based on the metal oxides HfO2 and VO2, as well as phase-change memories (PCM) based on GeSbTe and Sb2Te3, are promising candidates for realizing the synaptic and neuronal functions of these artificial neural networks.
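As a minimal illustration of why crossbar memories parallelize vector-matrix multiplication, the analog operation can be modeled in NumPy: weights are stored as device conductances, input voltages are applied to the rows, and Ohm's law plus Kirchhoff's current law yield all column currents in a single step. This is a sketch under assumed, illustrative sizes and values, not code or data from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative crossbar dimensions and analog quantities (assumptions)
n_rows, n_cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # device conductances (S), one per crosspoint
v = rng.uniform(0.0, 0.2, size=n_rows)              # read voltages applied to the rows (V)

# Each column current is the sum over rows of G[i, j] * v[i]
# (Ohm's law per device, Kirchhoff's current law per column):
i_cols = G.T @ v  # the whole vector-matrix multiply happens "in memory"

# Reference check against an explicit double loop
ref = np.array([sum(G[i, j] * v[i] for i in range(n_rows)) for j in range(n_cols)])
assert np.allclose(i_cols, ref)
print(i_cols)
```

The point of the model is that the multiply-accumulate work scales with the physics of the array, not with data shuttled between a separate memory and processor.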
In this talk I will give an overview of our approach to accelerating deep neural networks for inference and training using Resistive Random-Access Memory (ReRAM) [2] and PCM [3], as well as the use of oscillatory neural networks based on VO2 oscillators for solving optimization problems [4].
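To make the optimization use case concrete, a network of coupled phase oscillators can be sketched with Kuramoto-style dynamics: antiferromagnetic coupling drives connected oscillators toward opposite phases, a second-harmonic injection term locks phases to 0 or pi, and binarizing the settled phases yields a graph partition (a max-cut heuristic). The graph, parameters, and update rule below are illustrative assumptions, not the VO2 device model from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Small illustrative coupling graph (symmetric adjacency, zero diagonal)
J = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
n = J.shape[0]

theta = rng.uniform(0, 2 * np.pi, n)  # oscillator phases (random start)
K, Ks, dt = 1.0, 0.5, 0.05            # coupling, 2nd-harmonic injection, time step

for _ in range(2000):
    diff = theta[:, None] - theta[None, :]
    # +K term pushes coupled oscillators toward anti-phase (minimizes
    # sum J_ij * cos(theta_i - theta_j)); -Ks*sin(2*theta) locks phases to 0 or pi
    dtheta = K * (J * np.sin(diff)).sum(axis=1) - Ks * np.sin(2 * theta)
    theta = (theta + dt * dtheta) % (2 * np.pi)

# Binarize the settled phases into two clusters -> a candidate partition
spins = np.where(np.cos(theta) > 0, 1, -1)
cut = sum(J[i, j] for i in range(n) for j in range(i + 1, n) if spins[i] != spins[j])
print("partition:", spins, "cut size:", cut)
```

The phase dynamics act as a physical gradient descent on an Ising-like energy, which is why compact relaxation oscillators such as VO2 devices are attractive for this class of problem.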