DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.008
Publication date: 9th January 2023
Although artificial deep neural networks promise revolutionary performance advantages on data-heavy and otherwise intractable workloads, their pre-deployment training on present digital hardware is already limited by energy consumption and cost. Prevailing backpropagation training algorithms are generally incompatible with emerging post-digital hardware because they require exact calculations that clash with the inherent imprecision of analog devices, thereby requiring innovations and co-design across all layers of the computing stack. Here, we introduce a biologically plausible activity-difference-based training scheme, memristor activity-difference energy minimization (MADEM), demonstrated on co-designed integrated analog memristor Hopfield networks. MADEM treats training of the network parameters as a constrained optimization problem and numerically calculates local gradients from behavioral differences measured in the same hardware being trained. MADEM is thus highly suited to, and exploits, analog hardware, and promises energy advantages of more than four orders of magnitude over digital approaches at scaled-up problem sizes.
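To illustrate the general idea of activity-difference training, the sketch below shows a minimal, software-only contrastive update on a simulated Hopfield network: gradients are estimated from the difference between the network's activities in a "free" phase and a "clamped" phase, rather than from an explicit backpropagated error. This is an assumed contrastive-Hebbian-style stand-in written in Python/NumPy; the actual MADEM procedure runs the two phases on memristor hardware and differs in its details.

```python
import numpy as np

def free_phase(W, x, n_out, n_steps=20):
    """Relax the network to a fixed point with only the input units clamped."""
    s = np.zeros(W.shape[0])
    s[:len(x)] = x                        # clamp input units
    for _ in range(n_steps):
        h = W @ s
        s[len(x):] = np.tanh(h[len(x):])  # update hidden and output units
    return s

def clamped_phase(W, x, y, n_steps=20):
    """Relax the network with both input and target (output) units clamped."""
    s = np.zeros(W.shape[0])
    s[:len(x)] = x                        # clamp input units
    s[-len(y):] = y                       # clamp output units to the targets
    free = slice(len(x), W.shape[0] - len(y))
    for _ in range(n_steps):
        h = W @ s
        s[free] = np.tanh(h[free])        # update only the hidden units
    return s

def activity_difference_update(W, x, y, lr=0.01):
    """Estimate a local gradient from the difference between clamped-phase
    and free-phase co-activities, and apply it as a weight update."""
    s_free = free_phase(W, x, len(y))
    s_clamped = clamped_phase(W, x, y)
    dW = np.outer(s_clamped, s_clamped) - np.outer(s_free, s_free)
    W += lr * (dW + dW.T) / 2             # keep the Hopfield weights symmetric
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W
```

In an analog implementation, the two relaxation phases are performed by the physical dynamics of the memristor crossbar itself, so the "gradient" is read out as a measured activity difference rather than computed digitally, which is what makes the approach tolerant of device imprecision.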