MADEM: Energy-Efficient Activity Difference Training of Deep Neural Networks
Suhas Kumar
Sandia National Laboratories, Livermore, California, United States
Proceedings of Neuromorphic Materials, Devices, Circuits and Systems (NeuMatDeCaS)
VALÈNCIA, Spain, 23rd–25th January 2023
Organizers: Rohit Abraham John, Irem Boybat, Jason Eshraghian and Simone Fabiano
Invited Speaker, Suhas Kumar, presentation 008
DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.008
Publication date: 9th January 2023

Although artificial deep neural networks promise revolutionary performance on data-heavy and otherwise intractable workloads, their pre-deployment training on present digital hardware is already limited by energy consumption and cost. Prevailing backpropagation training algorithms are generally incompatible with emerging post-digital hardware because of the mismatch between their requirement for exact calculations and the inherent imprecision of analog devices, thereby requiring innovation and co-design at every layer of the computing stack. Here, we introduce a biologically plausible activity-difference-based training method demonstrated on co-designed integrated analog memristor Hopfield networks: memristor activity-difference energy minimization (MADEM). MADEM casts training of the network parameters as a constrained optimization problem and numerically calculates local gradients from behavioral differences measured in the same hardware targeted by the training. MADEM is therefore well suited to, and directly exploits, analog hardware, and it promises over four orders of magnitude energy advantage over digital approaches at scaled-up problem sizes.
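The abstract does not spell out the update rule, but activity-difference training of Hopfield networks is closely related to contrastive, equilibrium-style learning: the network is relaxed once freely and once with its outputs gently nudged toward the target, and each weight is updated from the difference in local co-activity between the two phases. The following Python sketch illustrates that general idea in simulation only; all names, hyperparameters, and the specific relaxation and update rules here are illustrative assumptions, not the paper's hardware implementation.

```python
# Minimal sketch of activity-difference training on a continuous Hopfield
# network, in the spirit of MADEM. Everything here (settling dynamics,
# nudging strength beta, learning rate, network sizes) is an assumption
# for illustration; the paper measures these differences on memristor
# hardware rather than simulating them.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2            # Hopfield weights are symmetric
np.fill_diagonal(W, 0.0)     # no self-connections

def settle(s, x, y=None, beta=0.0, steps=50, dt=0.1):
    """Relax the state toward an energy minimum. Inputs are clamped;
    if beta > 0, outputs are weakly nudged toward the target y."""
    for _ in range(steps):
        drive = np.tanh(W @ s)            # local field through a saturating activation
        s = s + dt * (drive - s)
        s[:n_in] = x                      # clamp inputs in both phases
        if beta > 0.0:
            s[-n_out:] += dt * beta * (y - s[-n_out:])
    return s

def train_step(x, y, beta=0.5, lr=0.05):
    global W
    s0 = 0.01 * rng.standard_normal(n)
    s_free = settle(s0, x)                        # free phase: unconstrained minimum
    s_nudge = settle(s_free.copy(), x, y, beta)   # nudged phase: outputs pulled to y
    # Contrastive, purely local update: the *difference* in co-activity
    # between the two phases approximates the loss gradient per weight.
    dW = (np.outer(s_nudge, s_nudge) - np.outer(s_free, s_free)) / beta
    W += lr * dW
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)

# Toy usage: drive a fixed input pattern toward a fixed target.
x = np.array([1.0, -1.0, 1.0, -1.0])
y = np.array([1.0, -1.0])
for _ in range(100):
    train_step(x, y)
print(settle(0.01 * rng.standard_normal(n), x)[-n_out:])  # outputs approach y
```

Because both phases run on the same substrate, the gradient estimate automatically absorbs device non-idealities, which is the property the abstract highlights as making this approach well matched to imprecise analog hardware.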
