DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.006
Publication date: 9th January 2023
Stochastic neural networks have become the state-of-the-art approach for solving problems in machine learning, information theory, and statistics. The critical operation in such networks is the stochastic dot product. While there have been many demonstrations of dot-product circuits and, separately, of stochastic neurons, an efficient hardware implementation combining both functionalities is still missing. In my talk, I will discuss our recent work [1], [2], [3] addressing this need. I will start with compact, fast, energy-efficient, and scalable stochastic dot-product circuits based on passively integrated metal-oxide memristors. The high performance of these circuits stems from a mixed-signal design that enables energy-saving in-memory computing. The stochastic functionality is achieved by operating the circuit in a low signal-to-noise-ratio regime, utilizing noise that is intrinsic and/or extrinsic to the memory cell array. I will then discuss several application demonstrations based on passively integrated TiO2-x memristors, including Hopfield networks for solving optimization problems and a Boltzmann machine. In the proposed circuits, the neuron outputs can be selectively scaled, which allows adjusting the coupling between neurons and/or controlling the signal-to-noise ratio at runtime without rewriting the memory weights. This feature enables, for example, efficient implementation of different annealing schemes that improve solution quality for the optimization problems being solved. I will conclude with a comparison of the proposed approach to other solutions based on conventional and emerging technologies and a discussion of important future work.
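To illustrate the low-SNR operating principle described above, the following is a minimal NumPy sketch of a binary stochastic neuron driven by a noisy dot product. It is a software analogue only, not the circuit from [1]-[3]: the Gaussian noise model, the function name stochastic_neuron, and the noise_std parameter are illustrative assumptions standing in for the intrinsic and/or extrinsic noise of the memristor array.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_neuron(w, x, noise_std=0.5):
    """Binary stochastic neuron (illustrative software analogue).

    The dot product is computed in a deliberately low
    signal-to-noise-ratio regime, so thresholding the noisy result
    fires probabilistically. noise_std stands in for the noise of
    the memory cell array (assumed Gaussian here for illustration).
    """
    pre_activation = np.dot(w, x) + rng.normal(0.0, noise_std)
    return 1 if pre_activation > 0 else 0

# Averaged over many trials, the firing rate is a sigmoid-like
# function of the noiseless dot product, as needed for stochastic
# (e.g., Boltzmann-machine-style) neuron updates.
w = np.array([0.8, -0.3, 0.5])
x = np.array([1.0, 1.0, 0.0])
rate = np.mean([stochastic_neuron(w, x) for _ in range(10_000)])
print(f"noiseless dot product: {np.dot(w, x):+.2f}, firing rate: {rate:.2f}")
```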
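The runtime-scaling feature can likewise be sketched in software. Below, a stochastic Hopfield network minimizes an Ising-style energy, and annealing is implemented purely by ramping a gain g applied to the dot product (equivalent to scaling the neuron outputs before they re-enter the array) rather than by rewriting the weights: a low gain early in the run means low effective SNR and strong exploration, while a high gain late in the run freezes the network into a low-energy state. The coupling matrix, gain schedule, and all parameters here are illustrative assumptions, not values from [1]-[3].

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal_hopfield(J, steps=20_000, g_start=0.2, g_end=5.0, noise_std=1.0):
    """Stochastic Hopfield network minimizing E(s) = -1/2 s^T J s
    over s in {-1, +1}^n (illustrative sketch).

    Raising the gain g during the run raises the effective
    signal-to-noise ratio, acting as an annealing schedule without
    any change to the stored weights J.
    """
    n = J.shape[0]
    s = rng.choice([-1.0, 1.0], size=n)
    for g in np.geomspace(g_start, g_end, steps):
        i = rng.integers(n)                    # random asynchronous update
        local_field = g * np.dot(J[i], s)      # gain-scaled dot product
        s[i] = 1.0 if local_field + rng.normal(0.0, noise_std) > 0 else -1.0
    return s

# Example: a small random symmetric coupling matrix with zero diagonal,
# as in a max-cut-style optimization problem.
n = 8
A = rng.normal(size=(n, n))
J = (A + A.T) / 2
np.fill_diagonal(J, 0.0)
s = anneal_hopfield(J)
print("state:", s, " energy:", -0.5 * s @ J @ s)
```

Running the schedule slowly (more steps, gentler gain ramp) generally improves solution quality, which is the software counterpart of the annealing approaches discussed in the talk.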