DOI: https://doi.org/10.29363/nanoge.neumatdecas.2023.074
Publication date: 9th January 2023
The next step in the evolution of specialized hardware for AI is rooted in addressing the performance and efficiency losses caused by data movement between computational units and memory. This can be achieved through analog in-memory computing, which eliminates the von Neumann bottleneck and allows highly parallel computations directly in memory using memristive crossbar arrays. Although memristive crossbar arrays are a promising future analog technology for accelerating AI workloads, their inherent noise and non-idealities demand improved algorithmic solutions.

In this talk, we introduce the IBM Analog Hardware Acceleration Kit [1], a first-of-a-kind open-source toolkit to simulate crossbar arrays from within PyTorch and to conveniently estimate the impact of material properties and non-idealities on the accuracy of arbitrary ANNs (freely available at https://github.com/IBM/aihwkit). This platform enables understanding, evaluating, and experimenting with emerging analog AI accelerators. Our roadmap and capabilities include algorithmic innovations from IBM Research around hardware-aware training, mixed-precision training, advanced analog training optimizers using parallel rank-updates in analog, and inference on real research phase-change memory (PCM)-based analog AI chip prototypes, as well as allowing the research community to extend the toolkit with new devices, analog presets, algorithms, etc.

We will show an interactive demo of how the toolkit can be used online through our web front-end, the cloud composer. The composer provides a set of templates and a no-code experience to introduce the concepts of analog AI, configure experiments, and launch training and inference experiments.
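To make the core idea concrete, the following is a minimal, self-contained sketch of the kind of crossbar simulation described above: a matrix-vector multiply carried out on conductance pairs with simple programming (write) and read noise. All names, the conductance range, and the noise magnitudes here are illustrative assumptions for exposition; the actual aihwkit simulator models far richer device behavior and exposes it through PyTorch layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matvec(weights, x, g_min=1e-6, g_max=1e-4,
                    prog_noise=0.03, read_noise=0.01, rng=rng):
    """Simulate y = W @ x on a memristive crossbar with toy non-idealities.

    Signed weights are mapped to a differential pair of device
    conductances (G+, G-); Gaussian programming noise perturbs the
    stored conductances and read noise perturbs the analog
    multiply-accumulate. Purely illustrative, not the aihwkit API.
    """
    w_max = float(np.max(np.abs(weights))) or 1.0
    scale = (g_max - g_min) / w_max
    # Positive weights go on G+, negative weights on G-.
    g_pos = g_min + scale * np.clip(weights, 0, None)
    g_neg = g_min + scale * np.clip(-weights, 0, None)
    # Programming noise: device-to-device variation after writing.
    g_pos = g_pos * (1 + prog_noise * rng.standard_normal(g_pos.shape))
    g_neg = g_neg * (1 + prog_noise * rng.standard_normal(g_neg.shape))
    # Analog MVM via Ohm's and Kirchhoff's laws, then read noise.
    currents = (g_pos - g_neg) @ x
    currents = currents * (1 + read_noise * rng.standard_normal(currents.shape))
    return currents / scale  # convert currents back to weight units

W = np.array([[1.0, -2.0], [0.5, 3.0]])
x = np.array([1.0, 1.0])
print(crossbar_matvec(W, x))  # close to the exact result [-1.0, 3.5]
```

Running such a noisy forward pass inside training (so-called hardware-aware training) is one of the algorithmic techniques the toolkit supports for recovering accuracy on non-ideal analog devices.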