Publication date: 9th January 2023
Cell sorting and counting technology has been broadly adopted in medical diagnosis, cell-based therapy, and biological research. Conventional microscopy relies on image capture with a tightly constrained field of view, and even slow-moving targets can suffer motion blur, ghosting, and other movement-induced artifacts, all of which degrade the performance of machine learning models developed for cell sorting, detection, and tracking. Frame-based sensors are particularly susceptible to these issues, and overcoming them with modern yet conventional CMOS sensing technology is costly. Here we present an early proof-of-concept demonstration, with the overarching goals of curating a neuromorphic imaging cytometry (NIC) dataset and developing multimodal analysis techniques and associated deep-learning models. We work towards these goals by utilising an event-based camera to perform flow-imaging cytometry, capturing cells in motion and training neural networks to identify their morphology (size and shape) and identities. We propose that pairing our sorting strategy with a neuromorphic sensory system, or with a new class of event-based cameras customised for this purpose, will free such applications from framerate constraints and provide a cost-efficient, reproducible and high-throughput imaging mechanism. Although this early work targets cell sorting, the idea is a first stepping-stone towards a new type of high-throughput, automated high-content image analysis and screening instrument.
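To make the event-based pipeline described above concrete, the sketch below shows one plausible (not the authors') processing step: accumulating an event-camera stream into a two-channel count frame and passing it to a small CNN that classifies cell morphology. The function and class names (`events_to_frame`, `CellMorphologyCNN`), the sensor window size, and the number of classes are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (illustrative only): render events into a frame tensor and
# classify it with a toy CNN. All shapes and class counts are assumptions.
import numpy as np
import torch
import torch.nn as nn


def events_to_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate events (x, y, t, polarity) into a 2-channel count image.

    Channel 0 counts positive-polarity (ON) events, channel 1 negative-polarity
    (OFF) events; this is one common way to feed event streams to frame CNNs.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    pol = (events[:, 3] > 0).astype(int)          # 1 = ON event, 0 = OFF event
    np.add.at(frame, (1 - pol, ys, xs), 1.0)      # per-pixel event counts
    return frame


class CellMorphologyCNN(nn.Module):
    """Toy classifier over accumulated event frames (hypothetical classes)."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


if __name__ == "__main__":
    # Synthetic event batch: 500 events over a 64x64 sensor window.
    rng = np.random.default_rng(0)
    events = np.column_stack([
        rng.integers(0, 64, 500),      # x coordinates
        rng.integers(0, 64, 500),      # y coordinates
        np.sort(rng.random(500)),      # timestamps (unused in this sketch)
        rng.choice([-1, 1], 500),      # polarity
    ]).astype(np.float32)
    frame = torch.from_numpy(events_to_frame(events, 64, 64)).unsqueeze(0)
    logits = CellMorphologyCNN()(frame)
    print(logits.shape)                # torch.Size([1, 3])
```

In practice the accumulation window (or a voxel-grid representation preserving timestamps) and the network capacity would be tuned to the flow rate and cell types of interest; the point here is only to illustrate how an event stream can be bridged to a standard deep-learning classifier.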