The DynapCNN is an ultra-low power, event-driven neuromorphic processor chip designed for implementing spiking convolutional neural networks (SCNNs). Developed by SynSense AG, a neuromorphic engineering startup based in Zurich, Switzerland, it achieves sub-milliwatt computation using in-memory computing techniques. With 1 million neurons, it can implement convolutional network models such as LeNet and ResNet and interfaces directly to sensors such as DVS cameras for low-latency, always-on vision applications.
Overview
The DynapCNN chip contains 1 million spiking integrate-and-fire neurons and is fully configurable, so it can implement a wide range of SCNN architectures. It uses in-memory computing techniques to perform sparse, event-driven neural network computation, enabling power consumption in the sub-milliwatt range.
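To make the event-driven computation model concrete, the following is a minimal sketch in plain Python (not the chip's actual circuitry; the threshold, kernel, and map sizes are illustrative) of how a single input spike updates integrate-and-fire neurons through one convolution kernel. Only the neurons whose receptive field contains the spike are touched, which is why sparse input activity translates directly into low power.

```python
import numpy as np

THRESHOLD = 1.0                 # illustrative spike threshold
KERNEL = np.full((3, 3), 0.4)   # one illustrative 3x3 convolution kernel
V = np.zeros((64, 64))          # membrane potentials of one output feature map

def process_spike(x, y):
    """Event-driven update: an input spike at pixel (x, y) only touches
    the neurons whose 3x3 receptive field contains (x, y)."""
    out_spikes = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            oy, ox = y + dy, x + dx
            if 0 <= oy < V.shape[0] and 0 <= ox < V.shape[1]:
                # Add the kernel weight to the membrane potential
                # (cross-correlation indexing for "same" padding).
                V[oy, ox] += KERNEL[1 - dy, 1 - dx]
                if V[oy, ox] >= THRESHOLD:
                    V[oy, ox] -= THRESHOLD        # reset by subtraction
                    out_spikes.append((ox, oy))   # emit an output spike
    return out_spikes
```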
The chip comprises nine convolutional layers, each combined with pooling, which can be freely connected to one another. A layer can additionally be operated as a dense, fully connected layer, e.g. for final-layer classification.
The chip has a dedicated interface for connecting to dynamic vision sensors (DVS), such as the DAVIS sensor, allowing it to receive input spike streams directly without any pre-processing and thereby reducing latency. It supports common CNN layer types such as convolution, ReLU, and pooling, as well as popular network models like LeNet, ResNet, and Inception.
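For illustration, DVS output is typically an address-event stream of per-pixel tuples; the sketch below (field names and encodings are illustrative, and it reuses the hypothetical process_spike from the sketch above) shows how such a stream can be consumed event by event, with no frame-accumulation step.

```python
from dataclasses import dataclass

@dataclass
class DVSEvent:
    # Typical address-event fields of a dynamic vision sensor;
    # exact bit-level encodings vary between sensors.
    x: int          # pixel column
    y: int          # pixel row
    timestamp: int  # event time, e.g. in microseconds
    polarity: bool  # True = brightness increase, False = decrease

# Because the chip consumes the spike stream directly, every event is
# processed as it arrives rather than being accumulated into frames.
events = [DVSEvent(12, 40, 1_000, True), DVSEvent(13, 40, 1_250, False)]
for ev in events:
    process_spike(ev.x, ev.y)  # event-driven update, per the sketch above
```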
The DynapCNN has a fully digital architecture built from synthesizable logic, making it scalable across technology nodes. Multiple chips can be daisy-chained to build deeper, multi-chip networks.
Development
SynSense designed the chip’s architecture and circuits to optimize performance, power, and area specifically for ultra-low power SCNN inference, rather than for general-purpose computing.
The hardware interfaces with SINABS (https://github.com/synsense/sinabs), a software framework developed by SynSense for converting deep learning models from frameworks such as Keras and PyTorch into equivalent SCNNs. It also integrates with the Samna middleware (https://pypi.org/project/samna/), which handles interfacing the chip with sensors and visualization.
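As a sketch of this conversion workflow, assuming the from_model API described in the sinabs documentation (the network architecture and layer sizes here are arbitrary examples):

```python
import torch.nn as nn
from sinabs.from_torch import from_model

# A small LeNet-style CNN; layer sizes are illustrative only.
ann = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, bias=False),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, bias=False),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 10, bias=False),
)

# Convert the (trained) ANN into an equivalent spiking network:
# ReLU activations are replaced by integrate-and-fire spiking layers.
snn = from_model(ann, input_shape=(1, 28, 28)).spiking_model
```

Mapping the converted network onto the chip’s layers and streaming events to and from the hardware is then handled through Samna; that step is device-specific and omitted here.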
Applications
The ultra-low latency and power consumption of the DynapCNN make it suitable for embedded and edge applications like:
- Computer vision
- Robotics
- Internet-of-Things devices
- Autonomous vehicles
- Drones and other mobile platforms
A face recognition application using a DVS camera was demonstrated running on the DynapCNN at an extremely low average power of under 1 mW. Such always-on vision applications are ideally suited to the chip’s event-driven operation.
Related publications
| Date | Title | Authors | Venue/Source |
|---|---|---|---|
| June 2019 | Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC | Qian Liu, Ole Richter, Carsten Nielsen, Sadique Sheik, Giacomo Indiveri, Ning Qiao | CVPR 2019 |