DynapCNN - SynSense

Learn about SynSense's neuromorphic hardware: DynapCNN

DynapCNN At A Glance

Release Year: 2019
Status: Released
Chip Type: Digital
Software: Sinabs, Samna
Applications: Smart vision processing
Neurons: 1M
Synapses: >50G CNN, ~2M fully-connected
Weight bits: 8-bit integers
Activation bits: 1-bit spikes, 16-bit neuron states
Power: ~5 mW

The DynapCNN is an ultra-low power, event-driven neuromorphic processor chip for spiking neural networks that achieves sub-milliwatt computation using in-memory techniques. With 1M neurons, it can implement convolutional network models like LeNet and ResNet, interfacing directly to sensors like DVS cameras for low-latency, always-on vision applications.

The DynapCNN was developed by SynSense AG, a neuromorphic engineering startup based in Zurich, Switzerland, specifically for implementing spiking convolutional neural networks (SCNNs).

Overview

The DynapCNN chip contains 1 million spiking integrate-and-fire neurons and is fully configurable for implementing different SCNN architectures. It utilizes in-memory computing techniques to perform sparse, event-driven neural network computations, enabling extremely low power consumption in the sub-milliwatt range.
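The event-driven operation described above can be sketched with a minimal integrate-and-fire neuron in plain Python. This is an illustrative model only: the threshold, weight, and reset-by-subtraction behavior are common conventions, not the chip's actual parameters or circuits. The key point is that state is updated only when a spike arrives, which is what makes sparse inputs cheap to process.

```python
# Minimal sketch of an event-driven integrate-and-fire neuron.
# All parameters are illustrative, not taken from the chip.

class IFNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.v = 0.0  # membrane potential (state held in on-chip memory)

    def receive(self, weight):
        """Integrate one incoming spike; return True if the neuron fires."""
        self.v += weight              # computation happens only on an event
        if self.v >= self.threshold:
            self.v -= self.threshold  # reset by subtraction
            return True
        return False

neuron = IFNeuron(threshold=1.0)
spikes = [neuron.receive(0.4) for _ in range(5)]
# -> [False, False, True, False, True]
```

Because no work is done between events, power consumption scales with input activity rather than with a fixed clock rate.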

The chip comprises nine convolutional layers, each combined with pooling, which can be freely connected to one another. Any layer can additionally operate as a dense fully connected layer, typically for final-layer classification.
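The flexible wiring between the nine layers can be pictured as a routing table: each core names the cores it feeds, so straight chains, skip connections, and a final classification stage can all be expressed. The data structure below is purely hypothetical and for illustration; it is not the chip's actual configuration format.

```python
# Hypothetical sketch of the flexible layer routing described above:
# nine cores whose output spikes can be sent to any other core.

N_CORES = 9

def valid_routing(routing):
    """Check that every source and destination refers to one of the 9 cores."""
    return all(
        0 <= src < N_CORES and all(0 <= dst < N_CORES for dst in dsts)
        for src, dsts in routing.items()
    )

# A chain of conv layers with one skip connection (core 1 -> core 4),
# ending in core 8 used as a fully connected classification layer.
routing = {0: [1], 1: [2, 4], 2: [3], 3: [4], 4: [8]}
```

Expressing the network as a graph over a fixed pool of cores is what lets one chip implement many different SCNN topologies.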

The chip has a dedicated interface for connecting to dynamic vision sensors (DVS), such as the DAVIS sensor, allowing it to receive input spike streams directly and without pre-processing, which reduces latency. It supports common CNN layer types (convolution, ReLU, pooling) as well as popular network topologies such as LeNet, ResNet, and Inception.

The DynapCNN has a digital architecture and integrates synthesizable digital logic, making it scalable across technology nodes. Multiple chips can be daisy-chained together to build deeper multi-chip networks.

Development

The chip's architecture and circuits were designed to optimize performance, power, and area specifically for ultra-low-power SCNN inference, rather than for general-purpose computing.

The hardware is supported by a software framework called SINABS (https://github.com/synsense/sinabs), developed by SynSense for converting deep learning models from frameworks such as Keras and PyTorch into equivalent SCNNs. It also integrates with the Samna middleware (https://pypi.org/project/samna/), which handles interfacing the chip with sensors and visualization.
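The core idea behind this kind of ANN-to-SCNN conversion can be sketched in plain Python: a ReLU activation is replaced by an integrate-and-fire neuron whose average firing rate over many timesteps approximates the ReLU output. This is a toy illustration of the principle only, not the sinabs API.

```python
# Toy illustration of rate-based ANN-to-SNN conversion: an IF neuron
# driven by a constant input fires at a rate approximating relu(input).
# Plain-Python sketch; parameters are illustrative.

def relu(x):
    return max(0.0, x)

def if_rate(x, timesteps=1000, threshold=1.0):
    """Drive an IF neuron with constant input x; return its firing rate."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += x                    # integrate the (constant) input
        if v >= threshold:
            v -= threshold        # reset by subtraction
            spikes += 1
    return spikes / timesteps     # rate ~ relu(x) for 0 <= x <= threshold

# if_rate(0.3) is close to relu(0.3) = 0.3; negative inputs never fire.
```

Because the spiking network reproduces the trained ANN's activations as firing rates, the original weights can be reused without retraining, at the cost of running the network for enough timesteps to estimate the rates.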

Applications

The ultra-low latency and power consumption of the DynapCNN make it suitable for embedded and edge applications like:

  • Computer vision
  • Robotics
  • Internet-of-Things devices
  • Autonomous vehicles
  • Drones and other mobile platforms

A face recognition application for DVS cameras was demonstrated running on the DynapCNN at less than 1 mW average power. Such always-on vision applications are ideally suited to the chip's event-driven operation.

Publications

| Date | Title | Authors | Venue/Source |
| --- | --- | --- | --- |
| June 2019 | Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC | Qian Liu, Ole Richter, Carsten Nielsen, Sadique Sheik, Giacomo Indiveri, Ning Qiao | CVPR 2019 |