Speck is a fully event-driven neuromorphic vision system-on-chip (SoC) that combines an ultra-low-power spiking neural network processor with an event-driven vision sensor on a single chip. It was developed by SynSense AG, a neuromorphic engineering startup based in Zurich, Switzerland. Speck supports large-scale spiking convolutional neural networks (sCNNs) on a fully asynchronous chip architecture and is fully configurable, with a capacity of 328K spiking neurons. Its integrated state-of-the-art dynamic vision sensor (DVS) enables a fully event-driven, real-time, highly integrated solution for a variety of dynamic visual scenes. For typical applications, Speck can provide intelligence on the scene at just milliwatts, with a latency of 3.36 µs for a single event processed by a nine-layer network.
Overview
The Speck chip contains 328K spiking integrate-and-fire neurons and is fully configurable for implementing different sCNN architectures. It uses in-memory computing techniques to perform sparse, event-driven neural network computations, enabling power consumption in the sub-milliwatt range.
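As a conceptual illustration of this event-driven model, the following minimal Python sketch updates an integrate-and-fire neuron only when an input event arrives, so computation scales with activity rather than with clock ticks. The weight and threshold values are illustrative assumptions, not the chip's actual circuit parameters.

```python
# Minimal sketch of an event-driven integrate-and-fire (IAF) neuron.
# State is updated only when an input event (spike) arrives, so the cost
# of computation scales with activity rather than with time steps.
# Weight and threshold values are illustrative, not chip parameters.

def iaf_update(v_mem: float, weight: float, threshold: float = 1.0):
    """Process one incoming event; return (new membrane potential, fired?)."""
    v_mem += weight                       # integrate the weighted input event
    if v_mem >= threshold:                # fire once the threshold is crossed
        return v_mem - threshold, True    # subtract the threshold on reset
    return v_mem, False

# Example: three events arriving through a synapse with weight 0.4
v, spikes = 0.0, 0
for _ in range(3):
    v, fired = iaf_update(v, weight=0.4)
    spikes += fired
print(v, spikes)  # ~0.2 residual membrane potential, 1 output spike
```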
The chip comprises nine convolutional layers, each combined with pooling, that can be freely connected to one another. These layers can also be operated as dense, fully connected layers for final-stage classification. Additional input-preprocessing and decision-readout blocks make the chip integrable out of the box in diverse environments and use cases.
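As a rough illustration of such an architecture, a convolution-plus-pooling stack with a dense readout might be written in PyTorch as below before conversion to a spiking network. The channel counts, kernel sizes, and 128×128 two-polarity input shape are placeholder assumptions for the sketch, not fixed chip parameters.

```python
import torch.nn as nn

# Illustrative convolution + pooling stack of the kind Speck targets
# (the chip supports up to nine such layers, freely connectable);
# channel counts and kernel sizes are placeholder choices, not chip constants.
ann = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=3, padding=1, bias=False),   # 2 input channels: DVS polarities
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1, bias=False),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Conv2d(16, 16, kernel_size=3, padding=1, bias=False),
    nn.ReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10, bias=False),  # dense readout as the final classification layer
)
```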
The chip integrates a dynamic vision sensor (DVS), so input spike streams are generated and processed without ever leaving the chip, reducing latency. It supports common convolutional network layer types such as convolution, ReLU, and pooling, as well as popular network models such as LeNet, ResNet, and Inception.
Development
Speck was developed by SynSense AG, a neuromorphic AI startup based in Zurich, Switzerland. The chip's architecture and circuits were designed to optimize performance, power, and area specifically for ultra-low-power sCNN inference rather than for general-purpose computing.
The hardware interfaces with SINABS (https://github.com/synsense/sinabs), a software framework developed by SynSense for converting deep learning models from frameworks such as Keras and PyTorch into equivalent sCNNs. It also integrates with the Samna middleware (https://pypi.org/project/samna/), which handles interfacing the chip with sensors and visualization tools.
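A minimal sketch of this conversion-and-deployment workflow is shown below, continuing from the `ann` stack sketched above. It assumes the `from_model` converter from SINABS and the `DynapcnnNetwork` class from its Speck/DYNAP-CNN backend; exact function names, arguments, and device strings vary between library versions and devkits, so treat this as illustrative rather than canonical.

```python
from sinabs.from_torch import from_model
from sinabs.backend.dynapcnn import DynapcnnNetwork

# Convert the trained ANN (the `ann` stack sketched above) into an
# equivalent spiking network; from_model swaps each ReLU for a spiking
# integrate-and-fire layer.
snn = from_model(ann, input_shape=(2, 128, 128)).spiking_model

# Map the SNN onto the chip's convolutional cores; discretize=True
# quantizes weights and thresholds to the chip's integer precision.
dynapcnn_net = DynapcnnNetwork(snn, input_shape=(2, 128, 128), discretize=True)

# Deploy to a devkit via the Samna middleware; the device string below is
# illustrative and depends on the board and samna version in use.
dynapcnn_net.to(device="speck2fmodule")
```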
Applications
The ultra-low latency and power consumption of Speck make it suitable for embedded and edge applications such as:
- Computer vision
- Robotics
- Internet-of-Things devices
- Autonomous vehicles
- Drones and other mobile platforms
A face-recognition application for DVS cameras was demonstrated running on Speck at an extremely low average power (<1 mW). Such always-on vision applications are ideally suited to the event-driven capabilities of the chip.
Related publications
| Date | Title | Authors | Venue/Source |
|---|---|---|---|
| April 2022 | Speck: A Smart event-based Vision Sensor with a low latency 327K Neuron Convolutional Neuronal Network Processing Pipeline | Ole Richter, Yannan Xing, Michele De Marchi, Carsten Nielsen, Merkourios Katsimpris, Roberto Cattaneo, Yudi Ren, Qian Liu, Sadique Sheik, Tugba Demirci, Ning Qiao | arXiv (temporarily unavailable) |
| June 2019 | Live Demonstration: Face Recognition on an Ultra-Low Power Event-Driven Convolutional Neural Network ASIC | Qian Liu, Ole Richter, Carsten Nielsen, Sadique Sheik, Giacomo Indiveri, Ning Qiao | CVPR 2019 |