Xylo - SynSense

Learn about SynSense's neuromorphic hardware: Xylo

Xylo At A Glance

Release Year: 2022
Status: Released
Chip Type: Digital
Software: Rockpool
Applications: Smart sensing
Neurons: 1000
Synapses: 278000
Weight bits: 8
Activation bits: 8
Power: ~5 mW

Xylo is a 28 nm, 1000-neuron digital spiking neural network inference chip optimized for ultra-low-power edge deployment of trained SNNs, with a flexible architecture for mapping a variety of network topologies.

Xylo is a digital spiking neural network (SNN) inference processor developed by SynSense AG. It is designed to efficiently simulate leaky integrate-and-fire (LIF) neurons to implement deep spiking neural networks for edge processing applications.

Xylo is a series of ultra-low-power devices for sensory inference, featuring a digital SNN core adaptable to various sensory inputs such as audio and bio-signals. The SNN core uses an integer-logic CuBa-LIF (current-based leaky integrate-and-fire) neuron model with customizable parameters for each synapse and neuron, supporting a wide range of network architectures. The Xylo Audio 2 model (SYNS61201) specifically provides 8-bit synaptic weights, 16-bit synaptic and membrane states, two synaptic states per neuron, 16 input channels, 1000 hidden neurons, 8 output neurons with 8 output channels, a maximum fan-in of 63, and a total of 64,000 synaptic weights. For more detailed technical information, see https://rockpool.ai/devices/xylo-overview.html. The Rockpool toolchain contains quantization methods designed for Xylo and bit-accurate simulations of Xylo devices.
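The integer CuBa-LIF dynamics described above can be sketched as a single-neuron timestep update. The parameter names, the soft-reset behaviour, and the exact update ordering here are illustrative assumptions, not the published hardware pipeline; only the bit widths (8-bit weights, 16-bit states) follow the description above.

```python
def cuba_lif_step(v, i_syn, input_spikes, weights, dash_syn, dash_mem, threshold):
    """One integer-arithmetic timestep for a single CuBa-LIF neuron (sketch).

    v, i_syn      -- 16-bit membrane / synaptic state (Python ints here)
    input_spikes  -- 0/1 events, one per input channel
    weights       -- signed 8-bit synaptic weights, one per channel
    dash_syn/mem  -- bit-shift decay parameters (hypothetical names)
    """
    # Exponential state decay approximated with a right shift: x -= x >> dash
    i_syn -= i_syn >> dash_syn
    v -= v >> dash_mem

    # Integrate weighted input events into the synaptic current
    i_syn += sum(w * s for w, s in zip(weights, input_spikes))

    # Membrane integrates the synaptic current
    v += i_syn

    # Spike and subtract threshold (soft reset -- an assumption in this sketch)
    spike = v >= threshold
    if spike:
        v -= threshold
    return v, i_syn, int(spike)
```

Driving the neuron with a constant input event stream makes the membrane state charge until it crosses threshold and emits a spike, which is the basic behaviour the hardware core implements per neuron, per timestep.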

Key Features

Xylo is an application-specific integrated circuit (ASIC) chip optimized specifically for SNN inference. Key features include:

  • All-digital design using integer arithmetic for efficient simulation of LIF neuron dynamics
  • Supports up to 1000 LIF neurons with configurable synaptic and membrane time constants, thresholds, and biases for each neuron
  • 16 input channels and 8 output channels using asynchronous spiking events
  • Flexible network architecture including support for recurrent connectivity to map deep networks
  • Ultra-low power consumption: 219 μW idle power and 93 μW dynamic inference power, measured on an audio classification application

The chip is fabricated in a 28 nm CMOS process and occupies a 6.5 mm² die area. It can operate at clock frequencies up to 250 MHz.

Architecture

The core of Xylo consists of a bank of 1000 digital LIF neurons. Each neuron maintains 16-bit synaptic and membrane state variables to accumulate inputs and determine spike times. Exponential state decay is efficiently approximated using bit shift operations parameterized by time constants. Additional hardware includes dense input weights, sparse recurrent weights, and linear output weights to map arbitrary network topologies.
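The bit-shift decay mentioned above replaces a multiplication by exp(-dt/tau) with `x -= x >> dash`, i.e. a per-step decay factor of (1 - 2**-dash), so that tau ≈ dt · 2**dash. A quick numeric check of how close the integer approximation stays to the ideal exponential (the `dash` value and state size here are arbitrary examples):

```python
import math

def shift_decay(x, dash, steps):
    """Decay an integer state for `steps` timesteps using right shifts."""
    for _ in range(steps):
        x -= x >> dash          # remove a 2**-dash fraction each step
    return x

dash = 4                        # tau ~= 16 timesteps
x0 = 32768                      # example full-scale positive state
steps = 16                      # decay for roughly one time constant
approx = shift_decay(x0, dash, steps)
exact = x0 * math.exp(-steps / 2 ** dash)   # ideal exponential decay
```

After one time constant the shifted state lands within a few percent of the true exponential, which is why a shift (essentially free in digital logic) can stand in for a hardware multiplier.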

The input and output layers use asynchronous events to communicate spikes, avoiding the need to synchronize with an external clock. This event-based interface helps minimize total system power consumption.

Software Tools

Xylo leverages the Rockpool ecosystem for mapping and deploying SNNs. The Rockpool library and its Python API abstract SNN programming to a high level, enabling machine learning engineers to train networks with standard methods such as backpropagation. A compiler then maps optimized networks onto the Xylo substrate.
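The deployment step has to fit trained floating-point weights into the chip's signed 8-bit range. A minimal sketch of that kind of post-training quantization is shown below; this is an illustration of the general technique, not Rockpool's actual API (see https://rockpool.ai/devices/xylo-overview.html for the real toolchain).

```python
def quantize_weights(weights, n_bits=8):
    """Map float weights onto symmetric signed integers of width n_bits.

    Returns the integer weights plus the scale needed to recover
    approximate float values (w ~= q * scale).
    """
    q_max = 2 ** (n_bits - 1) - 1                    # 127 for signed 8-bit
    # `or 1` guards against an all-zero weight vector
    scale = (max(abs(w) for w in weights) or 1) / q_max
    quantized = [round(w / scale) for w in weights]
    return quantized, scale
```

The largest-magnitude weight maps to ±127 and everything else scales proportionally; the scale factor is kept so a bit-accurate simulation can relate integer activity back to the trained float network.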

Applications

The flexibility to implement generic deep network topologies makes Xylo suitable for a variety of edge deployments across domains such as audio, time series, and control. Example applications demonstrated include low-power keyword spotting, biosignal classification, and robotic control. Ultra-low idle and dynamic power consumption enables continuous background processing in power-constrained environments.

Publications

Hannah Bos, Dylan Muir, "Sub-mW Neuromorphic SNN audio processing applications with Rockpool and Xylo," arXiv, August 2022.