BrainScaleS-2 - Universität Heidelberg

Learn about Universität Heidelberg's neuromorphic hardware: BrainScaleS-2

BrainScaleS-2 At A Glance

Release Year: 2022
Status: Released
Chip Type: Mixed-signal
Software: hxtorch
Applications: Edge processing, robotics
Neurons: 512
Synapses: 131,072
On-Chip Learning: Yes
Power: ~1 W

BrainScaleS-2 is an accelerated spiking neuromorphic system-on-chip integrating 512 adaptive exponential integrate-and-fire neurons, 131,072 plastic synapses, embedded processors, and an event-routing network. It emulates complex neural dynamics roughly a thousand times faster than biological real time and supports exploration of synaptic plasticity rules. The architecture supports training of deep spiking and non-spiking neural networks using hybrid techniques such as surrogate gradients.

Developed By:

The BrainScaleS-2 accelerated neuromorphic system is an integrated circuit architecture for emulating biologically inspired spiking neural networks. It was developed by researchers at Heidelberg University and collaborating institutions. Key features of the BrainScaleS-2 system include:

System Architecture

  • Single-chip ASIC integrating a custom analog core with 512 neuron circuits, 131,072 plastic synapses, analog parameter storage, embedded processors for digital control and plasticity, and an event-routing network
  • Embedded processor cores are programmable in C++ through the system's software stack and support hybrid spiking and non-spiking neural network execution
  • Serves as a building block for larger multi-chip and wafer-scale systems

Neural and Synapse Circuits

  • Implements the Adaptive Exponential Integrate-and-Fire (AdEx) neuron model with individually configurable model parameters (see the sketch after this list)
  • Supports advanced neuron features such as multi-compartment and structured neurons
  • On-chip synapse correlation and plasticity measurements enable programmable spike-timing-dependent plasticity
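
To make these dynamics concrete, the AdEx model can be integrated numerically. The following is a minimal forward-Euler sketch in Python; the parameter values are standard textbook AdEx defaults chosen for illustration, not calibrated BrainScaleS-2 settings (on chip, the same equations are realized in analog circuits running roughly a thousand times faster than biological real time).

```python
import numpy as np

# Minimal forward-Euler simulation of the AdEx neuron model
# (Brette & Gerstner, 2005). Parameters are illustrative defaults,
# not calibrated BrainScaleS-2 values.
C     = 281e-12   # membrane capacitance [F]
g_L   = 30e-9     # leak conductance [S]
E_L   = -70.6e-3  # leak reversal potential [V]
V_T   = -50.4e-3  # threshold slope potential [V]
D_T   = 2e-3      # slope factor [V]
tau_w = 144e-3    # adaptation time constant [s]
a     = 4e-9      # subthreshold adaptation [S]
b     = 80.5e-12  # spike-triggered adaptation increment [A]
V_r   = E_L       # reset potential [V]
V_cut = -40e-3    # numerical spike cutoff [V]

dt, T = 1e-5, 0.5  # time step and duration [s]
I = 1e-9           # constant input current [A], above rheobase

V, w, spikes = E_L, 0.0, []
for k in range(int(T / dt)):
    # membrane equation with exponential spike-initiation term
    dV = (-g_L * (V - E_L) + g_L * D_T * np.exp((V - V_T) / D_T) - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w   # adaptation variable
    V += dt * dV
    w += dt * dw
    if V >= V_cut:                     # spike: reset and adapt
        V = V_r
        w += b
        spikes.append(k * dt)

print(f"{len(spikes)} spikes in {T} s")
```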

Hybrid Plasticity Processing

  • Digital control processors allow flexible implementation of plasticity rules bridging multiple timescales (a sketch of such a rule follows this list)
  • Massively parallel readout of analog observables enables gradient-based and surrogate-gradient optimization approaches
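
As an illustration of this hybrid loop, the sketch below runs a pair-based STDP rule in Python, in the style a control processor might execute: digital weight updates driven by correlation traces. The population sizes, trace model, and learning rate are illustrative assumptions, not the actual on-chip observables or firmware interface.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_pre, n_post = 256, 512            # illustrative population sizes
tau_trace = 20e-3                   # correlation-trace time constant [s]
lr, dt = 0.01, 1e-3                 # learning rate, control-loop period [s]
decay = np.exp(-dt / tau_trace)

weights = rng.uniform(0, 63, size=(n_post, n_pre))  # 6-bit-style weight range
x_pre = np.zeros(n_pre)             # pre-synaptic correlation traces
x_post = np.zeros(n_post)           # post-synaptic correlation traces

for _ in range(100):                # control-loop iterations
    pre = (rng.random(n_pre) < 0.02).astype(float)   # toy spike activity
    post = (rng.random(n_post) < 0.02).astype(float)

    # Decay and accumulate traces; on hardware such correlations are
    # measured by analog sensors inside each synapse circuit.
    x_pre = x_pre * decay + pre
    x_post = x_post * decay + post

    # Pair-based STDP: potentiate on post spikes, depress on pre spikes.
    dw = lr * (np.outer(post, x_pre) - np.outer(x_post, pre))
    weights = np.clip(weights + dw, 0.0, 63.0)       # limited weight range

print("mean weight after plasticity:", weights.mean())
```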

Applications and Experiments

  • Accelerated emulation of complex spiking neuron dynamics, multi-compartment models, and path integration circuits
  • Exploration of synaptic plasticity models and critical network dynamics at biological timescales
  • Training of deep spiking neural networks using surrogate gradient techniques (sketched after this list)
  • Non-spiking neural network execution leveraging the synaptic crossbar for analog matrix multiplication
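
To illustrate the surrogate-gradient idea behind such training (used, for example, in hardware-in-the-loop training with hxtorch), here is a self-contained PyTorch sketch: the forward pass applies a hard spike threshold, while the backward pass substitutes a smooth surrogate derivative. This is a generic illustration, not the hxtorch API; the surrogate shape, threshold, and network sizes are assumptions.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass; SuperSpike-style fast-sigmoid
    surrogate derivative in the backward pass."""

    BETA = 10.0  # surrogate steepness (illustrative)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()              # spike where v exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (SurrogateSpike.BETA * v.abs() + 1.0) ** 2

def lif_step(v, x, w, tau=0.9):
    """One leaky integrate-and-fire step with a differentiable spike."""
    v = tau * v + x @ w                     # leak plus synaptic input
    s = SurrogateSpike.apply(v - 1.0)       # unit threshold (illustrative)
    return v * (1.0 - s), s                 # reset neurons that spiked

# Toy usage: gradients flow through the spike nonlinearity to the weights.
torch.manual_seed(0)
w = torch.randn(8, 4, requires_grad=True)
v = torch.zeros(2, 4)
loss = torch.tensor(0.0)
for _ in range(10):
    v, s = lif_step(v, torch.randn(2, 8), w)
    loss = loss + s.sum()
loss.backward()
print("weight-gradient norm:", w.grad.norm().item())
```

On BrainScaleS-2 the same principle is applied with the analog core in the loop: spikes and observables recorded from the hardware form the forward pass, while surrogate gradients are computed on the host.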

The accelerated operation and flexible architecture facilitate applications in computational neuroscience research and novel machine learning approaches. The system design serves as a scalable basis for future large-scale neuromorphic computing platforms.

Publications

Date: January 2022
Title: The BrainScaleS-2 accelerated neuromorphic system with hybrid plasticity
Authors: Christian Pehle, Sebastian Billaudelle, Benjamin Cramer, Jakob Kaiser, Korbinian Schreiber, Yannik Stradmann, Johannes Weis, Aron Leibfried, Eric Müller, Johannes Schemmel
Venue/Source: arXiv