C-DNN and C-Transformer: Mixing ANNs and SNNs for the Best of Both Worlds

Join us for a talk by Sangyeob Kim, a postdoctoral researcher at KAIST, on designing efficient accelerators that mix SNNs and ANNs.

Sangyeob and his team have developed a C-DNN processor that handles object-recognition workloads efficiently, achieving 51.3% higher energy efficiency than the previous state-of-the-art processor. They have since applied the C-DNN approach beyond image classification to other applications and developed the C-Transformer, which brings the technique to large language models (LLMs). With it, they show that the energy consumed by an LLM can be reduced by 30% to 72% compared to the previous state-of-the-art processor. The talk will introduce the processors developed for C-DNN and C-Transformer and discuss how neuromorphic computing can be used in real applications in the future.
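
The core idea behind the complementary (C-DNN) approach, as described in the abstract, is to split work between a conventional ANN datapath and a spike-based SNN datapath. The sketch below is a rough NumPy illustration of one plausible magnitude-based split, in which large activations take the multiply-accumulate ANN path and small activations are rate-coded into spikes and handled with additions only; the threshold, coding scheme, and function names are illustrative assumptions, not the actual C-DNN dataflow.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_by_magnitude(x, threshold):
    """Partition post-ReLU activations: large values -> ANN path, small values -> SNN path."""
    ann_part = np.where(x >= threshold, x, 0.0)
    snn_part = np.where(x < threshold, x, 0.0)
    return ann_part, snn_part

def ann_path(x_large, w):
    """Conventional dense multiply-accumulate on the large-magnitude activations."""
    return x_large @ w

def snn_path(x_small, w, timesteps=16):
    """Rate-code small activations with an integrate-and-fire neuron; every
    spike adds a weight row, so this path uses accumulations only."""
    acc = np.zeros(w.shape[1])
    membrane = np.zeros_like(x_small)
    for _ in range(timesteps):
        membrane += x_small                      # inject input each timestep
        spikes = (membrane >= 1.0).astype(float)  # fire when threshold is crossed
        membrane -= spikes                        # soft reset after a spike
        acc += spikes @ w                         # additions only, no multiplies
    return acc / timesteps                        # spike rate approximates x_small

# Toy layer: 64 post-ReLU activations, 10 outputs.
x = rng.random(64)
w = rng.standard_normal((64, 10))

x_ann, x_snn = split_by_magnitude(x, threshold=0.5)
y_hybrid = ann_path(x_ann, w) + snn_path(x_snn, w)
y_dense = x @ w
print("max deviation from dense result:", np.abs(y_hybrid - y_dense).max())
```

Under these assumptions, the SNN path trades a small quantization error (visible in the printed deviation) for sparse, accumulate-only arithmetic on the many small activations, while the ANN path preserves accuracy for the few large ones.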

About the Speaker

Sangyeob Kim (Student Member, IEEE) received the B.S., M.S., and Ph.D. degrees from the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea, in 2018, 2020, and 2023, respectively. He is currently a Post-Doctoral Associate at KAIST. His research interests include energy-efficient system-on-chip design, with a focus on deep neural network accelerators, neuromorphic hardware, and computing-in-memory accelerators.

Related Workshops

Building Neuromorphic Applications Using Talamo

This workshop offers a sneak peek into Innatera's technology stack, showing how to develop an application from scratch and deploy it on mixed-signal neuromorphic hardware.

Project Phasor - Kickoff

Brian Anderson and others discuss the newly launched Project Phasor, aiming to organize efforts towards neuromorphic and NeuroAI virtualization and compilation.

Hands-On with Sinabs and Speck

Join Gregor Lenz for an engaging hands-on session featuring Sinabs and Speck. Explore the world of neuromorphic engineering and spike-based machine learning.