PEPITA - A Forward-Forward Alternative to Backpropagation

Explore PEPITA, a forward-forward training approach proposed as an alternative to backpropagation, presented by Giorgia Dellaferrera. Learn about its advantages and its implementation in PyTorch.
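For orientation before watching, here is a minimal sketch of the PEPITA update rule as described in Dellaferrera and Kreiman's ICML 2022 paper ("Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass"): a standard forward pass, a second forward pass on an input perturbed by the error projected through a fixed random matrix, and purely local weight updates. This is not the workshop's own code; the layer sizes, learning rate, scale of the fixed projection F, and the choice of which pass the output error is taken from are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n_in, n_hid, n_out = 784, 256, 10   # illustrative sizes (e.g. MNIST-like input)
lr = 0.01                            # illustrative learning rate

W1 = torch.randn(n_hid, n_in) * 0.01   # trainable hidden layer
W2 = torch.randn(n_out, n_hid) * 0.01  # trainable output layer
F = torch.randn(n_in, n_out) * 0.05    # fixed random projection of the error onto the input

def pepita_step(x, target):
    """One PEPITA update on a single (x, target) pair of 1-D tensors."""
    # 1) Standard forward pass (no gradients, no backward pass needed).
    h1 = torch.relu(W1 @ x)
    y = torch.softmax(W2 @ h1, dim=0)
    e = y - target                     # output error

    # 2) Modulated forward pass: the error is projected back onto the input.
    x_err = x + F @ e
    h1_err = torch.relu(W1 @ x_err)
    y_err = torch.softmax(W2 @ h1_err, dim=0)

    # 3) Local updates from the difference between the two passes.
    #    (Taking the output-layer error from the modulated pass is an assumption;
    #    variants of the rule differ on this detail.)
    W1.sub_(lr * torch.outer(h1 - h1_err, x_err))
    W2.sub_(lr * torch.outer(y_err - target, h1_err))
    return e

# Example usage on random data (placeholder for a real dataset).
x = torch.rand(n_in)
target = torch.zeros(n_out); target[3] = 1.0
err = pepita_step(x, target)
```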



About the Speaker

Giorgia Dellaferrera completed her PhD in computational neuroscience at the Institute of Neuroinformatics (ETH Zurich and the University of Zurich) and IBM Research Zurich with Prof. Indiveri, Prof. Eleftheriou, and Dr. Pantazi. Her doctoral thesis focused on the interplay between neuroscience and artificial intelligence, with an emphasis on learning mechanisms in brains and machines. During her PhD, she visited the lab of Prof. Kreiman at Harvard Medical School (US), where she developed a biologically inspired training strategy for artificial neural networks. Before her PhD, Giorgia obtained a master's degree in Applied Physics at the Swiss Federal Institute of Technology Lausanne (EPFL) and worked as an intern at the Okinawa Institute of Science and Technology, Logitech, Imperial College London, and EPFL.

Inspired? Share your work.

Share your expertise with the community by speaking at a workshop, student talk, or hacking hour. It’s a great way to get feedback and help others learn.

Related Workshops

Evolutionary Optimization for Neuromorphic Systems

Dive into evolutionary optimization techniques for neuromorphic systems with Catherine Schuman, an expert in the field. Watch the recorded workshop for valuable insights.

C-DNN and C-Transformer: mixing ANNs and SNNs for the best of both worlds

Join us for a talk by Sangyeob Kim, a postdoctoral researcher at KAIST, on designing efficient accelerators that mix SNNs and ANNs.

Does the Brain do Gradient Descent?

Explore the brain's potential use of gradient descent in learning processes with Konrad Kording in this engaging recorded session.