Event
Special Seminar: "Using neuromorphic sparsity for perception and AI inference" with Professor Tobi Delbruck
Tuesday, August 26, 2025
11:00 a.m.
Room 4105, Brendan Iribe Center
Brittney Brookins
brookins@umd.edu
Title: Using neuromorphic sparsity for perception and AI inference
Speaker: Professor Tobi Delbruck, Inst. of Neuroinformatics, UZH-ETH Zurich
Abstract: Activity-driven computation is key to brain power efficiency. The sparse, rapid output from neuromorphic event cameras enables faster vision systems that consume less power and operate effectively under challenging lighting conditions. I will demonstrate an event camera, then discuss how its sparse output inspired several generations of neural accelerators (developed within the Samsung Neuromorphic Processor global research project). These accelerators exploit various forms of dynamic sparsity to operate faster and more efficiently than conventional approaches, while retaining the compact area and high throughput of traditional neural accelerators. Spiking neural networks (SNNs) are popular in the neuromorphic community, but they are fundamentally incompatible with the requirement for abundant, fast, and cost-effective memory for states and weights. The convolutional and recurrent deep neural network (DNN) hardware accelerators I will present exploit spatial and temporal sparsity, similar to SNNs. However, they achieve state-of-the-art throughput, power efficiency, area efficiency, and low latency while utilizing DRAM for the large weight and state memory required by powerful DNNs. I will summarize how some of these concepts appear in the neural processing unit of the latest mass-production Samsung application processor.
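For readers curious how dynamic sparsity can cut computation, here is a minimal sketch (in Python with NumPy; the threshold, array shapes, and the delta_layer helper are illustrative assumptions, not the accelerator designs discussed in the talk) of a delta-style update that only propagates inputs whose values changed noticeably between frames:

```python
# Illustrative sketch of temporal sparsity: instead of recomputing a full
# matrix-vector product every frame, only inputs that changed by more than a
# threshold are propagated, in the spirit of event-driven processing.
import numpy as np

def dense_layer(x, W):
    """Conventional dense computation: every input contributes every frame."""
    return W @ x

def delta_layer(x, x_prev, y_prev, W, theta=0.05):
    """Delta update: propagate only inputs whose change exceeds theta."""
    delta = x - x_prev
    active = np.abs(delta) > theta               # sparse set of "events"
    y = y_prev + W[:, active] @ delta[active]    # update using active columns only
    return y, active.mean()                      # output and fraction of inputs processed

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))
x_prev = rng.standard_normal(256)
y_prev = dense_layer(x_prev, W)

# A new frame where only a few inputs change noticeably (temporal sparsity).
x = x_prev.copy()
x[:10] += 0.5

y_delta, frac_active = delta_layer(x, x_prev, y_prev, W)
print(f"max error vs. dense recompute: {np.abs(y_delta - dense_layer(x, W)).max():.2e}")
print(f"fraction of inputs processed: {frac_active:.2%}")
```

In this toy example only about 4% of the inputs are touched, while the result matches the dense recomputation; hardware accelerators of the kind described in the abstract exploit this same idea at the level of memory access and multiply-accumulate operations.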
Bio: Tobi Delbruck (IEEE M'99–SM'06–F'13) received the B.Sc. degree in physics from the University of California in 1986 and the Ph.D. degree from Caltech in 1993, as the first student in the newly established CNS program, with Carver Mead as his main Ph.D. supervisor. He is Professor of Physics and Electrical Engineering at ETH Zurich and holds a position with the Institute of Neuroinformatics, University of Zurich and ETH Zurich, where he has been since 1998. The Sensors group, which he co-directs with Prof. Shih-Chii Liu, works on a broad range of topics spanning device physics to computer vision and control, with a theme of efficient neuromorphic sensory processing and deep neural network theory and hardware accelerators.