Efficient and Robust Hardware for Neural Networks

  • Speaker: Prof. Grace Li, TU Darmstadt
  • Location: CES Seminar room
  • Date: Jul. 29th, 2025, 11:00 am

Abstract:
The last decade has witnessed significant breakthroughs by deep neural networks (DNNs) in many fields. These breakthroughs, however, have been achieved at extremely high computation and memory cost, and the increasing complexity of DNNs has led to a quest for efficient hardware platforms. In this talk, class-aware pruning is first presented to reduce the number of multiply-and-accumulate (MAC) operations in DNNs. Class-exclusion early exit is then examined to identify the target class before the last layer is reached. To accelerate DNNs, digital accelerators such as systolic arrays can be used; such an accelerator is composed of an array of processing elements that execute MAC operations efficiently in parallel. However, such accelerators suffer from high energy consumption. To reduce the energy consumption of MAC operations, we select quantized weight values with good power and timing characteristics and examine the encoding of MAC units. To reduce the energy consumption incurred by data movement, a logic design of neural networks is presented. Furthermore, the robustness of in-memory computing with RRAM crossbars under variations and noise will be discussed. Finally, ongoing research topics and future research plans will be summarized.
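
To make the accelerator dataflow mentioned above concrete, the following is a minimal Python sketch (for illustration only, not from the talk) of an output-stationary MAC array: each position (i, j) models a processing element that accumulates one MAC result per cycle while operands stream through the array.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy model of an output-stationary systolic array.

    Each entry acc[i, j] plays the role of one processing element's
    accumulator. In cycle t, every PE performs one multiply-and-
    accumulate (MAC) in parallel; a real array additionally pipelines
    operands through neighboring PEs, which this sketch abstracts away.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    acc = np.zeros((m, n))          # one accumulator per PE
    for t in range(k):              # cycle t: next operand pair arrives
        # PE (i, j) computes acc[i, j] += A[i, t] * B[t, j]
        acc += np.outer(A[:, t], B[t, :])
    return acc

# Usage: result matches a plain matrix product.
A = np.random.rand(4, 8)
B = np.random.rand(8, 3)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The sketch shows why energy work targets the MAC: every cycle, all m*n processing elements fire one multiply-and-accumulate, so the cost of a single MAC (and of moving its operands) dominates the accelerator's energy budget.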