Abstract:
Heterogeneous Multi-Processor Systems-on-Chip (HMPSoCs) combine multiple types of processors, such as CPUs, GPUs, and NPUs, on a single chip (die). HMPSoCs enable cloud-free Edge AI inference on mobile (embedded) devices. Edge AI inference is private, reliable, and efficient. However, single-processor inference may not be sufficient to meet extra-functional user expectations, such as performance, accuracy, and battery life. Multi-processor inference, which engages multiple processors in a single inference, can in theory provide superior extra-functional efficiency. However, multi-processor inference on HMPSoCs also incurs overheads that, unless carefully managed, counteract any potential gains. In this talk, I elaborate on the advances in computer systems (in the domain of EDA) made by my research group to enable low-latency, low-power multi-processor inference at the edge.
Bio:
Anuj Pathania is an Assistant Professor in the Parallel Computing Systems (PCS) group at the University of Amsterdam (UvA). His research focuses on the design of sustainable systems deployed in power-, thermal-, energy-, and reliability-constrained environments. He is an alumnus of the Chair of Embedded Systems (CES) at the Karlsruhe Institute of Technology (KIT), where he completed his Ph.D. under Prof. Jörg Henkel in 2018.