Embedded Machine Learning Lab


IoT devices increasingly rely on ML models to perform their operations. In doing so, they also generate large amounts of data that can be used to improve these models through on-device learning. Due to privacy constraints or communication limitations, devices need to perform the training with this data locally. However, neural network inference, and especially training, demands more resources (compute, memory, energy, etc.) than embedded devices typically provide, unless the available resources are considered in the design.


This lab provides insights into deploying machine learning algorithms to embedded devices.

Since embedded devices operate with far fewer resources than the commonly employed high-end GPUs, making neural networks run fast on embedded devices without sacrificing much accuracy is a challenging task. The lab covers training and inference on resource-constrained devices and introduces state-of-the-art methodologies such as pruning and quantization.
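To give a flavor of the two techniques named above, here is a minimal NumPy sketch (not part of the lab material): magnitude-based pruning zeroes out the smallest weights, and uniform 8-bit quantization maps the remaining float weights to `int8` with a single per-tensor scale. Real frameworks offer more sophisticated variants; this only illustrates the idea.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# Magnitude pruning: zero out the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0).astype(np.float32)

# Uniform 8-bit quantization: one scale for the whole tensor.
scale = float(np.abs(pruned).max()) / 127.0
quantized = np.round(pruned / scale).astype(np.int8)   # stored on-device
dequantized = quantized.astype(np.float32) * scale     # reconstructed values

print("sparsity:", float(np.mean(pruned == 0.0)))
print("max reconstruction error:", float(np.abs(pruned - dequantized).max()))
```

The pruned zeros can be skipped at inference time, and the `int8` tensor needs a quarter of the memory of `float32`, at the cost of a small, bounded rounding error.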


The students will learn about neural networks beyond theory, working with popular frameworks such as TensorFlow and studying how hyperparameters influence network behavior. Furthermore, students will learn about resource and accuracy trade-offs in neural networks and will design custom networks to meet given resource or accuracy requirements.
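As a toy illustration of how a hyperparameter influences training (not taken from the lab material), consider plain gradient descent on the one-dimensional objective f(w) = w² with gradient 2w: the learning rate alone decides whether the iterates converge or diverge.

```python
def gradient_descent(lr, steps=50):
    # Minimize f(w) = w^2 from w = 1; the update is w <- w - lr * 2w,
    # so after n steps w = (1 - 2*lr)^n. It converges iff |1 - 2*lr| < 1.
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(gradient_descent(0.1))  # converges toward the minimum at 0
print(gradient_descent(1.1))  # diverges, since |1 - 2*1.1| = 1.2 > 1
```

The same sensitivity appears, in higher-dimensional and noisier form, when tuning learning rates for real neural networks.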


This lab requires basic theoretical knowledge of neural networks and their training. Familiarity with Linux environments and Python is strongly advised, since both will be used extensively in the lab and are the de facto industry standard for machine learning research.


The students will meet every week; exact dates and times will be fixed in the kick-off meeting. Depending on the number of participants, students will work together in groups of 2-3.

Language of instruction: English
Organisational issues

Please register in ILIAS to participate.