Approximate Computing for Efficient Machine Learning

Energy efficiency is now a first-class design constraint across the ICT sector, and approximate computing has emerged as a design paradigm for building energy-efficient computing systems. A large body of resource-hungry applications (e.g., image processing and machine learning) exhibits an intrinsic resilience to errors: even when the underlying computations are performed approximately, the outputs remain useful and of acceptable quality to users. By exploiting this inherent error tolerance, approximate computing trades computational accuracy for savings in other metrics, such as energy consumption and execution time. Machine learning, a dominant workload in both data centers and embedded systems, is a natural candidate for approximate computing since, by definition, it delivers approximate results. Performance and energy efficiency (especially in embedded systems) are crucial for machine learning applications, and approximate computing techniques are therefore widely adopted in machine learning hardware (e.g., Google's TPU) to improve both.
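To make the accuracy-for-efficiency trade-off concrete, the following minimal sketch illustrates precision scaling, one common approximate-computing technique related to the reduced-precision arithmetic used in ML accelerators such as the TPU. The specific weights, inputs, and quantization step are illustrative assumptions, not taken from any real model.

```python
def quantize(values, scale):
    """Map floats to 8-bit integers: cheaper to store and multiply, at the cost of rounding error."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Recover approximate float values from the quantized integers."""
    return [q * scale for q in qvalues]

# Illustrative (assumed) layer weights and input activations.
weights = [0.12, -0.53, 0.97, 0.31]
inputs  = [1.5, -0.2, 0.8, 2.1]

scale = 0.05  # assumed quantization step; coarser steps save more but err more
qw = quantize(weights, scale)

# Exact vs. approximate dot product, the core operation of a neural-network layer.
exact  = sum(w * x for w, x in zip(weights, inputs))
approx = sum(w * x for w, x in zip(dequantize(qw, scale), inputs))

print(f"exact={exact:.4f}  approx={approx:.4f}  error={abs(exact - approx) / abs(exact):.1%}")
```

The approximate result stays within a few percent of the exact one, which is often acceptable for an error-resilient workload like inference, while integer arithmetic is substantially cheaper in energy and area than floating point.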