Machine learning prediction on embedded systems

Machine learning (ML) applications have shown remarkable performance on various intelligent tasks, but high computational intensity and large memory requirements have hindered their widespread utilization in embedded and Internet of Things devices due to resource constraints.
Many optimization techniques have been proposed for domain-specific architectures. These optimizations affect an embedded device in different ways: each has its own trade-offs and impacts speed, accuracy, and energy efficiency differently. Understanding these trade-offs helps a programmer select from the arsenal of optimizations based on the target application. For example, a latency-sensitive real-time application may tolerate some inaccuracy, while another application may need high accuracy. We want to explore the optimization techniques needed to execute machine learning inference algorithms on embedded devices and characterize their trade-offs in speed, energy, and accuracy, as illustrated by the sketch below.
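As a concrete illustration of one such trade-off, the sketch below applies post-training dynamic-range quantization with TensorFlow Lite, a common optimization that shrinks model size and can speed up inference at a potential cost in accuracy. This is a minimal sketch, not the project's actual methodology; the toy model is a hypothetical placeholder for a real embedded workload.

```python
import tensorflow as tf

# Hypothetical toy model standing in for a real inference workload.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Baseline float32 conversion, kept for comparison.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
float_model = converter.convert()

# Post-training dynamic-range quantization: weights are stored as int8,
# trading a small accuracy loss for a smaller memory footprint and
# faster inference on resource-constrained devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quant_model = converter.convert()

# Model size is one axis of the speed/energy/accuracy trade-off space.
print(f"float32 model:   {len(float_model)} bytes")
print(f"quantized model: {len(quant_model)} bytes")
```

Measuring the other axes (latency, energy, and accuracy loss) on the target device is what lets a programmer decide whether an optimization like this suits a given application.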

Intern: Naveen Vedula
Faculty Supervisor: Arrvindh Shriraman
Province: British Columbia