Machine learning represents a potent new computing model and is quickly spreading through almost every class of system - cloud, PC, mobile, and IoT. But machine learning and deep neural networks have earned a reputation for heavy computation loads, large memory footprints, and high power consumption. Can the industry find a way to adapt machine learning to the most power- and cost-sensitive applications?
This session explores the emerging concept of “Tiny ML” - hardware platforms, network structures, optimization methods, and end applications that combine sophisticated learned inference models with minuscule power budgets. These systems, often aimed at power budgets as low as 1 mW, carry great promise for smarter sensor swarms, autonomous systems with years of battery life, and ubiquitous devices that add subtle intelligence to everyday interactions.
|7.1||Ultra-Low-Power Command Recognition for Ubiquitous Devices|
|Speaker:||Chris Rowen - BabbleLabs, Inc., Campbell, CA|
|Author:||Chris Rowen - BabbleLabs, Inc., Campbell, CA|
|7.2||Using Analog Computation in Flash Memory for Energy-efficient AI Inference|
|Speaker:||Manar El-Chammas - Mythic, Redwood City, CA|
|Author:||Manar El-Chammas - Mythic, Redwood City, CA
|7.3||Microwatts, Kilobytes, Megahertz, and Cents: Solving Real World Problems for AI at the Mobile Edge|
|Speaker:||Scott Hanson - Ambiq Micro, Austin, TX|
|Author:||Scott Hanson - Ambiq Micro, Austin, TX