This tutorial covers the basics of machine learning, systems and infrastructure considerations for performing machine learning at scale, specialized hardware architectures for neural networks, and approaches for using machine learning to build the next generation of EDA tools. The tutorial starts with Naïve Bayes, Support Vector Machines, and Decision Trees, followed by black-box classifier training with gradient descent. With examples, the tutorial illustrates feature selection, model validation, and how to avoid overfitting machine learning models. For high-dimensional data, dimensionality reduction becomes important for cutting computational and storage requirements. We discuss singular value decomposition (SVD) and principal component analysis (PCA) as dimensionality reduction techniques.
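As a concrete illustration of the PCA-via-SVD connection, the following sketch projects a toy dataset onto its top principal components using NumPy. The dataset and the choice of two retained components are illustrative assumptions, not part of the tutorial material.

```python
import numpy as np

# Hypothetical toy dataset: 100 samples with 5 features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Center the data, then factor it: X_centered = U @ diag(S) @ Vt.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# PCA: project onto the top-k right singular vectors (principal components).
k = 2
X_reduced = X_centered @ Vt[:k].T  # shape (100, 2)

# Fraction of total variance retained by the first k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

Because the singular values are sorted in decreasing order, truncating to the first `k` components keeps the directions of maximum variance, which is exactly what reduces storage and compute while preserving most of the signal.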
Next, the tutorial discusses k-means clustering for unsupervised learning, and efficient parallel algorithms for solving this problem on large datasets. The tutorial then proceeds to deep network training and simple convolutional neural networks. It covers common neural net architectures, including ResNet and Recurrent Neural Networks (RNNs), which are widely used for pattern recognition tasks. We then cover topics related to performing machine learning at scale on large datasets, starting with the performance and throughput limitations of traditional compute and storage systems, and the software frameworks that help solve these problems.
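The k-means algorithm mentioned above alternates between assigning points to their nearest centroid and recomputing each centroid as the mean of its assigned points (Lloyd's algorithm). A minimal NumPy sketch, with the two-blob dataset as an illustrative assumption:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, then recompute."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its cluster.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Illustrative data: two well-separated Gaussian blobs in 2-D.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               rng.normal(5.0, 0.5, size=(50, 2))])
centroids, labels = kmeans(X, k=2)
```

The assignment step is embarrassingly parallel over data points, which is why distributed variants (as discussed for large datasets) scale well: each worker labels its shard and only partial sums and counts are exchanged to update the centroids.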
We discuss one such framework, Apache Spark, and MLlib, a distributed machine learning library that simplifies the creation of large-scale parallel machine learning pipelines. Hardware-assisted speedups are becoming increasingly common for machine learning. We discuss how some of these algorithms take advantage of GPUs to deliver an order-of-magnitude speedup. We also discuss the emerging trend of purpose-built processors and hardware for accelerating deep learning. In particular, we discuss new approaches for efficiently representing and computing deep neural networks, with compression, weight sharing, and other optimizations that yield orders-of-magnitude gains in power efficiency and computational speed over conventional CPU/GPU architectures.
Finally, the tutorial describes several ways in which machine learning can be applied to solve common optimization and classification problems encountered in traditional CAD flows. We discuss several problems that can benefit from machine learning, including logic optimization, functional verification, and debug.
DAC is the premier conference devoted to the design and automation of electronic systems (EDA), embedded systems and software (ESS), and intellectual property (IP).
DAC 2017 will be held in Austin, Texas, at the Austin Convention Center.