WEDNESDAY July 22, 9:20am - 10:00am
KEYWORD: ARCHITECTURE & SYSTEM DESIGN, ANY
EVENT TYPE: KEYNOTE
A Massive Wafer-Scale Supercomputer: A Radically New Paradigm for Deep Learning Acceleration
Speaker:
Andrew Feldman - Cerebras Systems Inc., Los Altos, CA
Deep learning has emerged as one of the most important silicon workloads of our time. Its computational demands are massive and ever-increasing: the compute required to train the largest deep learning models increased by 300,000x between 2012 and 2018. Traditional processors are not well-suited to meet this demand, mainly due to the overhead of moving massive amounts of weight and training data to and from the processor. This makes deep learning a prime candidate for custom ASIC hardware and innovative EDA software. Our startup company Cerebras has developed a new supercomputer system optimized for this deep learning task. This system is powered by the largest monolithic chip ever built: the Cerebras Wafer-Scale Engine (WSE). This is a single integrated 46,225 mm^2 silicon chip with 1.2 trillion transistors. Its array of 400,000 compute cores makes the chip 56x larger than today's largest GPU, with 3,000x more on-chip memory and more than 10,000x the memory bandwidth. The WSE delivers more compute, more memory, and more communication bandwidth to enable AI research at revolutionary speed and scale. In this talk, we will first describe the general architecture of the systolic computer hardware, including the enclosure that delivers and dissipates over 15 kilowatts of power. Next, we will dive into the technical complexities of the Cerebras compiler flow when mapping TensorFlow neural network computation graphs to the Cerebras WSE hardware. This mapping problem is both different from and strangely similar to traditional ASIC and FPGA design flows. In particular, we will highlight the unique technical challenges of the Cerebras place and route flow and compare and contrast the Cerebras compiler to other EDA tools in the ASIC and FPGA domains.
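To make the analogy concrete, here is a minimal toy sketch of the kind of mapping problem the abstract describes: assigning the operators of a neural-network computation graph to tiles of a 2D core fabric and scoring the result by total wire length, much like wirelength-driven placement in ASIC/FPGA flows. This is an illustrative assumption, not Cerebras' actual compiler; the graph, greedy heuristic, and function names are invented for the example.

```python
# Toy sketch (NOT the Cerebras compiler): place a small computation graph
# onto a 2D grid of cores, minimizing Manhattan distance to predecessors,
# the same objective shape a wirelength-driven placer uses.
from itertools import product

def place_greedy(graph, grid_w, grid_h):
    """Greedily place each op on the free tile that minimizes total
    Manhattan distance to its already-placed predecessors."""
    placement = {}                                   # op -> (x, y)
    free = sorted(product(range(grid_w), range(grid_h)))
    for op, preds in graph.items():                  # ops in topological order
        anchors = [placement[p] for p in preds if p in placement]
        def cost(tile):
            return sum(abs(tile[0] - x) + abs(tile[1] - y) for x, y in anchors)
        best = min(free, key=cost)                   # ties broken by tile order
        placement[op] = best
        free.remove(best)
    return placement

def wirelength(graph, placement):
    """Total Manhattan length of all producer -> consumer edges."""
    total = 0
    for op, preds in graph.items():
        x1, y1 = placement[op]
        for p in preds:
            x0, y0 = placement[p]
            total += abs(x1 - x0) + abs(y1 - y0)
    return total

# Tiny "network": input -> conv -> relu -> pool, plus a skip edge conv -> pool.
graph = {
    "input": [],
    "conv":  ["input"],
    "relu":  ["conv"],
    "pool":  ["relu", "conv"],
}
placement = place_greedy(graph, grid_w=4, grid_h=4)
print(placement, wirelength(graph, placement))
```

A real flow would of course handle hundreds of thousands of cores, kernel sizing, and congestion-aware routing; the sketch only shows why the problem rhymes with EDA placement.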



Biography: Andrew Feldman is co-founder and CEO of Cerebras Systems, a unicorn startup dedicated to accelerating artificial intelligence (AI) compute. Cerebras is a team of pioneering computer architects, computer scientists, deep learning researchers, and engineers of all types who have come together to build a new class of computer optimized for AI work. Prior to Cerebras, Andrew was co-founder and CEO of SeaMicro, the inventor of the microserver category and a pioneer in energy-efficient computation. SeaMicro was acquired by AMD for $357 million in 2012. Prior to SeaMicro, Andrew was Vice President of Marketing and Product Management at Force10 Networks (acquired by Dell for $800 million) and before that was Vice President of Corporate Marketing and Corporate Development for Riverstone Networks (NASDAQ: RSTN) from inception through IPO. Andrew is passionate about building teams that solve industry-transforming problems. He is a sought-after advisor to startups, and currently serves on the board of directors at Natron Energy and on the advisory boards of more than a dozen startups. Andrew is a frequent keynote speaker and guest lecturer at the Stanford Graduate School of Business. Andrew holds a bachelor's degree and an MBA from Stanford University.