Ilias Katsardis - Google, Inc., London, United Kingdom
Whether in semiconductor design, life sciences or deep learning, building high-quality products and getting to market quickly often demands extensive computer simulation. Enterprises compete in part on the scale, performance and cost-efficiency of their high-performance computing (HPC) environments. In the world of EDA, the stakes are even higher when it comes to device validation and regression testing, which demand massively compute-intensive operations. Even a minor change to a modern VLSI or SoC design comprising millions of gates can require millions of digital and analog simulations to be re-run to ensure that the device continues to function and has not regressed. Furthermore, with tape-out costs ranging from 10 to 15 million dollars, organizations cannot afford a mistake in device design and verification. Today’s EDA design teams are looking for new ways of working in order to achieve their goals at a faster pace, yet it is becoming increasingly difficult to run simulations on an existing on-premises cluster without projects taking weeks to complete. This is why the cloud is becoming an increasingly compelling option for EDA workloads. In addition, with advances in cloud offerings and more cloud-friendly licensing models from software vendors, the barriers to running EDA in the cloud are falling away.
This talk, presented jointly by Google and Univa, will discuss best practices for moving EDA workloads to the cloud. The presenters will begin by addressing the state of the cloud market and the opportunities and challenges of cloud computing within the unique realm of EDA, and then discuss solutions to ease an organization’s journey to the cloud. To demonstrate these latter points, a use case will be introduced from a leading application-specific integrated circuit (ASIC) developer that designs and produces high-end chips and semiconductor IP. With an ever-increasing workload and demand for capacity, Google Cloud’s service-on-demand model and Univa Grid Engine were selected to help the company extend its workloads to the cloud. The presenters will also discuss how these solutions helped the company manage its complex ASIC design workloads automatically, maximize shared resources and accelerate the execution of any container, application or service, all while increasing ROI and improving overall results.
Rob Lalonde Bio: Robert Lalonde is the vice president and general manager, cloud, at Univa. He brings over 25 years of executive management experience to lead Univa's accelerating growth and entry into new markets. Rob has held executive positions at multiple successful high-tech companies and startups. He possesses a unique, multidisciplinary skill set, having held roles in sales, marketing and business development, as well as CEO and board positions. Rob completed MBA studies at York University's Schulich School of Business and holds a degree in computer science from Laurentian University.
Ilias Katsardis Bio: Ilias Katsardis is an HPC product specialist (EMEA) at Google. In this role, Ilias brings over 14 years of experience in the cloud computing and high-performance computing industries to promoting Google Cloud’s state-of-the-art infrastructure for complex HPC workloads. Previously, he worked as an applications analyst at Cray Inc., where he was a dedicated analyst to the European Centre for Medium-Range Weather Forecasts (ECMWF); prior to that, he was an HPC application specialist at ClusterVision. Ilias also founded Airwire Networks in 2006.