Machine learning algorithms can benefit greatly from processors designed to efficiently perform the massive number of calculations needed for training and inference. A critical design consideration for these processors is memory that provides the bandwidth necessary to keep up with fast compute. Come learn about HBM (High Bandwidth Memory), the memory technology behind the latest generations of GPUs, ASICs, and FPGAs that are accelerating your AI workloads.
HBM is implemented as part of a SiP (system-in-package) design. Drawing on more than three years of HBM manufacturing experience, Samsung will cover the essential considerations for implementing HBM across design, verification, and test.