MONDAY July 20, 4:00pm - 5:30pm
Tutorial 7 Part 2: Toward a New Era of Compute-in-Memory: From Memory Devices to Applications

Michael Niemier - Univ. of Notre Dame, Notre Dame, IN
Shimeng Yu - Georgia Inst. of Technology, Atlanta, GA
Onur Mutlu - ETH Zurich, Switzerland
Deliang Fan - Arizona State Univ., Tempe, AZ

Most computers today employ a von Neumann architecture, with separate computing and memory units connected via buses. This results in a memory wall, with undesirable side effects such as long memory access latencies (relative to processor speed), limited memory bandwidth, and energetically expensive data transfer. Using machine learning as a representative example, an integer-based multiply-and-accumulate (MAC) operation may require ~3 pJ of energy, whereas fetching the corresponding weight value from external DRAM may require ~640 pJ [0]. As such, there is growing interest in logic-in-memory (LiM)/compute-in-memory (CiM) architectures (hereafter referred to as CiM) to circumvent the memory wall. The key concept in CiM is to embed logic units within the memory that can process data in place and simultaneously exploit the large internal memory bandwidth to achieve high degrees of parallelism. CiM could lead to remarkable savings in both the energy and latency associated with processor-memory communication and data transfer, and could significantly improve overall system performance.
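As a back-of-the-envelope sketch (not a measurement), the figures quoted above already show why data movement dominates; the 3 pJ and 640 pJ values below are simply the illustrative numbers from the text:

```python
# Illustrative energy figures from the text above (not measured values).
E_MAC_PJ = 3.0           # ~3 pJ per integer multiply-and-accumulate (MAC)
E_DRAM_FETCH_PJ = 640.0  # ~640 pJ per weight fetched from external DRAM

# One off-chip weight fetch costs roughly two orders of magnitude more
# than the computation it feeds.
ratio = E_DRAM_FETCH_PJ / E_MAC_PJ
print(f"One DRAM weight fetch ~= {ratio:.0f}x the MAC energy")  # ~213x

# For a hypothetical layer of 1M MACs whose weights all come from DRAM,
# the energy budget is dominated by data movement, not compute.
n_macs = 1_000_000
compute_uj = n_macs * E_MAC_PJ / 1e6        # pJ -> uJ
movement_uj = n_macs * E_DRAM_FETCH_PJ / 1e6
print(f"compute: {compute_uj:.0f} uJ, movement: {movement_uj:.0f} uJ")
```

This ~200x gap between compute and off-chip data movement is the quantitative core of the memory-wall argument.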

To date, both CMOS and post-CMOS memory technologies have been explored for CiM. For example, by adding one or two transistors to a conventional 6T SRAM cell, it is possible to perform Boolean logic in-memory [5]. Alternatively, in-DRAM logic typically leverages multi-row activation and the charge-sharing property between selected memory cells to perform bulk bitwise logic operations [6]. Emerging post-CMOS non-volatile memories – e.g., magnetic random access memory (MRAM), resistive RAM (ReRAM), and memories based on ferroelectric field effect transistors (FeFETs) – are also being explored in the context of CiM architectures. These devices could offer additional benefits, including (i) non-volatility, (ii) zero leakage power in un-accessed bit-cells, and (iii) high integration density. Recent work suggests that CiM platforms based on these devices could help accelerate workloads in application spaces spanning bioinformatics [1], one- and few-shot learning [2], [3], and other machine learning applications [4].
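As a functional sketch of the multi-row-activation idea mentioned above [6] (a toy model of the logic, not of any specific circuit): simultaneously activating three DRAM rows causes charge sharing to compute a bitwise majority (MAJ), and bulk AND/OR then follow by preloading the third row with all-0s or all-1s. The row widths and helper names here are illustrative assumptions:

```python
# Toy functional model of triple-row-activation bulk bitwise logic:
# MAJ(a, b, c) emerges from charge sharing; AND and OR are obtained by
# presetting the third row to all zeros or all ones.
ROW_WIDTH = 8  # bits per word in this toy model (an assumption)

def majority(a, b, c):
    """Bitwise MAJ(a, b, c), computed word-by-word across a row."""
    return [(x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c)]

def bulk_and(a, b):
    zeros = [0x00] * len(a)                  # third row preset to all 0s
    return majority(a, b, zeros)

def bulk_or(a, b):
    ones = [(1 << ROW_WIDTH) - 1] * len(a)   # third row preset to all 1s
    return majority(a, b, ones)

row_a = [0b1100, 0b1010]
row_b = [0b1010, 0b0110]
print(bulk_and(row_a, row_b))  # [8, 2]   i.e., [0b1000, 0b0010]
print(bulk_or(row_a, row_b))   # [14, 14] i.e., [0b1110, 0b1110]
```

The appeal of this style of operation is that an entire DRAM row (kilobytes wide) is processed in one activation, which is the source of the massive internal bandwidth and parallelism that CiM exploits.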

This tutorial comprises four talks that discuss how to design logic-in-memory (LiM)/compute-in-memory (CiM) systems in a bottom-up fashion. Each talk will discuss the prospects and perspectives of CiM at a different layer of the stack, covering the pros and cons of the underlying memory devices, CiM circuit designs, CiM architectures, and applications thereof.

Michael Niemier bio: Niemier is currently an Associate Professor at the University of Notre Dame. His research interests include designing, facilitating, benchmarking, and evaluating circuits and architectures based on emerging technologies. Currently, Niemier's research efforts focus on new transistor technologies, as well as devices based on alternative state variables such as spin. He is the recipient of an IBM Faculty Award, the Rev. Edmund P. Joyce, C.S.C. Award for Excellence in Undergraduate Teaching at the University of Notre Dame, and best paper awards (e.g., at ISLPED 2018). Niemier has served on numerous technical program committees for design-related conferences (including DAC, DATE, and ICCAD), and has chaired the emerging technologies track at DATE, DAC, and ICCAD. He is an associate editor for IEEE Transactions on Nanotechnology, as well as the ACM Journal on Emerging Technologies in Computing Systems. He is a senior member of the IEEE.

Shimeng Yu bio: Yu is an associate professor of electrical and computer engineering at the Georgia Institute of Technology. He received the Ph.D. degree in electrical engineering from Stanford University in 2013. From 2013 to 2018, he was an assistant professor at Arizona State University. Prof. Yu’s research expertise is in memory technologies (e.g., SRAM, RRAM, ferroelectrics) for applications such as deep learning accelerators, neuromorphic computing, monolithic 3D integration, and hardware security. Among Prof. Yu’s honors, he was a recipient of the NSF CAREER Award in 2016, the IEEE EDS Early Career Award in 2017, the ACM SIGDA Outstanding New Faculty Award in 2018, and the SRC Young Faculty Award in 2019. Prof. Yu has served on the technical program committees of top-tier conferences such as IEDM, VLSI, ISCAS, DAC, DATE, and ICCAD. He is a senior member of the IEEE.

Onur Mutlu bio: Mutlu is a Professor of Computer Science at ETH Zurich. He is also a faculty member at Carnegie Mellon University, where he previously held the Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, hardware security, and bioinformatics. A variety of techniques he, along with his group and collaborators, has invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the ACM SIGARCH Maurice Wilkes Award, the inaugural IEEE Computer Society Young Computer Architect Award, the inaugural Intel Early Career Faculty Award, the US National Science Foundation CAREER Award, the Carnegie Mellon University Ladd Research Award, faculty partnership awards from various companies, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems, architecture, and hardware security venues. He is an ACM Fellow "for contributions to computer architecture research, especially in memory systems", an IEEE Fellow "for contributions to computer architecture research and practice", and an elected member of the Academy of Europe (Academia Europaea). His computer architecture and digital logic design course lectures and materials are freely available on YouTube, and his research group makes a wide variety of software and hardware artifacts freely available online. For more information, please see his webpage.

Deliang Fan bio: Dr. Deliang Fan is currently an Assistant Professor in the School of Electrical, Computer and Energy Engineering at Arizona State University, Tempe, AZ, USA. Before joining ASU in 2019, he was an assistant professor in the Department of Electrical and Computer Engineering at the University of Central Florida, Orlando, FL, USA. He received his M.S. and Ph.D. degrees in Electrical and Computer Engineering from Purdue University, West Lafayette, IN, USA, in 2012 and 2015, respectively, under the supervision of Prof. Kaushik Roy. Dr. Fan’s primary research interests include energy-efficient and high-performance big-data processing-in-memory circuits, architectures, and algorithms, with applications in deep neural networks, data encryption, graph processing, and bioinformatics acceleration-in-memory systems; hardware-aware deep learning optimization; brain-inspired (neuromorphic) computing; and AI security. He has authored or co-authored 100+ peer-reviewed international journal/conference papers in the above areas. He is the recipient of best paper awards at the 2019 ACM Great Lakes Symposium on VLSI (GLSVLSI) and the 2018 and 2017 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). His work was also nominated as a best paper candidate at the 2019 Asia and South Pacific Design Automation Conference (ASP-DAC). He has served as a technical reviewer for over 30 international journals/conferences, including Nature Electronics, IEEE TNNLS, TVLSI, TCAD, TNANO, TC, and TCAS, and as a Technical Program Committee member for DAC, ICCAD, HPCA, MICRO, GLSVLSI, ISVLSI, ASP-DAC, etc. He is also a technical area chair of GLSVLSI 2019/2020 and ISQED 2019/2020, and the financial chair of ISVLSI 2019. Please refer to his webpage for more details.