Let's not miss the multicore wave
The move to multicore ICs with dozens or hundreds of processor cores may be the biggest single design challenge of the next ten years, but is the design automation community ready? Presentations at the recent Design Automation Conference and elsewhere suggest there is much to do – and yet visible action is still lacking.
At a session on thousand-core chips at this year's DAC, Intel's Shekhar Borkar noted that ICs will have an integration capacity of 100 billion transistors by 2015. This will easily allow thousand-processor core chips, he said, with a potential for a near-linear performance speedup and a substantial power savings.
In the same session, IBM's John Darringer spoke about the design automation challenges posed by multicore ICs. Noting the requirement for innovation in system-level design, Darringer spoke about the need for three enabling technologies: physical architecture design, integrated early analysis, and multicore verification.
To facilitate chip integration, Darringer said, physical architecture design must become more automated, borrowing techniques from "extended synthesis." Designers also need early analysis tools to determine which cores, accelerators, interconnect schemes, and memory hierarchies to employ in a multicore system-on-chip (SoC). These tools need better links to physical design. And system verification gets more complex when you bring in multiple cores, asynchronous links, and memory and network contention. Multicore verification requires a high level of specification and a strong reuse environment.
What I found most interesting about Darringer's talk is that I haven't heard established EDA vendors say much about these challenges. You'd think it would be an opportunity for some retooling, or at least an impetus for some new development in areas like electronic system level (ESL) design or verification. Meanwhile, there is work going on in defining new interconnect schemes like network-on-chip, but it's not being done by traditional EDA or silicon IP companies. Rather, it's being undertaken by a new breed of companies, such as Sonics, Arteris, and Silistix, whose business models seem to include elements of both EDA and IP.
Most observers agree, however, that the programming environment is the biggest challenge with multicore SoCs. Multicore architectures involve multiprocessing, and parallel programming is needed to take advantage of that. Most embedded programmers don't have such expertise. Legacy code is sequential, and most compilers can't extract parallelism.
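To make that point concrete, here is a minimal sketch, entirely my own illustration rather than any vendor's tool output, of what programmers must do by hand today: a legacy sequential loop and its explicitly threaded equivalent using POSIX threads. Every step of the parallel version, partitioning the work, launching the threads, and combining the results, must be written out by the programmer, because an ordinary C compiler will not infer any of it.

```c
#include <pthread.h>
#include <stddef.h>

#define NTHREADS 4

/* Work description handed to each worker thread. */
struct slice {
    const int *data;
    size_t begin, end;
    long partial;           /* each thread writes only its own slot */
};

/* Sequential legacy version: trivially correct, but uses one core. */
long sum_sequential(const int *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += data[i];
    return total;
}

/* Worker: sum one slice of the array. */
static void *sum_worker(void *arg)
{
    struct slice *s = arg;
    s->partial = 0;
    for (size_t i = s->begin; i < s->end; i++)
        s->partial += s->data[i];
    return NULL;
}

/* Parallel version: the programmer must partition the data,
 * create the threads, and reduce the partial results manually. */
long sum_parallel(const int *data, size_t n)
{
    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    size_t chunk = n / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        slices[t].data  = data;
        slices[t].begin = (size_t)t * chunk;
        slices[t].end   = (t == NTHREADS - 1) ? n : slices[t].begin + chunk;
        pthread_create(&tid[t], NULL, sum_worker, &slices[t]);
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += slices[t].partial;
    }
    return total;
}
```

Even this toy example shows why the problem is hard: the decomposition here works only because the loop iterations are independent, and recognizing that independence in arbitrary legacy code is precisely what compilers cannot reliably do.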
The programming challenge is tough enough with homogeneous multicore ICs that use symmetric multiprocessing (SMP), and harder yet with heterogeneous multicore ICs that mix different types of processor cores. Multicore debugging is another software nightmare, because interactions between cores can cause data races, memory corruption, and stalls. Different types of cores typically come with their own debugging environments.
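A data race is worth a brief illustration, again my own hypothetical sketch rather than anything from a vendor's debug kit. Two threads incrementing a shared counter without a lock will silently lose updates, because `counter++` compiles to a separate load, add, and store that two cores can interleave; a mutex restores correctness.

```c
#include <pthread.h>

#define NITERS 100000

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Racy worker: the unprotected read-modify-write lets two
 * cores interleave and drop increments nondeterministically. */
static void *racy_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITERS; i++)
        counter++;               /* data race */
    return NULL;
}

/* Safe worker: the mutex makes each increment atomic. */
static void *safe_worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two copies of a worker and return the final counter value. */
long run_two(void *(*worker)(void *))
{
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

The safe version always yields 200,000, while the racy version can return almost anything up to that, and may even return the right answer on a given run. That irreproducibility is exactly what makes such bugs so painful to chase in a debugger.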
At the Multicore Expo in March and Embedded Systems Conference in April, many providers in the embedded software development market stepped forward with software development platforms, analysis tools, and debugging capabilities for multicore ICs. These solutions, however, are aimed at the multicore platforms of today – 2, 4, 8, 16 cores – not the 100- or 1,000-core platforms of tomorrow. Many are aimed at SMP platforms. And many are architecture specific.
Software support for multicore platforms requires hardware knowledge, and that knowledge is probably beyond the traditional embedded software development tool providers. EDA companies know hardware, but do not seem interested in getting into the software development business. As Aart de Geus, Synopsys CEO, said in a video interview that I did for EE Times at DAC, C compilers and debuggers are "incredibly good and incredibly free today, so this is not a good business to enter." No argument there! But a concurrent C compiler that could automatically extract parallelism and then optimize the mapping of applications onto multicore hardware might just be incredibly profitable, especially if it could be adapted to a wide variety of architectures.
So who's going to provide the programming and debugging environment for multicore ICs? Right now, one answer is the developer of the hardware architecture. When startup Tilera Corp. introduced its Tile64 embedded processor in late August, a key part of the 64-core offering was an Eclipse-based C language development environment. Third parties are also playing a role. An example is RapidMind, which offers a development platform for the IBM Cell Broadband Engine.
Meanwhile, perhaps we can build some bridges between the hardware and software design communities. I found it encouraging that DAC 2007 offered the session on thousand-core chips, along with a "Corezilla" panel discussion on multicore design challenges chaired by Markus Levy, president of the Multicore Association (www.multicore-association.com). I would encourage members of the EDA community to become familiar with the Multicore Association, which is doing some good work to develop both a communications and a debug API for multicore ICs.
But I still wonder why EDA providers aren't saying more about multicore processor ICs – and the software that will run on them – if, indeed, they're the wave of the future. I'm reminded of a comment made by Kurt Keutzer, professor of electrical engineering and computer science at U.C. Berkeley, at the "Megatrends and EDA 2017" panel at DAC. In his discussion about "manycore multiprocessors" and programmable platforms, Keutzer expressed his concern that "the next great EDA company won't be an EDA company at all."