The Predicament of the ASIC Designer—Part II

By Bikash Roy Choudhury, Principal Architect, AKA “TechApps InfraGuru”

More than a dozen EDA applications and workflows are active in a production chip design environment. Identifying the workload and access pattern of each application is challenging and time-consuming.

The NetApp EDA core team has spent considerable time and effort identifying the workloads and access patterns of some of the most commonly used applications in EDA environments. The following diagram provides an overview of the common workloads generated by different workflows. Most workflows share a similar workload signature, though some differ.


Figure 1) Workloads of different workflows during the chip design process.

This diagram illustrates that both forms of verification generate the most intense workloads. Most ASIC designers spend 60–70% of their time in logical or physical verification. Various test benches and regressions are run during the verification phase to simulate different logical and physical designs. The intensity of this workload puts heavy pressure on network bandwidth and on the memory, CPU, and disks of the storage system. Choosing the right storage platform, and correctly sizing, architecting, and tuning the storage, network, and compute nodes, has a significant impact on overall performance and job completion time during the chip design process.

Because systems-on-chip (SoCs) are becoming more complex, packing an increasing number of transistors into ever smaller form factors, ASIC designers spend much of their time simulating designs. The scratch area that holds transient data needs to be separated from the actual chip binaries and tools. This isolated scratch area can belong to a single project, or it can be shared by different projects when various designs are under test and IPs are reused.

The diagram above suggests that verification is a superset of many other workloads in the workflow. The other workloads, from SCM, Place and Route, Static Timing Analysis, Standard Cell Library Characterization, Design Rule Check, Layout Versus Schematic, and other tools, are not as intense as verification, but they have a similar access pattern. NetApp strongly recommends keeping these workflows in separate logical volumes on the same or different controllers, depending on the size of the chip design.
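As a minimal sketch of that layout, the following clustered Data ONTAP commands junction one volume per workflow into a common namespace. Every name here (the SVM, aggregates, volumes, sizes, and junction paths) is invented for illustration and does not come from this article; verify the exact command syntax against the ONTAP documentation for your release before adapting it.

```shell
# Illustrative only: SVM, aggregate, and volume names are hypothetical.
# Each workflow gets its own volume; all are junctioned into one namespace,
# with the verification scratch area kept separate from tools and binaries.
volume create -vserver eda_svm -volume verif_scratch -aggregate aggr_ssd1 \
    -size 20TB -junction-path /verif_scratch
volume create -vserver eda_svm -volume pnr -aggregate aggr_sas1 \
    -size 5TB -junction-path /pnr
volume create -vserver eda_svm -volume sta -aggregate aggr_sas1 \
    -size 2TB -junction-path /sta
volume create -vserver eda_svm -volume scm -aggregate aggr_sas2 \
    -size 1TB -junction-path /scm
```

Placing the scratch volume on its own aggregate (here, an SSD-backed one) keeps the transient verification traffic from contending with the steadier workflow volumes.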

Isolating and spreading the workloads across different logical volumes and aggregates on multiple controllers provides the following advantages:

  • Flexible and scalable storage provisioning for different levels of service offerings.
  • Debugging and fault isolation are quick and easy for specific workflows.
  • Assigning quality of service on different volumes provides better use of storage and network resources. For example, a scratch volume has higher priority for storage and network resources than workflow volumes that are present in the same storage controller.
  • Backing up the data and setting the right retention time for specific volumes are much easier.
  • Isolating disparate application workloads on different storage controllers and having them coexist with the rest of the workflows in a cluster architecture provides a heterogeneous environment. For example, software quality and mask prep/tape-out application workloads can be isolated on different storage controllers and be part of the same cluster namespace along with other workflows.
  • Best practices and tuning can be applied on different storage controllers in a cluster for heterogeneous workloads such as software quality and mask prep/tape-out applications.
  • A modular architecture also encourages application vendors to build tools that are more efficient and location aware.
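To illustrate the quality-of-service point above: storage QoS in clustered Data ONTAP expresses throughput ceilings rather than priorities, so one way to give the scratch volume effective priority is to cap the less critical workflow volumes that share the controller. This is a hedged sketch with invented policy-group and volume names; check the exact QoS syntax against the documentation for the ONTAP release you run.

```shell
# Hypothetical names throughout; QoS parameters should be verified against
# your ONTAP version's documentation before use.
qos policy-group create -policy-group pg_workflow_capped -vserver eda_svm \
    -max-throughput 5000iops

# Cap the lower-priority workflow volumes so the scratch volume keeps
# headroom on shared storage and network resources.
volume modify -vserver eda_svm -volume pnr -qos-policy-group pg_workflow_capped
volume modify -vserver eda_svm -volume sta -qos-policy-group pg_workflow_capped
```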

Over the years, various EDA vendors have claimed 5–10x performance improvements with the latest versions of their applications. These applications are getting faster to meet shrinking time-to-market demands. However, an adequately designed and optimized infrastructure (storage, network, and compute) complements those application enhancements, improving overall performance and enabling ASIC designers to be more efficient and productive.

The current IT service model needs to move beyond simply providing infrastructure and become an architectural design center, one capable of acting as a service provider with a better delivery model for software and hardware developers and ASIC designers. The right infrastructure must be in place to deliver better performance and to significantly optimize EDA license costs.

Engineers are increasingly driving the infrastructure requirements for their design and development efforts as the semiconductor industry's paradigm changes. A shift in the business model is inevitable: the existing and growing infrastructure, funded today from a company's capital expenditure (capex), must move toward a cost-sharing model. The infrastructure can then be used not only by resident engineers but also by customers who reuse the IPs.

Software as a service, built on a simple, flexible, and agile infrastructure, can yield an efficient delivery model: one that optimizes performance, secures the IP, shrinks the global data center footprint, reduces capex, and optimizes EDA application license costs.

I will write about the importance of file system layout for EDA workloads in my next blog post.