
Toward Domain-Specific EDA

More companies appear to be creating custom EDA tools, but it is not clear whether this trend is accelerating, or what it means for the mainstream EDA industry.

Whenever there is change, there is opportunity. Change can come from new abstractions, new options for optimization, or new limitations imposed on a tool or flow. For example, the slowing of Moore's Law means that sufficient progress in performance, power, or cost can no longer be made from one version of a product to the next simply by moving to the next node. The design itself must be improved, margins shrunk, or the product re-architected.

One such change that is starting to find its way into the design methodology is the shift from static tools to dynamic ones. A static tool looks at the design and optimizes it independent of any particular use case or scenario. Dynamic optimization adds one or more scenarios as inputs to the optimization process, allowing the tools to perform more focused optimizations. This started with power optimization when performing clock or power gating, which used to be a static operation. These techniques can be improved further by knowing exactly how and when parts of the design need to be active. This is also driving the resurgence of processor design, where custom processors can be created that are optimal for specific tasks.
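
To make the distinction concrete, the fragment below is a minimal sketch, in Python, of what scenario-driven (dynamic) gating selection might look like. The block names, activity numbers, and threshold are invented for illustration; a real flow would derive the activity from simulation or emulation of the actual workloads.

```python
# Hypothetical sketch of scenario-driven clock-gating selection. Block names,
# activity figures, and the threshold are invented; they do not come from any
# real tool or flow.

# Per-scenario activity: the fraction of cycles each block does useful work.
scenarios = {
    "video_decode": {"dsp": 0.92, "crypto": 0.01, "dma": 0.40},
    "idle_screen":  {"dsp": 0.03, "crypto": 0.00, "dma": 0.05},
    "secure_boot":  {"dsp": 0.00, "crypto": 0.85, "dma": 0.30},
}

GATE_THRESHOLD = 0.10  # gate a block in a scenario if it is active <10% of the time

def gating_plan(scenarios, threshold=GATE_THRESHOLD):
    """Return, per scenario, the blocks that are candidates for clock gating."""
    return {
        name: [blk for blk, util in activity.items() if util < threshold]
        for name, activity in scenarios.items()
    }

if __name__ == "__main__":
    for scenario, gated in gating_plan(scenarios).items():
        print(f"{scenario}: gate {gated or 'nothing'}")
```

A static tool has to assume every block might be active and gate conservatively; with the scenario data above, the crypto block can be gated aggressively during video decode because the workload shows it is idle.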

Semiconductor companies always have created some of their own EDA tools. "In the '80s most semiconductor and ASIC companies had their own tools," says Simon Davidmann, founder and CEO of Imperas Software. "But then there were resource issues and customers wanted a more standardized approach. The industry transformed from proprietary solutions in the design and semiconductor companies to being an industry driven by standards, trying to build a common solution that was applicable to everybody."

There was still room for some specialized tooling. "Every design house has some design or data management problem that is exclusive to them," says Rob Aitken, technology strategist at Synopsys. "Sometimes, after they create a solution, they don't want their competitors to get that, and so they keep it in house. They may have come to the conclusion that this is the only way to solve it, and it may be for any number of reasons, but eventually a more broadly applicable EDA solution could work for them."

Tools are constantly in a state of flux. "The EDA business has to have a big enough market for it to justify investments in the tools," says Neil Hand, director of strategy for design verification technology at Siemens EDA. "The thing that really limits that, when it comes to industry-specific, or application- or domain-specific solutions, is how generalized the problem becomes. And then the second part is what languages or capabilities exist to encapsulate that generalization."

Some domains are large enough to support dedicated solutions. "Domain-specific is not something new," says Tom Feist, an embedded entrepreneur and contractor for OpenROAD. "The FPGA industry is an example where EDA and academia have attacked this challenge with solutions that include MATLAB, OpenCL, C/C++, Python, and Simulink-based design. National Instruments with LabVIEW is another example."

There is always a balance between specificity and flexibility. "Domain-specific systems run into an interesting overlap of technical and economic problems," says Duaine Pryor, EDA technology consultant. "As you make them general enough to garner a market that justifies a leading-edge development, they lose the value that comes from technical advantage gained through specialization. Of course the reverse is true, as well. That propagates through the whole value chain."

Markets and industry dynamics change. "There are some companies with a lot of resources at the bleeding edge of their domains, trying to find ways to go further than where the EDA companies can take them," says Imperas' Davidmann. "That's why some companies are being acquired into semiconductor companies, where they chew up and spit bits out as a way to get that expertise in house. I'm sure that Apple's success with its M1 and M2 is because they have so much tooling inside."

Anyone who uses the latest nodes knows the pressures that are on them. "With semiconductor scaling slowing, or failing, there is a need for architectural innovation and domain-specific optimization," says Zdeněk Přikryl, CTO of Codasip. "Raising the abstraction level and efficient design automation enables faster design cycles, and hence time to market."

Also, many new technologies are being inserted into design flows. "Anytime you start talking about new technologies, photonics, for example, you may find a gap between what's commercially available and what's needed," says Jeff Roane, product manager at Cadence. "But that gap is quickly closed, as soon as the need arises, to the point where it makes financial sense for one of the big players to develop something."

It takes time to build up the necessary expertise. "The quantum EDA field will have to cross barriers between physics and engineering," says Mohamed Hassan, quantum solutions planning lead for Keysight Technologies. "This is a daunting task. The two fields typically use different terminologies and nomenclatures. Currently, the quantum hardware design cycle spans multiple tools, in multiple domains, in a discordant fashion, with multiple gaps in between that are typically filled by extra effort that is highly dependent on the knowledge and experience of the designer."

The ESL failure
The electronic system level effort from the late '90s was an attempt to introduce a new abstraction along with new languages. "It started out with broad goals and wound up being fairly narrowly targeted to datapath-centric and similar kinds of algorithmically straightforward design," says Synopsys' Aitken.

The market does continue to grow and evolve for some of the tools developed as part of that flow. "The system-level co-processor hardware/software codesign and optimization does start to look more like a real disruption, but it has a real 'back to the future' flavor," says Pryor. "The industry initially hit the problem when many systems, cell phones in particular, acquired more of a heterogeneous computing architecture. Some good solutions were generated, but became niche products because of a combination of economic factors and engineering silos. Design by optimization, high-level synthesis, domain-specific languages, and other developments in the last 20 years could make this area more tractable than at the millennium."

ESL also was deflected by the growing IP market. "Today we see this notion of tools plus IP," says Cadence's Roane. "You see processor IP, memory IP, interconnect IP, interface IP, even the algorithm stuff that's covered by high-level synthesis today. But if you look at the kinds of designs that are really amenable to high-level synthesis, it's algorithmic designs. The whole notion of tools-plus-IP is something that's already in play today, and you are going to see more of that."

Virtual prototypes hold many of the pieces together. "Domain-specific EDA may help generate parts of the virtual prototype, such as processors or other components used in the SoC," says Codasip's Přikryl. "So, in one aspect, domain-specific EDA is enabled by the virtual prototype, where each of the verticals is significantly accelerated and optimized by dedicated flows suited for those functions. If I make the parallel to the software world, we can have code written in several languages and glue everything together in the linker. It's similar in the hardware world. We just use different integration methods."

As abstractions get raised, the workloads become increasingly important. "Years ago, you could optimize power in the layout, and that's all people really could do," says Siemens' Hand. "And then power became part of synthesis and the implementation tradeoff. Then it became part of the high-level synthesis tradeoff. Now it's become part of the processor optimization tradeoff, and we're going to go up and it will become part of the system-level tradeoffs."

Those workloads are driving design practices. "The hyper-scalers are doing chip design because their specific workloads are unique and different from the workloads that their suppliers are targeting," adds Roane. "You can do those tasks with off-the-shelf processors, but that comes at a high cost in terms of power consumption. And you're probably not going to get the best performance compared to a custom implementation. We see a lot of hyper-scalers doing chip designs today because they're trying to lower power and improve the performance for specific workloads that are unique to them."

Machine learning also is creating some unique flows. "We are seeing many domain-specific architecture languages being created," says Aitken. "When you think about it from an EDA standpoint, it's definitely an opportunity for some customized design approach, starting from the language you use to describe these things. How does a synthesis flow that's optimized to a particular structure differ from a synthesis flow as it exists now? How do you tailor an algorithm that's going to produce a customized block?"

Tool development
In the past, a lot of the domain-specific tooling came from startups. "They would see an opportunity where customers were demanding something that was not being fulfilled by EDA," says Davidmann. "We pivoted from being a simulation company through to verification because of the demand created by RISC-V and the need for an ecosystem for the verification of processors. There are a handful of companies building solutions because the customers need it, but Big EDA hasn't gotten there yet. Small companies are creating this, and over time there'll be consolidation."

This is also driving interest in open-source EDA. "One of the compelling reasons to use open source has been the ability to modify the tool for their special needs," says OpenROAD's Feist. "This could be for security or leveraging features like machine learning. Google has been a big proponent of open source, and it is not because the tools are too expensive for them. It is because they want a competitive advantage, and if they give their secret sauce to the EDA vendors, then everyone has it."

One such open-source flow, shown in figure 1, has been put together by efabless.

Fig. 1: OpenLANE flow built on OpenROAD. Source: efabless

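As a rough illustration of how a user drives such a flow, the snippet below writes the kind of per-design configuration file that OpenLANE consumes before running its RTL-to-GDSII steps. The design name, file paths, and variable names here are assumptions based on common usage and may differ between OpenLANE versions; consult the project documentation for the actual schema.

```python
import json

# Illustrative only: a minimal per-design configuration of the sort an
# OpenLANE-style RTL-to-GDSII flow reads. Variable names and values are
# assumptions for this sketch, not an authoritative schema.
config = {
    "DESIGN_NAME": "my_accelerator",   # hypothetical top-level module name
    "VERILOG_FILES": "dir::src/*.v",   # RTL sources for the design
    "CLOCK_PORT": "clk",
    "CLOCK_PERIOD": 10.0,              # target clock period in ns
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

The point is less the exact keys than the shape of the interaction: because the flow and its sources are open, a team can patch any stage that does not fit its domain rather than waiting on a vendor.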

Some large EDA companies are buying into this trend. "Open standards allow people to plug into flows," says Hand. "The ability to add interfaces into tools is important, and academic collaborations are important. Traditionally, that's one of the areas where EDA does need to improve. There have been cases in the past where there were very tight collaborations between academia and EDA. In more recent times that has gone away, and we need to get back to it."

One driver for that may be access to data. "The hyper-scalers spend enormous amounts of time gathering data, processing data, and keeping each other from having access to theirs," says Aitken. "In terms of chip data, consider on-die monitors. You can use these to gather information while the chip is running, and you can learn things. Big EDA is not giving you data. They are giving you a means to gather your own data and do whatever it is that you want with it. There's a further ML-style role, where the relevant data exists both within Synopsys and within the user base. For example, when a tool or flow has a bunch of knobs, what happens when you tune them in different ways? Where do you get the best answer?"

Hand agrees. "We work with customers and have added interfaces into tools that allow them to extract information and put it into their data lake. Then they can do their own deep analysis with information about their design, and they are building their own capabilities. That may be unique to their needs, because they are taking advantage of the fact that they can apply additional information about the design. We are not privy to that information."
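
Aitken's point about knobs lends itself to a small sketch. The loop below is a hypothetical example of the kind of sweep a user might run on top of their own gathered data: the knob names, value ranges, and scoring are all invented, and run_flow is a placeholder for launching the real tool and reading back its reports.

```python
import itertools

# Hypothetical knob sweep; knob names, values, and the scoring are invented.
knobs = {
    "target_density": [0.55, 0.65, 0.75],
    "max_fanout": [8, 16],
    "clock_uncertainty": [0.10, 0.25],
}

def run_flow(settings):
    """Placeholder for launching the actual tool with these settings and
    returning a figure of merit from its reports. A fake score keeps the
    sketch runnable."""
    return sum(hash((k, v)) % 100 for k, v in settings.items())

results = []
for combo in itertools.product(*knobs.values()):
    settings = dict(zip(knobs.keys(), combo))
    results.append((run_flow(settings), settings))

best_score, best_settings = min(results, key=lambda r: r[0])
print("best settings found:", best_settings)
```

The ML-style role Aitken describes is essentially this loop at scale, with the exhaustive sweep replaced by a model trained on results gathered across many designs and many users.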

The creation of tools will often require multiple people to come together. "Quantum EDA is envisioned as the software and tools that will streamline the workflow and enable the automation of quantum hardware design, whether it is based on superconducting qubits, trapped ions, spin qubits, integrated optics, or cold atoms," says Keysight's Hassan. "The hardware basis spans broad areas of development, from superconducting microwave circuits to optics and integrated photonics, which widens the quantum EDA opportunities but makes it challenging for a focused effort. A steep knowledge barrier keeps many engineers out of this hot emerging field, and it is very different from how the current mature EDA design cycle is devised, for instance, for designing integrated circuits."

In other cases, an application domain places new demands on existing tools and flows. "Autonomous vehicles, whether they be robots, cars, or planes, bring in a whole new set of requirements," says Hand. "It adds new functional safety aspects, or a new focus on non-determinism that has to be managed throughout the flow."

Simple changes can have big implications. "If you look at multi-die systems, where you start incorporating things beyond regular CMOS, whether they're novel memories or whether they are CMOS from different processes, you run into a problem," says Aitken. "You can coerce an existing set of EDA tools to work with that, and you can coerce an existing set of assumptions about how margins should work and how signoff should work. But when you get to the point of wanting to do better than that, then you really ought to rethink some of the flow in terms of how you build what amounts to domain-specific EDA for the domain of signals, power, clocks, etc., migrating across a multi-die system, inside a package. That's a different animal than existing EDA solutions were developed for."

This is par for the course with EDA. "With every new generation of product, whether it's being used in new nodes, or being used in new applications, EDA gets extended and creates new opportunities," says Hand. "The EDA industry today looks nothing like it did, in terms of its feature coverage. It's no longer just a simulator and a synthesis tool and a layout tool. It's gone well beyond that. We add a few more things on the bottom side and we add a few more on the top side, but it creates new opportunities for optimization, by using more information that is available to us."

It has always been a combination of push and pull. "There always have been two dynamics," says Roane. "One is where the EDA companies will try to predict and therefore push. The other dynamic is where their customers, the semiconductor companies, will create pull in demand based on what they're doing. In a perfect world, both of those forces would be aligned. And to the extent they are, that spells success for that new tool or technology. But they're often not aligned. Sometimes you wait for that perfect storm to occur."

Conclusion
It is possible that more in-house EDA tools are being created today just because the industry is going in so many new directions. The slowing of Moore's Law is causing companies to look at many new technologies, solutions, and optimizations, and it takes time for those needs to coalesce into something that can be covered by a standard flow. The industry is vibrant, and this is just one indicator of growth.
