We all agree that verification and debug consume a significant share of chip development time and are arguably its most challenging parts. Simulator performance has long been the headline metric and remains a critical component of the verification process. Still, the need of the hour is to stretch beyond raw simulator speed to achieve maximum verification throughput and efficiency.
Artificial intelligence (AI) is everywhere. Machine learning (ML) and its associated inference abilities promise to revolutionize everything from driving your car to making your breakfast. While machine learning isn’t a panacea, bringing intelligence into the verification process can increase verification efficiency significantly.
Simulation accounts for roughly 70% of all bugs found in a design. Let’s talk about the top challenges that design and verification (DV) engineers face today:
- The need to run a regression after every RTL or testbench change, which is time-consuming when the regression spans millions of cycles.
- The time to reach coverage closure.
- Lack of visibility into, and control over, which input stimulus drives specific functional coverage.
- Difficulty finding bugs hidden in rare, hard-to-reach scenarios.
- Debugging and triaging failures.
Bringing intelligence into the regression space can increase verification efficiency: the tool examines past regressions, identifies the relationship between input stimulus and design or functional coverage, and learns which states are interesting. The ML-enhanced application can then generate randomized vectors that reach those interesting states more efficiently. ML uses coverage as a proxy for the functional behavior of a run when deciding which behaviors are “interesting.” Xcelium ML technology helps increase hits on bins that are hard to hit or rarely (or never) hit, and it also provides stimulus-distribution diagnostics and root-cause analysis. We all agree that long-latency bugs take a huge effort to track down; anything that shrinks that latency from millions of cycles to just a few is excellent.
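To make the idea concrete, here is a minimal, purely illustrative sketch of coverage-guided stimulus biasing in Python. It is not Xcelium ML’s actual algorithm: the “design,” the rare coverage bin, and the stimulus fields (`opcode`, `burst_len`) are hypothetical toys. A uniform constrained-random phase records which stimuli hit the rare bin, a simple frequency model learns which opcode correlates with hits, and a second phase biases randomization toward it.

```python
import random

# Toy stand-in for a DUT coverage model: the rare bin is hit only when
# opcode == 7 and burst_len > 6.  (Hypothetical; a real coverage model
# is vastly more complex.)
def rare_bin_hit(stimulus):
    return stimulus["opcode"] == 7 and stimulus["burst_len"] > 6

def random_stimulus(opcode_weights=None):
    # weights=None gives uniform constrained-random; otherwise biased.
    opcode = random.choices(range(8), weights=opcode_weights)[0]
    return {"opcode": opcode, "burst_len": random.randint(1, 8)}

def learn_weights(history):
    # Count how often each opcode appeared in runs that hit the rare bin,
    # then bias future randomization toward those opcodes (with add-one
    # smoothing so no opcode is ever excluded entirely).
    counts = [1] * 8
    for stim, hit in history:
        if hit:
            counts[stim["opcode"]] += 10
    return counts

random.seed(0)

# Phase 1: uniform constrained-random regression; record (stimulus, hit).
history = [(s, rare_bin_hit(s)) for s in (random_stimulus() for _ in range(500))]
uniform_hits = sum(hit for _, hit in history)

# Phase 2: ML-guided regression using weights learned from phase 1.
weights = learn_weights(history)
guided_hits = sum(rare_bin_hit(random_stimulus(weights)) for _ in range(500))

print(uniform_hits, guided_hits)
```

With the same run count, the guided phase concentrates stimulus where coverage feedback says the rare behavior lives, so it hits the rare bin far more often than the uniform phase did.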
So, what do you do when you can achieve the same coverage in one-fifth of the time? The answer is straightforward: you spend the recovered 80% of your schedule finding new bugs in your design. This is excellent news for the verification engineer, because finding bugs before tapeout is what verification is all about.
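The arithmetic behind that claim is simple: if the same coverage closes in one-fifth of the original wall-clock time, four-fifths of the budget is freed up. A quick sanity check (the 100-hour budget is a hypothetical figure):

```python
original_hours = 100.0            # hypothetical regression budget
ml_hours = original_hours / 5     # same coverage in one-fifth the time
recovered = original_hours - ml_hours

print(recovered / original_hours)  # -> 0.8, i.e. 80% of the budget freed
```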
As with everything else, ML has found its way into verification. Its reach extends into nearly every aspect of verification, from static to formal to simulation to debug. Cadence is at the forefront of the effort to push the boundaries of what AI/ML can do in verification. The Xcelium ML App is one such example: it can help you compress your regression and execute only meaningful simulation runs, expose hidden bugs, and increase the hit count of rare bins. You can see even better results, up to 10X, if your environment is ML-friendly (meaning it has a high degree of randomization in its input state space).
If you missed our previous blog in this series, check out “Quest for Bugs – The Constrained-Random Predicament.”
Anika Sunda is a senior product marketing manager in the Cadence System Verification Group. She has more than 13 years of semiconductor industry experience spanning product management, research and development, and verification from prior roles at Synopsys and Agilent Technologies. Sunda holds a master’s degree from IIIT Bangalore, India. She is currently responsible for business development and product marketing of Xcelium products, helping bring Cadence’s machine learning technology, Xcelium-ML, to the broader market.