
Re: tedpeele post# 149584

Monday, July 31, 2023 8:57:38 AM
You are being disingenuous, as I pointed out in this post from JANUARY THIRD OF THIS YEAR:

Red Herring alert:

But, given the shareholder letter's emphasis on the company continuing to perform more reliability testing THEMSELVES at their own new 9,000 sq ft lab, what assurances do we have that the PDK really isn't a hollow achievement - that the end users really are performing ongoing testing and willing to design their chips with Lightwave's polymer modulators, and not still waiting for more results back from Lightwave's labs (and maybe also other things that aren't Lightwave-related) before making that commitment?



Testing is not a one-and-done thing. The excerpt below is from a company called Test Tooling Solutions Group; if you knock around the site for just a few minutes you will get an idea of what I'm talking about.

“In the last six to eight weeks I’ve spoken to six customers about this subject,” said Hitesh Patel, director of product marketing for the Design Group at Synopsys. “The trends we see are that design sizes are increasing and the number of scenarios is increasing, so you need to test in different modes, such as idle or operational. At older nodes, static analysis was sufficient. Now you need dynamic analysis, but the analysis results are only as good as the vectors you create. We’re seeing users trying to get vectors right out of emulation to use in voltage drop analysis. But if you have a 100 million-instance design, some tools take seven to eight days to run. You may not have time to fix all of the issues. The more you can do early in the process, the higher the likelihood that it will get done.”
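
To put a toy number on Patel's static-vs-dynamic distinction (this sketch and every figure in it are mine, not from the article or any real EDA tool): static analysis bounds voltage drop by assuming everything can switch at once, while dynamic analysis replays activity vectors and reports the peak that actually occurs - which is exactly why the result is only as good as the vectors.

```python
# Toy illustration with made-up numbers: static worst case vs. a
# vector-driven dynamic estimate of peak switching current.

NUM_INSTANCES = 1_000_000        # hypothetical design size
CURRENT_PER_SWITCH_UA = 0.5      # hypothetical per-instance switching current (uA)

# Static bound: assume every instance switches simultaneously.
static_peak_ma = NUM_INSTANCES * CURRENT_PER_SWITCH_UA / 1000
print(f"static worst-case peak: {static_peak_ma:.0f} mA")

# Dynamic estimate: per-cycle switching activity, e.g. captured from
# emulation runs (here just a faked list of activity factors).
activity_vectors = [0.08, 0.12, 0.35, 0.10, 0.22]  # fraction switching per cycle
dynamic_peak_ma = max(a * NUM_INSTANCES * CURRENT_PER_SWITCH_UA / 1000
                      for a in activity_vectors)
print(f"vector-based dynamic peak: {dynamic_peak_ma:.0f} mA")

# The dynamic number is far below the static bound, but it only covers
# the scenarios the vectors exercised - Patel's point in a nutshell.
```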

That’s one of the reasons there has been such a big push into “shift left,” where more is done earlier in the design cycle. It’s also one of the reasons why all of the major EDA vendors are seeing steady growth in emulation, prototyping, and other tools that can link the front end of design more tightly to the back end.

Bigger trends
Reliability also can be viewed on a macro level. Consolidation is being driven by rising costs of design, as well as the need to bring different skill sets into the design process. It’s uncertain whether that will have a direct bearing on reliability, but it certainly could provide the right level of resources for extensive verification and debug of designs if combined companies decide to put their resources there. So far, that has not been determined.

New approaches to packaging are another unknown. As Moore’s Law becomes more expensive to follow, many companies have begun developing chips based on fan-outs and 2.5D architectures. While the multi-chip module approach has been around since the 1990s, putting the pieces together using interposers and high-bandwidth memories is new. How those designs perform over time is unknown.

“This question can be answered in theory, but not in practice,” said Mike Gianfagna, vice president of marketing at eSilicon. “Based on that theory, a silicon interposer is stable and it’s mostly passive. Ideas such as metal migration and warpage are well understood, and the interposer does not add unreliability. But it’s not any less certain with advanced nodes, where you have tunneling effects, and gates are a certain number of atoms wide. The bigger issue is what happens to the speed of any of these chips over 10 years. The effects will become more pronounced at higher temperatures. That’s becoming more of the issue to contend with.”
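
Gianfagna's temperature point is the one place here where you can actually put a rough number on things. The standard back-of-envelope is the Arrhenius acceleration factor; a minimal sketch, assuming a placeholder activation energy (the 0.7 eV below is my assumption, not a figure from the article):

```python
import math

# Arrhenius acceleration factor between a use temperature and a higher
# stress temperature: AF = exp((Ea/k) * (1/T_use - 1/T_stress)).
BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K
EA = 0.7                 # assumed activation energy (eV) for the wear-out mechanism

def acceleration_factor(t_use_c: float, t_stress_c: float) -> float:
    t_use = t_use_c + 273.15       # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((EA / BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

# How much faster the same mechanism ages at 105 C than at 55 C:
print(f"AF (55C -> 105C): {acceleration_factor(55, 105):.1f}x")  # roughly 26x
```

Even with a modest assumed activation energy, a 50-degree rise ages the part more than an order of magnitude faster, which is why "the effects will become more pronounced at higher temperatures" matters so much for that 10-year question.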

Nick Heaton, distinguished engineer at Cadence, agrees. “The big question is how these designs handle progressive degradation. In automotive, we’re seeing a lot more functional safety tooling for tolerance of single or multiple failures. There’s a long way to go in this space, though. What does 28nm look like over five years? What we can do is maximize coverage at all levels. But use cases are still the real problem. You cover that with all the permutations you think you can get away with.”

Heaton noted that some of the teams developing advanced SoCs are composed of hundreds of engineers. But he said that even with those large teams, there are still limited resources. “They have to be smart about what they’re testing. They run a certain number of low-level tests for hardware, for software, for hardware accelerators, for the operating system. That’s where we are at the moment.”
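
Heaton's line about covering "all the permutations you think you can get away with" is easy to put rough numbers on. A minimal sketch with made-up operating dimensions (none of these lists come from the article; a real set would come from the design's spec):

```python
from itertools import product

# Hypothetical operating dimensions for a made-up SoC.
power_modes     = ["idle", "standby", "operational", "turbo"]
voltages        = ["0.72V", "0.80V", "0.90V"]
temperatures    = ["-40C", "25C", "125C"]
process_corners = ["ss", "tt", "ff"]

scenarios = list(product(power_modes, voltages, temperatures, process_corners))
print(f"{len(scenarios)} scenarios from just four dimensions")  # 108

# Full coverage is already impractical at this toy scale once each
# scenario costs hours of emulation, so teams prune to the subset
# they think they can get away with.
```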

So how far do companies extend technology? That may be a much more interesting question as SoCs and new technologies begin getting adopted into safety- and mission-critical markets and have some history behind them.

“The big question is always how hard you’re pushing the technology,” said Drew Wingard, CTO of Sonics. “The closer to the edge, the less reliable it will be. We are on the edge of some very interesting tradeoffs over packaging complexity, known good die, and economics, meaning who’s to blame. What is the pain-to-gain ratio? The gain normally has to be very high, but experience can help change that.”

Whether it adds good metrics for reliability remains to be seen. At this point, however, there are too many unknowns to draw conclusions about what goes wrong, what really causes it to go wrong, and who’s responsible when it does.



https://tts-grp.com/blogs/719-top-six-types-of-testing-tools-you-need-to-know-about


Original post in response to your B.S.

https://investorshub.advfn.com/boards/read_msg.aspx?message_id=170864976

