2021 and still haven't started my 100 bagger move (Spectra7 SEV, Lightwave Logic LWLG)
Just saw the latest video I uploaded not long ago. Bonnie says (09:20)
* 2.5M is all they need in datacenters.
* In some cases with our Chinese customers we went to 5.5M.
So, if Nvidia wants 5M, they can have that.
If Nvidia news hits, I'm adding another 100k
Added 44000 shares
On point 4, your broadcom question.
Just a thought. By using Tomahawk 5, the ACC will give the operator a better signal (and be able to reach 5M or more)
Proto,
On your point 3
https://www.spectra7.com/gc1122-2
Thanks for sharing
Info given by S7?
I’ve skipped 99% of the posts here. I'm glad I didn’t miss yours. Great thinking Marco.
Investing is a marathon
Yes, 6 months from now I finally hope the industry is doing that ‘expensive’ upgrade
I listened to the whole Macom presentation. I learned for the second time that it’s important to have matching speeds between optical and electrical systems.
For e.g., ACCs convert a 400G electrical signal to 400G optical and vice versa. So optical modulators will be very important, imo
Great posts Robert. Macom sees the market where we will be the leader. Awesome times ahead.
Thank you for reporting back. I haven’t been able to give it a better listen due to vacation
Bonnie said we will need to do a raise when we get a 100M order.
I got kicked out of the webinar
On demand (maybe later available)
https://www.nvidia.com/en-us/about-nvidia/webinar-portal/
Check out minute 10:20
Fabulous post. Well done
New presentation. Check out the new AI slide.
https://www.spectra7.com/Spectra7CorporateOverview-2023-07-18.pdf
I share the Nvidia match
Proto, I’ve got an answer back on your dB loss remark.
Hi Steve,
Per our engineering team:
Regarding the slides referenced, loss is generally measured at the Nyquist frequency. For 100G per lane Ethernet (106.25Gb/s/lane to be exact), the Nyquist frequency is 26.5625GHz. The Ethernet specification limits loss to <19.75dB at 26.56GHz.
Passive cables struggle to reach 2 meters (most are limited to 1.5 to 1.8m) while staying within the 19.75dB loss limit. The bulk of the market is centered around the 2m length, but we’ve heard much interest in going up to 2.5 and 3m, especially with thinner cables.
Spectra7’s ACC solution in a 3m 800G QSFP-DD assembly comes in approximately 50% below the standard's limit.
Hopefully this answers your question.
Best regards,
Bonnie
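For anyone who wants to sanity-check Bonnie's numbers, here's a quick sketch. It assumes PAM4 signaling (so the Nyquist frequency is half the baud rate), and the per-meter loss figure is a hypothetical value I've picked to be roughly consistent with passive cables topping out around 1.5 to 1.8m:

```python
# Sanity check of the loss figures from the email (PAM4 assumed:
# 2 bits per symbol, Nyquist frequency = baud rate / 2).
bit_rate_gbps = 106.25                            # 100G-per-lane Ethernet line rate
bits_per_symbol = 2                               # PAM4 carries 2 bits per symbol
baud_rate_gbd = bit_rate_gbps / bits_per_symbol   # 53.125 GBd
nyquist_ghz = baud_rate_gbd / 2                   # 26.5625 GHz

spec_limit_db = 19.75      # Ethernet channel loss budget at Nyquist

# Hypothetical passive-cable loss per meter at this frequency
loss_per_meter_db = 11.0
max_passive_length_m = spec_limit_db / loss_per_meter_db

print(f"Nyquist frequency: {nyquist_ghz} GHz")
print(f"Max passive length at {loss_per_meter_db} dB/m: {max_passive_length_m:.2f} m")
```

At ~11 dB/m the passive budget runs out just short of 1.8m, which lines up with the "most are limited to 1.5 to 1.8m" comment.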
Video on the product. Expensive, and it can take a bit of time to get your fix.
https://www.facebook.com/israelmakingtheworldabetterplace/videos/525258347646043/
The people behind Dror Ortho-Design Ltd
https://pitchbook.com/profiles/company/489371-14#team
Marwan Albarghouti
Director of Product Reliability at Lightwave Logic
https://www.linkedin.com/in/marwan-albarghouti-16a77019/
Spectra7 (SPVNF)
June 7, 2023
Spectra7 creates silicon products that enable copper cables to be longer, thinner, lighter and run at higher performance levels. Spectra7 pioneered and is the leader in Active Copper Cables for markets including data centers, head-mounted displays for both virtual reality and augmented reality, and 4K and 8K panels. The Corporation’s family of products features a patented signal processing technology used in the design of “active” cables and specialty interconnects which enable longer, thinner, lighter and higher performance interconnects. The Corporation holds approximately 55 patents relating to its products. Spectra7’s existing customers include global tier-one consumer electronics and hyperscale data center infrastructure companies. As the need for bandwidth increases in all areas of our technology life, the demand for Spectra7’s unique patented signal processing technology is set to grow dramatically.
https://ldinv13.sequireevents.com/view?session_id=be6e7667-7dfd-44a3-8dad-fdbaa1398b82
You have the wrong stock.
https://ir.sonomotors.com/stock-information
Apple Vision Pro has a cable
What he said.
On the ihub app you don’t see it, that’s why. Go to a browser to see it. I agree, pretty confusing.
Here’s the link
Incredible video. Needs multiple viewings
Product revenue increased 87% year-over-year, primarily due to the ramp of our active electrical cable solutions. License revenue grew 28% year-over-year from $25 million to $32 million.
Throughout fiscal '23, we had several highlights across our product lines.
For active electrical cables, or AECs, we continue to lead the market Credo pioneered during the last several years.
https://capedge.com/transcript/1807794/2023Q4/CRDO
Bill Brennan
Sure, sure, absolutely.
So I think generally, I think that AI applications will create revenue opportunities for us across our portfolio.
I think the largest opportunity that we'll see is with AEC.
However, optical DSPs, there will definitely be a big opportunity there. Even linecard PHYs chiplets, even SerDes IP licensing will get an uplift as AI deployments increase.
So maybe I can start first with AEC.
Now it's important to kind of identify the differences between traditional compute server racks, which commonly use the front-end network, so basically a NIC-to-ToR connection, the ToR up to the leaf and spine network.
The typical compute rack would have 10 to 20 AECs in rack, meaning in-rack connections from NIC to ToR. And to highlight, the leading-edge lane rate today for these connections with compute servers is 50 gig per lane.
Within an AI cluster, in addition to the front-end network, which is similar, there's a back-end network referred to as the RDMA network. And that basically allows the AI appliances to be networked together within a cluster directly. And if we start going through the map, this back-end network has 5 to 10x the bandwidth as the front-end network.
And so the other important thing is to note within these RDMA networks, there are leaf spine racks as well.
And so if we look at one example of a customer that we're working with in deploying, the AI appliance rack itself will have a total of 56 AECs between the front-end and back-end networks. Each leaf-spine rack is a Clos rack, or disaggregated chassis, which will have 256 AECs.
And so when we look at it from an overall opportunity for AEC, this is a huge uplift in volume. The volume coincides with the bandwidth.
Now lane rates will quickly move: certain applications will go forward at 50 gig per lane, others will go straight to 100 gig per lane.
And so we see probably a 5x plus revenue opportunity difference between the typical - if you were to say apples-to-apples with the number of compute server racks versus an AI cluster.
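The per-rack arithmetic Brennan walks through can be roughed out like this. The 56 and 256 AEC counts are from the call; the 15-AEC figure is just the midpoint of his "10 to 20" range, so treat the ratios as illustrative:

```python
# Rough sketch of the AEC volume uplift described on the call.
traditional_rack_aecs = 15     # midpoint of "10 to 20 AECs in rack"
ai_appliance_rack_aecs = 56    # front-end + back-end AECs per AI rack
leaf_spine_rack_aecs = 256     # per disaggregated leaf-spine (Clos) rack

per_rack_uplift = ai_appliance_rack_aecs / traditional_rack_aecs
print(f"Per-rack AEC uplift vs a traditional compute rack: ~{per_rack_uplift:.1f}x")
# ...and that's before counting the 256-AEC leaf-spine racks, which have
# no equivalent in a traditional compute deployment.
```

Per appliance rack alone that's nearly a 4x connection count; add the leaf-spine racks and the faster lane rates and the "5x plus revenue opportunity" he cites looks plausible.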
So to kind of extend into optical, there's typically a large number of AOCs in the same cluster.
So you can imagine that the short in-rack connections are going to be done with AECs. These are three meters or less. But these appliances will connect to the back-end leaf-spine racks, these disaggregated racks, and all of those connections will be AOCs. Those are connections that are greater than three meters.
And so if we look at this, this is all upside to, say, a traditional compute deployments where there's really no AOCs connecting rack to rack.
Okay? So when we look at the overall opportunity, we think that the additional AEC opportunity within an AI cluster is probably twice as large, twice as many connections, as AOCs. But the AOC opportunity for us will be significant in the sense that AOCs represent the most cost-sensitive portion of the optical market.
And so it's also a lower technology hurdle since the optical connection is well defined, and it's within the cable.
So this is a really natural spot for us to be disruptive in this market. We see some of our customers planning on deploying with 400-gig AOCs. Others are planning to go straight to 800-gig AOCs.
So we view - AEC is the largest opportunity. Optical DSPs for sure will get an uplift in the overall opportunity set.
But also, I think that if we look at Tesla, as an example, that's an example of where, as they deploy, we're going to see a really nice opportunity for the chiplets that we did for them for that Dojo supercomputer. And it's an example of how AI applications are doing things completely differently, and we view that long term, this will be kind of a natural thing for us to benefit from.
We can extend that to SerDes IP licensing. Many of the licenses that we're doing now are targeting different AI applications. And also, don't forget linecard PHYs: the opportunity for the network OEMs and ODMs is also increasing, and of course, those PHYs are something that go on the switch linecards that are developed.
So generally speaking, I think that AI will drive faster lane rates. And we've been very, very consistent with our message that as the market hits the knee in the curve on AI deployments, we're naturally going to see lane rates go more quickly to 100-gig per lane. And that's where we really see our business taking off.
So we're getting a really nice revenue increase from 50 gig per lane applications, but we really see acceleration as this move to 100 gig per lane happens. And especially when you start thinking about the power advantages that all of our solutions offer compared to others that are doing similar things. That might have been more than you were looking for, but...
Congratulations guys!! We did it!