Nvidia’s AI Summit runs from tomorrow through Wednesday in Washington, DC.
Nvidia talks Enterprise AI and then Industrial AI; these guys reach everywhere.
NVIDIA Collaboration
Hitachi Rail’s strategic collaboration with NVIDIA means HMAX digital services will now be accelerated by the NVIDIA IGX™ industrial-grade, edge AI platform, delivering robust edge AI computing on trains and assets across the railway ecosystem.
Acceleration of solution development via NVIDIA’s AI frameworks and the software engineering experts of NVIDIA and Hitachi Digital.
This combined expertise, along with that of Hitachi Digital’s AI COE and software engineering, is set to empower our customer base across 51 countries globally through faster, more powerful AI capabilities from edge to cloud.
NVIDIA + Hitachi Rail catenary and pantograph monitoring use case:
By using the industrial-grade NVIDIA IGX platform with NVIDIA Holoscan for real-time sensor processing, the massive volumes of video data from train-fitted pantograph cameras (pan cams) can be processed and managed at the edge (on the trains or infrastructure) in real time, with only relevant information sent back to the operational control centres.
Prior to the deployment of these AI enhancements, pan cam systems could take up to ten days to process all the data that a train collected in a single day.
With the new solution, the AI systems can process the information in real time, providing valuable, timely insights about overhead line conditions.
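The edge-processing idea described above (analyze everything on the train, uplink only the relevant frames) can be sketched in plain Python. This is an illustrative filter only, not the NVIDIA Holoscan API; the frame fields, scoring function, and threshold are all assumptions:

```python
# Illustrative edge-filtering sketch: score each frame locally on the train
# and forward only the "relevant" frames to the control centre.
# (Hypothetical names and fields; not the NVIDIA Holoscan API.)

def anomaly_score(frame):
    # Stand-in for an on-board AI model estimating catenary/pantograph wear.
    return frame.get("wear_estimate", 0.0)

def process_at_edge(frames, threshold=0.8):
    """Return only the frames worth sending back to the control centre."""
    return [f for f in frames if anomaly_score(f) >= threshold]

frames = [
    {"id": 1, "wear_estimate": 0.1},
    {"id": 2, "wear_estimate": 0.95},  # worn contact strip: report it
    {"id": 3, "wear_estimate": 0.3},
]
relevant = process_at_edge(frames)
print([f["id"] for f in relevant])  # only frame 2 is uplinked
```

The point of the pattern is the data reduction: the heavy video volume stays on the train, and only a small, high-value subset crosses the network, which is why the ten-day backlog collapses to real time.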
Good punch line
Three surgeons on lunch break:
Surgeon 1: I like surgery on electricians. After you open them up, everything is color coded.
Surgeon 2: I like surgery on librarians. After you open them up, everything is in alphabetic order.
Surgeon 3: I like surgery on politicians. After you open them up, they have no spine and guts, and their head and ass are interchangeable.
Not to mention the Street always looks ahead
Good for Nvidia
Data center finance leases
Buried within the footnotes of its quarterly and annual reports, Microsoft lists the finance leases, mostly for data centers, that have yet to commence. The details include the amount of money it has committed under leases it has not yet begun to use.
At the end of June, these finance leases that have yet to begin stood at a staggering $108.4 billion. To put that in context, Microsoft had total finance lease liabilities of $27.1 billion at the end of its fiscal 2024 (which ended in June), and its finance leases yet to commence at the end of fiscal 2023 were $34.4 billion. So the amount of contracted finance leases that have yet to begin has more than tripled in the past year.
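A quick back-of-envelope check of the growth claim above, using the figures from the passage (the ratio itself is my arithmetic):

```python
# Microsoft finance leases yet to commence, in $ billions (from the passage).
fy2023_pending = 34.4   # end of fiscal 2023
fy2024_pending = 108.4  # end of June (fiscal 2024)

growth = fy2024_pending / fy2023_pending
print(round(growth, 2))  # ~3.15x, i.e. "more than tripled"
```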
These leases are expected to begin between fiscal years 2025 and 2030. They will have terms ranging from one year to 20 years.
So what does all this mean? Well, finance leases are typically long-term agreements by which the owner of an asset gives control of it to another party in exchange for payments. Usually at the end of the lease, the lessee (the party making payments) has the option to buy the asset for a nominal amount.
Microsoft has made it clear that these leases are for data centers, which means it has contracts in place to spend a whole lot of money on data centers in the coming years. Now some of this could be through partnerships with Oracle and CoreWeave, but this still leaves a lot of new data center space set to be added.
I went back and listened to Phil Panaro’s video on Nvidia. He was very positive about future growth, and he mentioned the extensive conversion of local, state, and government systems from Web 2 to Web 3. Does anyone know what this means? Thanks
Loved that movie
Dan Ives always says Nvidia is in the 1st inning and the party goes till 4am, they’re at 9pm. He needs some new lines; he has worn those out, although I agree with him. A colorful guy, along with his suit jackets
Investing.com -- Morgan Stanley analysts provided their view on Nvidia (NASDAQ:NVDA)'s GB200 server rack production, noting that recent supply chain checks suggest no further delays in shipments.
Concerns had arisen in the market regarding Nvidia's Blackwell platform, specifically about supply and demand uncertainties, but the production process appears to be on track, according to the investment bank.
"Our latest supply chain checks suggest no further delay for GB200 server rack shipments," Morgan Stanley states.
They also noted that no leakage problems have been spotted, and cable connection issues can be resolved through manufacturing adjustments. Production rollout is scheduled for November 2024, with the first batch of shipments expected by the end of the year.
The bank highlights that while both NVL36 and NVL72 designs should hit the market by late 2024, "NVL72 should be the customer preference for high computing power density and better cost-effectiveness in the long term."
Moreover, Morgan Stanley says some tier-one server buyers will have access to the Blackwell platform around the year-end, while broader shipments to the rest of the market are likely to begin by the end of the first quarter in 2025.
They explain that due to the immense power consumption of GB200 server racks, however, most enterprise buyers may prefer HGX/MGX servers for the Blackwell platform.
Morgan Stanley's analysis suggests that Nvidia's supply chain remains strong, with growth expected in high-speed switches and large-scale 800G switch shipments projected for the fourth quarter of 2024.
"We see the supply chain continuing to progress to mass production toward year-end," writes Morgan Stanley.
While volatility may persist in the short term, the bank remains optimistic, particularly on companies like Wistron, Accton, and Hon Hai for their relationships with Nvidia and contributions to the AI hardware supply chain.
I couldn’t find a link to copy; there are a few articles. This one is from Tom’s Hardware.
xAI Colossus supercomputer with 100K H100 GPUs comes online — Musk lays out plans to double GPU count to 200K with 50K H100 and 50K H200
By Anton Shilov published September 4, 2024
Colossus only took a little bit over four months to build.
Happy times for OpenAI
Elon Musk's xAI has brought the world's most powerful training system online. The Colossus supercomputer uses as many as 100,000 Nvidia H100 GPUs for training and is set to expand with another 50,000 Nvidia H100 and 50,000 H200 GPUs in the coming months.
"This weekend, the xAI team brought our Colossus 100K H100 training cluster online," Elon Musk wrote in an X post. "From start to finish, it was done in 122 days. Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200K (50K H200s) in a few months."
According to Michael Dell, head of the high-tech giant of the same name, Dell developed and assembled the Colossus system quickly, which highlights the considerable experience the server maker has accumulated deploying AI servers during the last few years' AI boom.
Elon Musk and his companies have been busy making supercomputer-related announcements recently. In late August, Tesla announced its Cortex AI cluster featuring 50,000 Nvidia H100 GPUs and 20,000 of Tesla's Dojo AI wafer-sized chips. Even before that, in late July, X kicked off AI training on the Memphis Supercluster, comprising 100,000 liquid-cooled H100 GPUs. This supercomputer has to consume at least 150 MW of power, as 100,000 H100 GPUs consume around 70 MW.
Although all of these clusters are formally operational and even training AI models, it is entirely unclear how many are actually online today. First, it takes some time to debug and optimize the settings of those superclusters. Second, X needs to ensure that they get enough power, and while Elon Musk's company has been using 14 diesel generators to power its Memphis supercomputer, they were still not enough to feed all 100,000 H100 GPUs.
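The power figures quoted above are easy to sanity-check. Assuming roughly 700 W per H100 (its approximate SXM TDP) and a facility overhead factor for CPUs, networking, and cooling, both assumptions mine rather than the article's:

```python
# Back-of-envelope power check for a 100K-GPU cluster.
gpus = 100_000
watts_per_h100 = 700  # approximate H100 SXM TDP (assumption)

gpu_power_mw = gpus * watts_per_h100 / 1_000_000
print(gpu_power_mw)  # 70 MW for the GPUs alone, matching the article

# CPUs, networking, and cooling roughly double the draw; the exact
# factor is an assumption, chosen to land near the ~150 MW site figure.
overhead_factor = 2.1
print(round(gpu_power_mw * overhead_factor))  # ~147 MW
```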
xAI's training of the Grok version 2 large language model (LLM) required up to 20,000 Nvidia H100 GPUs, and Musk predicted that future versions, such as Grok 3, will need even more resources, potentially around 100,000 Nvidia H100 processors for training. To that end, xAI needs its vast data centers to train Grok 3 and then run inference on this model.
Getting in early with a company can be a plus, but from the little I have read up on it, true quantum computing is years away. The other problem I see is that as time goes on it will attract the big boys and everyone else, so it won’t be the secret that AI and generative AI were a few years back. Just my opinion; good luck to you
I couldn’t tell from your posts; if you hold for 5 years you will be rewarded
I think there are only three longs on the board you, Jet and myself
Google LLC will spend $3.3 billion to build two new data centers in South Carolina and expand an existing cloud campus.
The Alphabet Inc. unit detailed the project on Thursday. It’s the latest in a string of 10-figure data center investments that the company has announced since the start of the year.
Cloud providers are investing heavily to expand their artificial intelligence infrastructure. In April, Insider reported that Microsoft Corp. plans to equip its data centers with 1.8 million graphics cards by year’s end. Google didn’t specify what hardware will be installed in its South Carolina facilities, but it’s possible the company will use the opportunity to grow the number of AI chips it can offer to cloud customers.
Google’s data centers run not only Nvidia Corp. graphics cards but also internally developed AI accelerators.
Amazon is looking to add an industrial park near Dulles International Airport to its network of Northern Virginia data centers.
...developing the nearly 60-acre Renaissance Park with four data centers.
The data center buildings would be constructed in phases, starting with a three-story data center building with a maximum height of 102 feet, a 6-acre electrical substation and a guardhouse....
Cloud service and infrastructure market hits $427bn - Synergy
Compared to H1 2023, the operational capacity of the hyperscale data center network has grown by 24 percent, with the size of future pipeline data centers growing by 47 percent.
On the hardware side, ODMs (original design manufacturers) account for a large portion of the market share, as hyperscalers opt for their own-designed servers.
Beyond the ODMs, Dell, Microsoft, Super Micro, and HPE lead. However, Nvidia is rapidly growing by selling directly to hyperscale operators.
“One interesting aspect of this is the way in which it is changing the structure of the supply side of the industry. Over the last ten years, ODMs have continued to eat up server market share, and now we see Nvidia’s explosive growth, which is largely fueled by sales to hyperscalers, either directly or indirectly. In the first half, revenues from Nvidia’s data center business unit far surpassed the combined revenues of Dell and HPE in data centers,” said John Dinsdale, a chief analyst at Synergy Research Group.
Jose’s video (Jose is both sharp and knowledgeable on Nvidia, AI, and technology in general; he has a Master’s in Electronics and was an engineer) states that the H20 has the same ‘guts’ as another Nvidia chip that was recently discontinued. Jose hypothesizes that those two chips might be the only Nvidia chips using those particular ‘guts’ (I forget the exact term he used), and that both were discontinued in order to facilitate more Blackwell production. (Nvidia sells $12 billion a year of H20 chips to China.)
Yesterday:
China is reinforcing efforts to encourage domestic companies to favour homegrown artificial intelligence chips over Nvidia’s advanced semiconductors.
This is to bolster its semiconductor industry and counteract the effects of US-imposed sanctions, according to sources who spoke to Bloomberg.
In recent months, Beijing has issued informal directives, advising Chinese companies to reduce their reliance on Nvidia’s H20 chips, which are widely used in AI development.
3 days ago:
Nvidia has allegedly stopped taking orders for its China-specific H20 GPUs used in AI and HPC applications, reports Cailian News Agency, citing a distributor source.
---
As reported by Taiwan Economic Daily, NVIDIA is said to have stopped accepting new orders for the H20 accelerator and hasn’t specified a reason.
---
Nvidia has stopped taking orders for its H20 chips since August, according to Chinese media outlet, CLS, citing distributors.
Curious what timeframe do you consider long with Nvidia?
I keep hearing that Nvidia will start delivering the first racks of its GB200 servers to major cloud-service providers in early December. I don’t know what timeline it takes to build those servers, but you would think they would be delivering the chips to their suppliers sometime in October. The November earnings call will be interesting; Blackwell will be the big subject. The Street looks forward, and this candle will be lit towards the EOY
He was saying that besides the cloud providers there is only about 1% AI penetration among corporate, sovereign, military, etc. Good video
Watch the video if you get a chance; he explains his 18 trillion market cap theory.
Was that the interview he did on Schwab? It was around 7 minutes on YouTube. He was very convincing; he basically laid out Nvidia’s path besides the Hyperscalers.
Klein: "Would [Microsoft] sign a 20-year nuclear-power-purchase/supply agreement with Constellation Energy spending $1.6 billion alone to restart the facility that doesn't begin power [generation] until 2028 if they planned to slow or pare back current capex growth and AI investment? No!"
You know, I saw that about Microsoft days back and never thought about it in this way; they are exactly correct. You don’t spend that kind of money out that long and then slow down and change your mind. There are so many data centers being built or planned here and abroad. Luckily our government and the private sector see the needs of the grid and are being proactive to solve them. “A lot better than mandating electric cars without a charger in place.” All this so-called worrying about AI “ROI” and slowing down will soon pass, and then it’s on to the next thing for the Street to worry about. Once Blackwell gets its footing, all this crap will be in the rear view mirror and off to the races. All the traders trying to make 10 cents on their trades will be gone.
NVIDIA Corporation (NASDAQ:NVDA) provides graphics, computing and networking solutions. Jensen Huang, the CEO of the firm, made a surprise appearance at the recently held T-Mobile Capital Markets Day. Speaking alongside T-Mobile CEO Mike Sievert, Huang claimed that his company had fused signal processing and AI. He predicted that this was going to be a great new growth opportunity for the telecommunications industry. During the chat with Sievert, Huang underlined the importance of AI in shaping the future of telecommunications, particularly highlighting the role of AI-RAN in optimizing and scaling network performance. The NVIDIA Corporation (NASDAQ:NVDA) bigwig said that fusing radio computing and AI computing into one architecture allowed companies to apply AI models to optimize signal quality across diverse environments.
Google’s top executive confirmed the company is working on large-scale data centers that would use more than 1 GW of power. Sundar Pichai, CEO of Google and Alphabet, in a speech last week at Carnegie Mellon University... “We are now working on over 1-GW data centers, which I didn’t think we would be thinking about just maybe even two years earlier, and all of this needs energy,” Pichai said during a talk in Carnegie Mellon’s Highmark Center as part of the university’s 2024-25 President’s Lecture Series. Pichai spoke on “The AI Platform Shift and the Opportunity Ahead,” as he focused on his company’s advancements in AI and his vision for a future driven by AI.
It’s Nvidia that’s why
Nvidia, T-Mobile, Ericsson, and Nokia are teaming up to implement AI in networks. A new Nvidia buzzword is “NVIDIA AI Aerial”; here is an excerpt from an article.
“T-Mobile CEO Mike Sievert said: “AI-RAN has tremendous potential to completely transform the future of mobile networks, but it will be difficult to get right. That is why T-Mobile is jumping in now to help lead the way with our partners.
“This collaboration between T-Mobile, NVIDIA, Nokia and Ericsson will truly define what is next in mobile networks in the 5G Advanced era and beyond, and drive real progress where it is needed.”
NVIDIA founder and CEO Jensen Huang said: “AI will reinvent the wireless communication network and industry — going beyond voice, data, and video to support a wide range of new applications like generative AI and robotics. NVIDIA AI Aerial is a platform that unifies communications, computing and AI.”
As long as the Fed lowered the rate before the asteroid hit, nobody would care
Well said. It is hard to believe it is already way over a year since this first started. They are going to have such an installed base in a few years, and by the time everyone gets their fill there will be a new cycle starting. Once you have the market share it is almost impossible for others to make inroads unless Nvidia falters, and I don’t see that happening.
I watched a CNBC video with the SandboxAQ CEO talking about Nvidia chips in the healthcare industry. This was right up your alley; they were talking LQMs (Large Quantitative Models) versus LLMs.
I thought it was a good article summarizing the Goldman conference for people that didn’t listen to webcast
“The first thing is to remember that AI is not about a chip. AI is about an infrastructure,” Nvidia (NVDA) Chief Executive Jensen Huang recently explained.
On Sept. 11, Huang presented at Goldman Sachs’s Communacopia + Technology Conference. He discussed Nvidia’s competitiveness, the Blackwell platform, Taiwan Semiconductor, and more.
Nvidia’s stock jumped more than 8% following Huang’s speech, a significant recovery from the sharp drop following its Q2 earnings report in August.
On Aug. 28, Nvidia released its fiscal second-quarter earnings report.
For the quarter that ended July 28, the company reported adjusted earnings of 68 cents a share, more than double the year-earlier figure and surpassing the analyst consensus estimate of 64 cents. Revenue reached $30 billion, up 122% from a year earlier, exceeding the anticipated $28.7 billion.
However, investors had anticipated more significant growth for the company, which led them to sell down Nvidia shares more than 15% within a week of the financial report.
Nvidia will start shipping Blackwell in Q4 and will scale it through the next year.
Bloomberg/Getty Images
A lot of good info in that post
Blackwell will make the earth change from an elliptical to parabolic orbit and this SP will never be seen again:)
Just a little FYI: Jensen said at Goldman’s conference they had 32,000 employees.
And he said, guys, I would love to sell them to you, but I’ve got this damn DOJ on my a$$
Sorry to hear that; maybe you will change your mind later on. This board has changed and is not interesting except for a few, IMO. I have a lot on ignore; I don’t want to read their rhetoric and doom-and-gloom posts. To each their own. Buy yourself some Nvidia and put it in your sock drawer. That’s what I did many years back and I’m always smiling. Good luck to you!!
Yes it was good, Colette usually does the technology conferences so to hear Jensen was special and it came at an opportune time.