NP I assume the first person either quit or was fired in October? Definitely skewed towards Golar people.
The Boys had to open the roof to get the smell out
Jab, what I was referring to is that back then you went by Jab15; now you're Jab65 …… 50 lighter
Recently, NVIDIA ceased the shipment of high-performance GPUs to countries like China. The new U.S. restrictions aim to curb the utilization of powerful accelerators for specific tasks, particularly in machine learning and its potential use for military applications. This led to the export ban of cards such as H100, A100, and their 800 variants, originally designed to navigate previous restrictions. Notably, the updated rules shift the focus from interconnect communications between GPUs to compute power, impacting not only data-center GPUs but also gaming cards like RTX 4090.
In response, NVIDIA has today introduced a fresh series of data-center GPUs designed to comply with the new restrictions. The HGX H20 and L20 series are now officially available to Chinese customers, featuring reduced compute power compared to their predecessors. The H20 GPU boasts 96GB of HBM3 memory with a memory bandwidth of 4.0 TB/s. Interestingly, that’s higher bandwidth than the ‘global’ H100 with 3.6 TB/s.
On the other hand, the L20 GPU, based on the AD102 GPU, is equipped with 48GB of GDDR6 memory. These solutions intentionally limit compute capabilities to ensure the GPUs do not exceed the 4800 TOPS performance threshold.
NVIDIA HGX H20, L20, L2 Specs, Source: ITHome
The HGX H20, a data-center GPU optimized for clusters of 8 GPUs, features NVLink connectivity and demands 400W of power. The L20 and L2 GPUs, operating over a PCIe interface, offer 48GB and 24GB of memory, respectively.
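Purely as a sketch, the figures quoted in the article above can be collected for a quick side-by-side look; the numbers come from the ITHome table cited above, not from an official datasheet, and fields the article does not give are left as None:

```python
# Specs as quoted in the article (source: ITHome). Fields the article
# does not give are left as None; nothing here is from an official datasheet.
specs = {
    "HGX H20": {"memory": "96GB HBM3",  "bandwidth_tb_s": 4.0,  "interface": "NVLink", "power_w": 400},
    "L20":     {"memory": "48GB GDDR6", "bandwidth_tb_s": None, "interface": "PCIe",   "power_w": None},
    "L2":      {"memory": "24GB GDDR6", "bandwidth_tb_s": None, "interface": "PCIe",   "power_w": None},
}

H100_BANDWIDTH_TB_S = 3.6  # the 'global' H100 figure the article compares against

h20_bw = specs["HGX H20"]["bandwidth_tb_s"]
advantage = h20_bw / H100_BANDWIDTH_TB_S - 1
print(f"H20 memory bandwidth advantage over H100: {advantage:.0%}")
```

Using the article's own numbers, the H20's 4.0 TB/s works out to roughly 11% more memory bandwidth than the 3.6 TB/s H100 figure, even as compute is capped.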
Reports indicate that NVIDIA has initiated chip sampling to partners, with orders set to commence from November 16th, coinciding with the full implementation of U.S. restrictions. NVIDIA anticipates the first shipment to take place in December.
Not to mention hotdawg and sir90, that was when Jab was 50 lighter.
The problem was they forecast 18% revenue growth in 2024 versus the street's 24%, I believe.
His power will be turned off
Good luck with that
“Nvidia (NASDAQ:NVDA) intends to announce three new chips for China, weeks after the company was restricted by the U.S. from selling two high-end artificial intelligence chips and another top gaming chip to Chinese companies, Reuters reported citing local news outlet STAR Market Daily.
The chips are known as HGX H20, L20 PCIe and L2 PCIe, and the U.S. tech giant could announce them on Nov. 16 at the earliest, the media outlet said, citing people with knowledge of the matter.”
It’s FERC selling; they found out they were lied to again
Kelly I had no idea you were in this originally, just curious why you didn’t sell back then. I don’t know the history of TGLO whether the stock price just dropped suddenly and people got caught or if it slowly eroded to nothing. I’ve been in both scenarios over the years. The worst scenario I think is the frog in the cold water on the stove, he never knew he got cooked until it was too late.
I was surprised to see Supermicro is just now shipping this, would be nice to know price.
NEWS PROVIDED BY
Super Micro Computer, Inc.
18 Oct, 2023, 16:58 ET
Supermicro's NVIDIA GH200 Superchip-Based Server Platform Increases AI Workload Performance Using a Tightly Integrated CPU and GPU and Incorporates the Latest DPU Networking and Communication Technologies
SAN JOSE, Calif., Oct. 18, 2023 /PRNewswire/ -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution manufacturer for AI, Cloud, Storage, and 5G/Edge, is announcing one of the industry's broadest portfolios of new GPU systems based on the NVIDIA reference architecture, featuring the latest NVIDIA GH200 Grace Hopper and NVIDIA Grace CPU Superchip. The new modular architecture is designed to standardize AI infrastructure and accelerated computing in compact 1U and 2U form factors while providing ultimate flexibility and expansion ability for current and future GPUs, DPUs, and CPUs. Supermicro's advanced liquid-cooling technology enables very high-density configurations, such as a 1U 2-node configuration with 2 NVIDIA GH200 Grace Hopper Superchips integrated with a high-speed interconnect. Supermicro can deliver thousands of rack-scale AI servers per month from facilities worldwide and ensures Plug-and-Play compatibility.
I wouldn’t bet against that; the wager should be whether it’s 2024 or not at all.
“Mustafa Suleyman, CEO of Inflection, along with Naveen Rao, who ran a startup that tried to compete with Nvidia, found that catching up was effectively impossible because of the sheer number of hardware and software machine-learning solutions Nvidia has worked on for so long that it has built a moat around AI itself. Dan Newman, an analyst who looked at Nvidia and its competitors, stated that customers are willing to wait 18 months rather than go someplace else, because there is no place else with the AI learning capacity Nvidia has. In his opinion, along with the others I mentioned, Nvidia has a solid moat in AI for a long time. Other reports have since come out that seem to back up the major market share Nvidia has and will maintain going forward.”
Here is a question from Cowen analyst that I thought was the best question. I think for most companies you want the whole system not just chips. From Nvidia you get the whole enchilada just plug it into the wall.
“Matt Ramsay -- TD Cowen -- Analyst
Thank you very much. Good afternoon. Lisa, I wanted to maybe ask the AI question a little bit differently, not just focused on your GPU portfolio but more broadly. I think one of the big surprises to a lot of us is how quickly the AI market changed from accelerator cards to selling full servers or full systems for your primary competitor.
And they've done a lot of innovation not just on GPU but on CPU on their own custom interconnect, etc. So, what I'd like to hear a little bit of an update on is just how you think about your road map going forward across CPU, GPU, and networking and particularly the networking part as you look to continue to advance your AI portfolio. Thanks.
Lisa Su -- President and Chief Executive Officer
Yeah. Thanks, Matt. I think it's an important point. What we're seeing with these AI systems is they are truly complicated when you think about putting all of these components together.
We are certainly working very closely with our partners in putting together sort of the full system, CPU, GPUs, as well as the networking capability. Our Pensando acquisition has actually been really helpful in this area. I think we have a world-class team of experts in this area, and we're also partnered with some of the networking ecosystem overall. So, going forward, I don't think we're going to sell full systems, let's call it, AMD-branded systems.
We believe that there are others who are more set up for that. But I think from a definition standpoint and when we're doing development, we are certainly doing development with the notion of what that full system will look like. And we'll work very closely with our partners to ensure that that's well defined so that it's easy for customers to adopt our solutions.”
Coming back from Snap’s poor earnings and broad market sell off. Hulu definitely helped
Yes I do hopefully it goes good
“Blackwell B100 AI GPUs getting closer to launch, as Nvidia secures supply orders from Wistron and Foxconn
Hon Hai Chairman Liu Yangwei and Nvidia CEO Jensen Huang (Image Source: UDN)
With Nvidia selecting Wistron as its main substrate supplier and Foxconn as module packager, the next gen Blackwell AI GPU chips are ready to enter the production phase. A Q2 2024 launch is looking very likely.
Bogdan Solca, Published 10/31/2023
Earlier this month, Nvidia revealed that its next gen Blackwell AI GPUs should release sometime in 2024, more than half a year ahead of previous roadmaps. It seems Team Green intends to meet the ever increasing demand for AI accelerators and is planning to release new generations on a yearly basis starting in 2024, which may suggest that gaming GPUs could also be refreshed every year. According to reports published by CTEE and UDN, Nvidia could launch the Blackwell B100 AI GPUs by early summer 2024, as the chips are already entering the supply chain certification stage.
For this phase, Nvidia is looking to secure key production components such as substrates and package materials from supply chain partners. CTEE reports that Nvidia selected Wistron over Hon Hai (Foxconn) as the main substrate supplier for the Blackwell GPUs. Hon Hai was supposed to be a backup supplier, but its yield rates made Nvidia reconsider and place all the orders through Wistron.
Hon Hai nevertheless remains an important partner: it will be responsible for the GPU module packaging, according to UDN. Nvidia plans to move future H100 and B100 GPU module production to Hon Hai’s factories in Mexico, the U.S. and Taiwan in order to avoid the restrictions imposed on China.
The Blackwell B100 AI GPUs should feature a multi-chip module design coupled with HBM3e VRAM, but the gaming GPUs expected to launch in late 2024 are believed to retain the monolithic design and may include GDDR7 VRAM.”
You are correct:)
Was that the soccer ball, never mind that was Wilson
Farther to the left or right whichever you prefer
Don’t forget they were going to help Delfin expedite FID last June I believe.
“With Nvidia getting ready to report quarterly earnings and revenues in November, I get more excited by the fact that there are so many uses of AI that one can’t count the ways. Let me say this: Nvidia is so far ahead of everyone in AI that it’s going to be difficult for anyone to catch them. Their product line in both hardware and software is unbelievable. With AI, Nvidia is going to be able to bring new products online for as far as the eye can see because of the numerous applications of AI. So for those who wonder what the stock price is going to be in the future, it’s hard to say at this point for that very reason. Even Druckenmiller stated he could see Nvidia at a 2.1 trillion market cap on 100 billion in revenues with the current margins at 70%. Those numbers don’t include the numerous new products which are coming involving AI. I will say this: you’re looking at a company that is generations, and I mean generations, ahead in their research and development involving AI products. Currently everyone talks about the AI chips that are propelling the stock higher. Wait till everyone sees the other products coming to market that will show the same market share for Nvidia.”
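As a back-of-the-envelope check of the Druckenmiller figures quoted above ($2.1T market cap on $100B revenue at ~70% margins), here is the implied multiple math. Treating the 70% as a net margin is purely an illustrative assumption; the quote does not specify which margin is meant:

```python
# Back-of-envelope check of the quoted scenario: $2.1T market cap,
# $100B revenue, ~70% margin (assumed net margin for illustration only).
market_cap_b = 2100   # $2.1 trillion, in billions
revenue_b = 100       # $100 billion
margin = 0.70

earnings_b = revenue_b * margin          # implied earnings, in $B
ps_multiple = market_cap_b / revenue_b   # implied price/sales
pe_multiple = market_cap_b / earnings_b  # implied price/earnings

print(f"Implied earnings: ${earnings_b:.0f}B")
print(f"Implied P/S: {ps_multiple:.0f}x, implied P/E: {pe_multiple:.0f}x")
```

So the scenario implies about $70B in earnings and a roughly 21x sales / 30x earnings multiple at that $2.1T valuation.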
That brings up a good point, who will be the gas provider for Delfin. Does the purchaser of the LNG select the gas provider or does Delfin.
NP you have to wonder what kind of men the ceo & coo are, they’re not young full of piss and vinegar barn burners. So are they looking to sell the company, we have no idea what the signed contracts contain. I sure don’t know what a golden parachute at this point would look like probably a lot of holes. I hope it is just taking a lot longer to align those ducks and if they RM this puppy they can have all the options they want.
That very well may be true maybe they have already done it I doubt it. If they were going to do it that way why even bother to be quoted in articles announcing future FID. Although I guess one could say nothing they have said has been realized to date.
The gas
Yes I know they’ve said that along with a lot of things. They won’t do it without enough gas for 2nd FLNG. They had enough gas for 1st FLNG for quite awhile no FID. Always waiting once they get that waiting for next thing.
It is evident they are having trouble signing Devon on the dotted line. It is hard to think Fred Jones is involved if he was serious about the project the CEO & COO would have been fired long ago. They won’t FID with just one FLNG they need Devon. Now the Middle East will be the new Covid excuse. So far the shell is being kept current something to hang our hats on.
Who knows the man in red might be the only one right here only time will tell in fact time is all we have. Four years of it before we have to go back to FERC
In this case unfortunately it is:
Trick NO Treat
Yeah they’re unbiased
“AMD's problem is by the time the Mi300 comes to market NVDA will be selling Blackwell B100 or maybe even X, which will be 4 generations ahead of the Mi300. So it will be the same story as gaming, AMD will have a solution but it will be inferior to NVDA's solutions. The largest companies will still buy from NVDA, they will have to in order to stay competitive. When a company is competing to stay on top (cloud company for ex) they will only buy the best to give their customers the best experience. AMD will pick up some customers, but NVDA will control a large majority of the market. In the gaming market, NVDA sells to all the serious hardcore gamers and controls a majority of the market....AI will be no different.”
Unfortunately we’ve had 5 years soon to be 6 years of that, ready for RM personally.
News
Nvidia May Move to Yearly GPU Architecture Releases
By Anton Shilov published October 10, 2023
Nvidia's Blackwell set to arrive in 2024, its successor to hit in 2025
In a bid to maintain its leadership in artificial intelligence (AI) and high-performance computing (HPC) hardware, Nvidia plans to speed up development of new GPU architectures and essentially get back to its one-year cadence for product introductions, according to its roadmap published for investors and further explained by SemiAnalysis. As a result, Nvidia's Blackwell will come in 2024 and will be succeeded by a new architecture in 2025.
But before Blackwell arrives next year (presumably in the second half), Nvidia is set to roll out multiple new products based on its Hopper architecture. These include the H200, which might be a re-spin of the H100 made for enhanced yields or just higher performance, as well as the GH200NVL, which will address training and inference on large language models with an Arm-based CPU and a Hopper-based GPU. These are set to come sooner rather than later.
As for the Blackwell family due in 2024, Nvidia seems to be prepping the B100 for AI and HPC compute on x86 platforms, which will succeed the H100. In addition, the company is prepping the GB200, presumably a Grace Blackwell module featuring an Arm CPU and a Blackwell GPU targeting inference, as well as the GB200NVL, an Arm-based solution for LLM training and inference. The company is also planning a B40 product, presumably a client-GPU-based solution for AI inference.
In 2025, Blackwell will be succeeded by an architecture designated with the letter X, which is probably a placeholder for now. Nvidia is prepping the X100 for x86 AI training and inference as well as HPC, the GX200 for Arm inference (Grace CPU + X GPU), and the GX200NVL for Arm-based LLM training and inference. In addition, there will be an X40 product, presumably based on a client GPU, for lower-cost inference.
For now, Nvidia leads the market for AI GPUs, but AWS, Google, and Microsoft, as well as traditional AI and HPC players like AMD and Intel, are all prepping their new-generation processors for training and inference, which is reportedly why Nvidia accelerated its plans for B100- and X100-based products.
To solidify its position further, Nvidia has reportedly pre-purchased TSMC capacity and HBM memory from all three makers. In addition, the company is pushing its HGX and MGX servers in a bid to commoditize these machines and make them popular among end users, particularly in the enterprise AI segment.
“Meanwhile, success in this endeavor involves overcoming substantial technical barriers. A key challenge lies in the existing heavy investments in the x86 computing architecture, which has been a staple in software development for PCs. Transitioning to Arm-based CPUs requires addressing compatibility issues, as code developed for x86 chips will not directly run on the Arm ISA, necessitating porting software from x86 to Arm.”
I’m sure not a tech guy except for what I read but Jensen has said on the last few earnings calls that the CPU has taken a general purpose position due to the GPU. To me that is a lower margin business. I guess I will have to wait for November earnings call to hear from horse’s mouth. I’m sure he will be asked
I find it hard to believe Nvidia is going to compete with AMD and Intel on Arm-based CPUs for PCs. Nvidia seeks out hard areas that not everyone can accomplish. There would have to be a good reason for it to benefit them. Not to mention x86 is not compatible with Arm, so all the legacy software isn’t supported on Arm. I don’t get it
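The incompatibility point above can be made concrete: a compiled binary targets one instruction set, so x86-64 machine code cannot run on an Arm CPU without porting or an emulation layer (e.g. Apple’s Rosetta 2 or Windows-on-Arm x64 emulation). A minimal sketch that just reports which ISA the current machine exposes:

```python
# Report the current machine's instruction set architecture. Binaries
# compiled for the other ISA would need porting or emulation to run here.
import platform

machine = platform.machine().lower()

if machine in ("x86_64", "amd64"):
    print("Running on x86-64: Arm (aarch64) binaries need porting or emulation here.")
elif machine in ("arm64", "aarch64"):
    print("Running on Arm: x86-64 binaries need porting or emulation here.")
else:
    print(f"Other architecture: {machine}")
```

Interpreted code like this script runs on either ISA because the interpreter itself is recompiled per platform; it is native binaries and their dependencies that make the x86-to-Arm transition costly.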
I said earlier they were out signed by the competition and they were whether bigger, smaller, brown, blue or otherwise.
Not sure how you get that: Venture signed 5.95 mtpa, Cheniere 4.7 mtpa, NextDecade 6.4.