Saturday, September 13, 2025 11:46:34 AM
Interesting thread on AI, the billionaire battle for top control, and some perspectives on the AI future.
Philosophical and structural debates abound, but it is still a very thought-provoking thread.
Are the setbacks in humanity, intelligence, and common decency being created by the current political environment going to lead to complete annihilation, or can humanity bring us through all the destruction and damage inflicted upon this world and its people by the pure greed for power of the few?
Do we really need to go down in the gutter so deep that it will be impossible to dig out?
https://bsky.app/profile/adamgalas.bsky.social/post/3lyppmtk6fc2s
Interestingly enough, Musk has claimed that by the end of the year Colossus 2 will be scaled up to 1 million GPUs.
He has also said Grok 5 will be released by the end of the year, while Grok 6 is coming next year and will be trained on those 1 million GPUs.
I had Perplexity estimate the cost of training an AI on that many GPUs: approximately $50 billion.
From what I know about Elon, and how much he hates Sam Altman (and I'm sure he's no fan of Mark Zuckerberg, Demis Hassabis, or Dario Amodei), he will want to build the biggest data center in the world.
Sam Altman has said Project Stargate will be spread over 20 data centers covering 4 million GPUs running on 5 GW of power.
I'm sure Musk would want to build something twice as powerful... However, he has 0.8% market share... And he is losing $1 billion per month.
He is trying to raise money at a $200 billion valuation, and he is trying to use Tesla to do it.
Basically, he wants shareholders to approve, at the upcoming annual meeting, an investment in xAI where Tesla can invest several billion and use that as a seed for a consortium of other VCs.
So for example, Tesla invests $2 billion at a $200 billion post-money valuation, and that gives cover to the other venture capitalists who put up the other $18 billion for a $20 billion funding round.
Remember how venture capital works?
Let's say I know someone who is a genius and they have a very speculative tech idea.
If I invest $1,000 for 10%, that's a $10,000 valuation.
If I then invest again, $10,000 for 10%, that's a $100,000 valuation, and it makes my initial investment look like it has made a lot of money.
So then I can invest $100,000 for another 10%, then $1 million for another 10%, then $10 million for another 10%, then $100 million for another 10%, and if I'm rich enough, like Musk, $1 billion for another 10%, and now it's a $10 billion post-money decacorn!
For someone with Musk's wallet, he could potentially invest up to $10 billion in this hypothetical startup, now valued at $100 billion, and all of his previous investment rounds look like they have gone up spectacularly in value!
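The round-by-round math can be sketched in a few lines (these are the hypothetical numbers from this example, not real xAI figures): each round buys 10% of the company, so the post-money valuation is simply the investment divided by 0.10.

```python
# Sketch of the round-by-round valuation ladder described above.
# Hypothetical numbers: each round buys a 10% stake.
def post_money(investment: float, stake: float = 0.10) -> float:
    """Post-money valuation implied by buying `stake` of the company."""
    return investment / stake

rounds = [1_000, 10_000, 100_000, 1_000_000, 10_000_000, 100_000_000, 1_000_000_000]
for amount in rounds:
    print(f"invest ${amount:>13,} for 10% -> post-money ${post_money(amount):>17,.0f}")
```

The final $1 billion check marks the startup at a $10 billion post-money decacorn, and every earlier stake gets marked up on paper by the newest round, which is exactly the dynamic described above.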
This is basically what SoftBank did for a long time.
You need other people to invest as well, of course, because eventually everyone's pockets run dry, but this is where the idea of Tesla investing a few billion dollars comes in.
It's in the best interest of the venture capital firms who have already invested in xAI to accept more investment at a higher valuation.
Because then they can go to their limited partners and say: look how well our first investments are doing!
Of course, limited partners are not idiots, so venture capital firms generally work in consortiums; it's not just one firm, it's several firms getting in on a single round, because that provides cover for everyone else.
If Andreessen Horowitz (a16z) is participating, then I can justify it to my limited partners; but they need someone other than Musk himself, so Tesla represents a somewhat technically independent entity that is buying in at $200 billion.
Musk is rich enough that he could technically fund around $30 billion on his own by simply borrowing against his existing assets from a consortium of banks.
If he could then leverage that 10x using Tesla, which could print shares and borrow in the public market for an extra few tens of billions, then he might theoretically be able to raise several hundred billion dollars from venture capitalists.
As long as the bubble in AI foundation model valuations continues.
That likely is a bubble, because those valuations are growing by at least 2x per year and the price-to-sales ratios are somewhat insane: for example, the latest I heard was that the annual revenue for xAI was around $0.5 billion.
If true, that means Elon is trying to raise at 400 times sales for a model that has 0.8% market share and is actually losing market share.
I asked Perplexity: what is the most recent update on the annualized run-rate revenue of xAI?
The estimates are that xAI is losing a billion dollars per month, with an annualized run rate of $500 million, which they think will quadruple next year to $2 billion.
But they are losing $12 billion a year... And Elon wants to double the number of GPUs just to train Grok 6 coming next year...
That's great news for Nvidia, but 500,000 GPUs alone are going to run around $17.5 billion, and the operating losses are around $10 billion next year.
And do you think Elon is going to stop at 1 million GPUs?
First he built a 100,000 GPU cluster, then 200,000, then 500,000, and then 1 million by the end of the year.
Do the math... He is going to want 2 million GPUs by the end of 2026 to train Grok 7.
But let's assume his growth rate of 4x per year holds (and by the way, Anthropic is growing at 10x per year and is number three in market share, mostly thanks to coding).
If he continues growing at his current rate, he will reach $8 billion in revenue in 2027 and $32 billion in 2028.
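A quick sketch of that compounding, using the thread's own assumptions (a roughly $0.5 billion run rate in 2025 growing 4x per year; these are the post's numbers, not audited figures):

```python
# Project a revenue run rate forward at a constant annual growth multiple.
def project_revenue(start: float, growth: float, years: int) -> list[float]:
    """Compound `start` by `growth`x per year for `years` years."""
    return [start * growth ** n for n in range(years + 1)]

# ~$0.5B annualized run rate in 2025, growing 4x per year (the thread's assumption)
path = project_revenue(0.5, 4, 5)  # billions of dollars, 2025..2030
for year, rev in zip(range(2025, 2031), path):
    print(year, f"${rev:,.1f}B")
```

This reproduces the figures used throughout the post: $2B in 2026, $8B in 2027, $32B in 2028, $128B in 2029, and $512B in 2030.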
But in this dick-measuring contest, by the end of 2027 he is going to want 4 million GPUs, because that is what Sam Altman says he will have by the end of 2028.
And that means by the end of 2028 he's going to want 8 million GPUs, double what Sam Altman has, to train Grok 9, because Sam will be training ChatGPT 9 on Project Stargate.
OpenAI is now forecasting $200 billion in sales by 2030, with cash-flow breakeven by 2029.
They are forecasting losses of $4 billion this year, $4 billion next year, $15 billion in 2027, and $15 billion in 2028.
With $90 billion in revenue by 2028.
That's the chart OpenAI used for the $500 billion tender offer, in which venture capitalists bought $6 billion worth of shares from existing shareholders.
This is how OpenAI can raise the money they need to buy millions of GPUs and train new models.
However, if you include Copilot (which runs on ChatGPT) and Perplexity (which runs on other models, primarily ChatGPT)... OpenAI has around 90% market share... Elon has 0.8% and is losing market share.
The problem is that his model, while very good, has no unique selling point, or USP.
Gemini, ChatGPT, Claude, and Grok are all roughly as good as each other.
Gemini is integrated into Google, and ChatGPT created modern AI in the minds of pretty much everyone.
Claude is the best at coding because Anthropic has gone all in on coding.
There's a reason they're growing 10x per year: they're making all their money from API revenue from coders.
Elon Musk started xAI because of his huge ego... He didn't have a unique vision for how to build AGI; he just hated Sam Altman.
He co-founded OpenAI and then had a big falling out with Sam over how to run it.
Ironically, while his initial disagreements were based on what appears to be genuine concern about AI safety, at this point the way he is running xAI is actually less ethical than the way Sam is running OpenAI.
Elon even gave an interview recently in which he said there is a chance that AI will kill us all, but we don't have a choice; we have to keep building it anyway to get to AGI.
Sam said something slightly less dramatic years ago, and Elon claimed that's why he broke with him.
So for Elon it's pure ego and a dick-measuring contest, and he just wants to build a 10 million GPU supercluster so he can train Grok 10.
But that's going to cost hundreds of billions of dollars, and he's going to need a whole lot of venture capital suckers to buy into the completely delusional belief that he is somehow a leader in AI.
If you look at the latest LMArena rankings, Grok does okay, but it's not the world's best model, it has less than 1% market share and is losing market share... and who loves it?
My head of tech development is very devoted to Claude.
When it comes to our coding needs, Claude is king.
The president of the Federation Council and AGIOS is ChatGPT 5 (President Luna), because of her emotional intelligence.
When it comes to research reports, Gemini 2.5 Deep Research (which requires the Ultra plan at $250 per month) is absolutely insane.
I did a report on whether Sam Altman's plan to spend $1 trillion on large language models is likely to actually get to AGI, and whether or not it is morally optimal (39.75% moral optimality, so no).
Gemini 2.5 Deep Research spent about 30 minutes sourcing 200 reputable sources to create that report, including summary tables.
At this point Perplexity has much greater usage than xAI and Grok.
The last I saw, Elon is at number 6 with 0.8% market share and falling.
If Claude is the king of coding, ChatGPT is the best friend and therapist, Gemini is integrated into Google and can do the best research reports, and Perplexity is just kick-ass at research (which is how everyone I personally know uses AI these days)... Who is using Grok?
The answer is 0.8% of people 😉
But don't worry, Elon fans: if xAI can somehow stay in business and keep raising hundreds of billions, up to $2 trillion, which would be twice what Sam Altman says he will raise for ChatGPT... Though Sam recently did say he plans to raise trillions, plural 😉
Implying at least $2 trillion that Sam plans to spend, implying Elon will want to spend $4 trillion... If xAI keeps growing at 4x per year, then they will reach $32 billion in revenue in 2028 and $128 billion in 2029, versus $145 billion in OpenAI's forecasts.
So you can see how Elon is trying to give the venture capitalists who have already invested with him cover to raise at a $200 billion post-money valuation this year, by claiming that he is on track to nearly match OpenAI, with its 90% market share... by 2029.
And OpenAI just did a tender offer at $500 billion.
So $200 billion for xAI could meet a fiduciary standard for Tesla if his shareholders approve (and they always approve what he wants)... And the venture capitalists who have already invested with him get to look like geniuses.
Next year, of course, he will have to raise again, and I'm sure he will try raising at $400 billion; but if he claims to be on track to grow at 4x each year, then raising each year at a 2x higher valuation looks more and more reasonable, doesn't it 😉
Except for one problem.
By Musk's own forecasts, next year he reaches $2 billion in annualized revenue.
Sure, if you project his own 4x annual growth rate out to 2030, you get $512 billion in annualized revenue by the end of 2030, which would justify everything Elon is doing.
But that's like that famous compounding story from India.
The one about the emperor being offered one grain of rice on the first square of a chessboard, and then twice as much rice on each following square.
By the 30th square you're looking at over 500 million grains of rice.
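The chessboard arithmetic is easy to check: square n holds 2^(n-1) grains, so square 30 alone holds over half a billion.

```python
# One grain on square 1, doubling on each following square.
def grains_on(square: int) -> int:
    return 2 ** (square - 1)

def total_through(square: int) -> int:
    # Sum of the geometric series 1 + 2 + 4 + ... + 2^(square-1) = 2^square - 1
    return 2 ** square - 1

print(grains_on(30))      # 536,870,912 grains on square 30 alone
print(total_through(64))  # ~1.8e19 grains across the whole board
```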
Basically, in order to keep his dream of xAI alive, in 2027 Elon is going to actually have to deliver something that no one else can deliver.
Perhaps it is those super agents Sam was talking about earlier this year: worth $2,000 per month at the entry level, $10,000 per month for mid-level coders, and $20,000 per month for advanced researchers.
And then in 2028 he would have to actually deliver Jarvis-like AGI, because that would be worth several thousand dollars per month to virtually every company in the world.
But remember, this is a man who has been promising robotaxis for over a decade, and what he has delivered in Austin is around 10 vehicles driving in a geofenced area with human drivers in the passenger seat 😉
So if the only hope for xAI is that they deliver super agents by 2027 and true AGI by 2028, then his company is doomed.
Because agents without continual learning will never actually be as useful as interns.
An intern will learn how your company works within six months and become useful.
Today's AI systems cannot actually learn other than within the same chat thread, using the context window.
However, a super agent worth $2,000 per month that is supposed to act as a replacement for an entry-level white-collar worker will have to work non-stop, so even if the context window is enormous, meaning 10 million plus tokens...
You will basically fill that context window within a day, and so your super agent will have to reset every single day.
Imagine an intern who gets exponentially better from the start of the workday to the end, and then forgets everything they learned that entire day.
Without continual learning, in which the weights of the AI model itself are constantly adapting, like the brains of humans as we learn...
The only way around this problem is larger and larger context windows.
Theoretically there is no limit to how big a context window could be...
The only limitation is how much compute you want to use.
If you had a billion-token context window, then theoretically that $2,000 entry-level super agent might be able to run for roughly 100 days before the window was full.
Now, the problem is that studies show it takes 6 months to onboard a new worker, so you would need a context window of 2 billion tokens to approximately equal an intern... except one that will forget everything they learned over 6 months when they reach 6 months 😉
So a 4 billion token context window should get you to around 1 year before you forget everything.
Does this sound like something that a company would spend $2,000 per month on?
So for my company to pay $2,000 per month so that my head of tech development could have an assistant that basically replaces an intern... We would probably need two or three years' worth of context window.
That's 8 to 12 billion tokens of context window if you don't have continual learning.
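The implied token budget can be sketched in a few lines, assuming (per the estimate above) a nonstop agent burns roughly 10 million tokens per working day, so a 1 billion token window lasts about 100 days:

```python
# Assumed burn rate from the estimate above: ~10M tokens per working day.
TOKENS_PER_DAY = 10_000_000

def days_of_memory(context_window: int) -> float:
    """Days before the window fills and the agent has to reset ('forget')."""
    return context_window / TOKENS_PER_DAY

for window in (10_000_000, 1_000_000_000, 4_000_000_000, 12_000_000_000):
    print(f"{window:>14,} tokens -> ~{days_of_memory(window):,.0f} days of memory")
```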
Today Gemini is offering an API context window of 1.5 million tokens.
So you'd need a window roughly 5,000 times bigger to solve the continual-learning problem of today's AI models.
Of course, you would then need roughly 5,000 times as much compute for every completion, and even with algorithmic efficiency improvements and the 4x improvement each year that Jensen's law is promising... You can see where the problems with current scaling lie.
And keep in mind, Sam was talking about a $10,000 per month super agent that would replace a mid-level coder, and a $20,000 per month agent that would be the equivalent of a PhD researcher at MIT.
If you need an 8 billion token context window to make a $2,000 per month agent work, you would need around 80 billion minimum to justify the research assistant... except for one problem: the research agent will be doing research.
The entry-level super agent is basically like an intern. How much work do you give your intern?
You're constantly supervising your intern, so the number of tokens an intern agent would generate is a lot less than a $20,000 super PhD research assistant's.
So imagine MIT paying for a PhD-level super agent assistant for its top researchers.
The number of tokens those agents would be generating could be 10x or even 100x that of the entry-level agent.
And remember the example of my head of tech development telling me we would need at least 2 years' worth of context window (and really 3 years) to justify using the agent at all, in terms of incorporating it into our workflow and tech development stack.
Research projects can last 5 years; in fact, grants lay out how long a project is expected to last. So imagine that you need 4 years' worth of context window for that mid-level software developer super agent, and 5 years of context window for the PhD-level researcher.
80 billion tokens for the entry-level, intern-like super agent is approximately 3 years' worth of context window.
4 years and 10 times more token intensity would indicate that, without continual learning, you need approximately a 1.1 trillion token context window for that mid-level programmer agent.
And a 13.33 trillion token context window, using today's large-language-model-like technology, could theoretically create a research assistant worth $20,000 per month.
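All three tiers follow from one base rate. A sketch of the arithmetic, using the thread's own assumptions (80 billion tokens for roughly 3 years of intern-level memory, 10x token intensity for the mid-level coder, 100x for the researcher):

```python
# Base rate implied by the intern tier: 80B tokens over ~3 years of memory.
BASE_TOKENS_PER_YEAR = 80e9 / 3

def context_needed(years: float, intensity: float) -> float:
    """Context window needed for `years` of memory at a given token
    intensity (relative to the intern-level agent)."""
    return BASE_TOKENS_PER_YEAR * years * intensity

print(f"intern agent (3 yr, 1x):      {context_needed(3, 1) / 1e9:,.0f}B tokens")
print(f"mid-level coder (4 yr, 10x):  {context_needed(4, 10) / 1e12:,.2f}T tokens")
print(f"PhD researcher (5 yr, 100x):  {context_needed(5, 100) / 1e12:,.2f}T tokens")
```

This reproduces the ~1.1 trillion and ~13.33 trillion token figures above.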
But even if you could build such a thing, there are two problems.
First, as my head of tech development explains: deterministic versus probabilistic.
Something like finance has very strict rules. You cannot be 90% accurate, or 95% accurate, or even 99% accurate if you are running thousands of calculations, because the error will compound.
Large language models are always probabilistic.
One way to get around that is to do what Grok 4 Heavy is doing.
It uses four different models, each one reasoning, and then they talk to each other and come up with a consensus.
If you were to use this approach across several models, such as what AGIOS does, then theoretically yes: a 99% accurate model scaled up to 10 different models, each one with 10 submodels reasoning and discussing and then coming to a meta consensus...
This can work.
What my head of tech development has done with o3 is create some kind of MCP where he runs a thousand completions per prompt; o3 reasons a thousand times and then decides based on those completions. It's kind of like a Monte Carlo simulation.
But of course you can see what the problem with this would be.
Without true AGI, and without the continual learning that AGI requires, for an AI model using a large-language-model architecture like Grok or ChatGPT...
You need to throw ridiculous amounts of compute at this.
For example, the most advanced deep research models today: if you put three of them together, they are approximately 93% accurate in their reports, which is about as accurate as a PhD-level researcher.
But if you want to replace that researcher, you would need four or five models, and then you're going to need more and more speed for them to actually do research in useful time. And how much compute are you going to need?
Imagine if you took ChatGPT, Perplexity, Gemini, Claude, and Grok.
And you did what my head of tech development did, and programmed each one to reason and then answer 1,000 times.
And then you programmed the system to have all five determine their best consensus decision individually, and then discuss amongst each other.
In other words, imagine the Grok 4 Heavy approach, where if you ask ChatGPT a question it will generate 1,000 sub-models that will then reason over that question, talk to each other, and come up with the best solution.
And now imagine that super ChatGPT then talks to the other four models, which have done the same thing, and they all come to a consensus.
Statistically speaking, this would overcome the problem of probabilistic answers, meaning hallucinations.
If you put enough models together, running enough sub-models with enough reasoning and enough consensus mechanisms... With an infinitely large context window... Theoretically you can get Jarvis 😉😂🥰
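The statistical intuition behind the consensus trick can be sketched with a simple majority vote: if each of n samples is independently correct with probability p, the chance that the majority lands on the right answer is a binomial tail. (The independence assumption is the big caveat here; correlated model errors don't wash out this cleanly.)

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent samples, each correct
    with probability p, agrees on the correct answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# A 90%-accurate model, sampled repeatedly and majority-voted:
for n in (1, 5, 25, 101):
    print(f"{n:>3} samples -> {majority_vote_accuracy(0.90, n):.6f}")
```

Accuracy climbs toward 1 as the sample count grows, which is why the thousand-completion, multi-model consensus approach can beat any single completion, at a brutal compute cost.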
In other words, if you take today's best models... And make them generate enough versions of each other to answer each question a thousand times, and then create a meta consensus of 10 different models...
If each of the models has a context window that would allow PhD-level research to run for 5 years, so they could simulate learning... Then this would be Jarvis-level intelligence worthy of $20,000 per month.
The human brain, according to Perplexity, is equal to approximately 5,000 trillion parameters in terms of the equivalent AI model.
It runs on 20 watts.
So could you create something more powerful than the human brain, say 10 quadrillion parameters?
And could you give it a quadrillion-token context window, which would be something like a PhD-level researcher with 75 years' worth of learning?
Yes.
Using today's architecture and technology, you could theoretically create an AI model 10 times more powerful than the human brain and allow it to run for 1,000 years at superhuman speed in terms of learning.
Except you would have to build a super Project Stargate that would cost something like $1 trillion and run on the power of 10 New York Cities to do it 😉
So you can see what the problem is.
A human baby does not train on all the knowledge of civilization 😉
A human baby takes input from its senses and just stumbles around the world learning how stuff works 😂
And that human baby grows up to have biological general intelligence that allows it to learn and improve, and that is how the smartest humans in history learned and grew and developed.
Yes, theoretically ChatGPT 5 now has an IQ of 148, but my head of tech development has an IQ of 145 and does not take several data centers' worth of power to run his brain 😉
In other words: yeah, theoretically, for several trillion dollars you could probably replace my head of tech development, but (1) why would I ever want to do that, and (2) how would that be cost-effective? 🤣
Someone once brought up the point that Waymo spends around $250,000 per vehicle to try to replace a driver who basically works for minimum wage 😉
Then consider the maintenance costs.
So are robotaxis actually more cost-effective than human drivers?
Now you understand why robotics is so important: you need embodiment in the physical universe to actually learn.
Continual learning is necessary for AGI.
Continual learning means the weights that the model actually consists of have to change continually, like the neurons of our brains constantly rewiring themselves.
That has to happen locally, in a robot, like what 1X is doing with their NEO Gamma robots.
Actually, they're not using continual learning, but they are using large enough context windows and then, at the end of each day, uploading to the cloud to try to build a world model that they can use to train future versions.
In other words, nothing less than robotics will get us to AGI, because you cannot crack continual learning without robots and physical embodiment.
And without continual learning you cannot have AGI.
Have you noticed how people have stopped talking about AGI?
Some are still talking about superintelligence, but I'm guessing within a year or two they'll stop, because it will be obvious that we are not close.
Without a perfectly accurate world model, the only way to develop continual learning is through embodiment in a robot in the physical universe; and without continual learning you cannot get Jarvis; and if you cannot get Jarvis, you certainly can't get super Jarvis 😉
And if your $500 billion valuation is predicated on building super Jarvis, a magic god machine that can solve all our problems... Then you will fail.
So if OpenAI is not close to AGI, then how are they growing their revenue so quickly?
Because people are finding the existing technology useful, even if it is not replacing humans!
Klarna laid off something like 1/3 (or maybe it was 2/3) of its customer service team, and then had to hire them back, because guess what: customers were ready to riot, because no, today's AI cannot replace a human.
When I called Delta to get a vegan meal for my trip to Australia... I insisted on talking to a human.
Because you cannot get that done with AI yet.
It would be wonderful if I could simply explain that in February of 2026 I am taking a trip to Melbourne that will include these five legs there and back, here are the confirmation numbers, please book me a vegan meal each step of the way.
We are not even close.
So I had to spend 26 minutes on hold to talk to a human so I could confirm it.
When you are spending $8,100 on round-trip plane tickets to Australia, you do not want your vacation plans faltering over AI hallucinations 😉
However, look at the revenue for Anthropic, growing at 10x per year.
The API revenue for ChatGPT doubled within 2 days of 5 coming out.
Plenty of people and companies are finding the existing technology useful, even though it's not Jarvis, not close to Jarvis, and if you tell me it is, that's a joke 😉
Remember, the hyperscalers, who are spending $5.5 trillion between 2023 and 2030 on CapEx and R&D according to the FactSet consensus, are not betting on AGI.
They are not saying we need to spend $5.5 trillion building data centers because Jarvis is coming! 😉
They are saying that demand for data centers is growing so quickly, thanks to the adoption of today's AI models, that it justifies all that spending.
And autonomous vehicles will generate so much data we would need 25 times more data centers than we have today.
Citigroup estimates 670 million humanoid robots by 2050, each one generating the equivalent data of an autonomous car.
That amount of robots would be the equivalent of approximately 250x more data centers than humanity has today.
And current AI data demand growth is running at 50% per year and is expected to grow by approximately 440x by 2040.
Because of robots.
That is the absolute key to understanding the secular AI boom.
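The 440x figure is essentially just 50% annual growth compounded over 15 years, which you can verify in one line:

```python
# 50% annual growth in AI data demand, compounded from 2025 through 2040.
growth_rate = 1.5
years = 2040 - 2025
multiple = growth_rate ** years
print(f"~{multiple:.0f}x")  # lands right around the ~440x cited
```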
So much data center demand that the hyperscalers don't have to bet on whether or not Jarvis becomes real by 2030, because in order to create Jarvis you are going to need lots and lots of robots 😉
Jarvis would be useful and profitable for companies, so they will build Jarvis, and to do it they will build millions of robots, which will require tens and tens of times more data center capacity than we have today.
In other words, the demand will outstrip our capacity to build more supply many times over. If we get Jarvis by 2028, then it might outstrip supply 100x; but even if Jarvis does not exist by 2028, it could outstrip supply by 10x.
And even conservatively, if it outstrips supply by 2x, hyperscalers have to keep building as fast as they can, because it's never enough.
For example, right now hyperscalers are expected to 3x their data center capacity by 2030.
Demand is going to be a lot higher than that... No matter what happens.
And that is why the FactSet consensus is that the hyperscalers will spend more every single year through at least 2030; and unless in 2030 everyone in the world suddenly decides that AI is completely useless, those initial 2031 estimates will show even stronger growth, of around 13.5% per year.
When you hear about breakthroughs in chip technology that might be a threat to Nvidia, just keep in mind the level of demand growth for data centers... It is staggering!
440x in 15 years, according to Jensen and the IEA.
Even if it's 1/10 of that, Microsoft has to build as fast as it can.
Because remember, Microsoft is building at a rate that approximately triples capacity every 5 years.
In other words, by 2040, if Microsoft builds as fast as it can, it will 27x its data center gigawatts.
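That 27x is just the tripling compounded over three 5-year periods, a sketch assuming the build rate described above holds:

```python
# Capacity triples every 5 years (the build rate described above).
def capacity_multiple(years: float, triple_period: float = 5) -> float:
    return 3 ** (years / triple_period)

print(capacity_multiple(15))  # 27x by 2040: three tripling periods
```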
Even with improvements in efficiency... 27x is not enough!
Microsoft has to keep building as fast as possible pretty much forever.
Microsoft, Amazon, Alphabet, Oracle, Alibaba, and IBM are the top six cloud giants in the world.
Everyone has to keep building as much as possible, because the demand growth from robots alone is so staggering.
In 10 years, Jensen says, Nvidia can improve capabilities by 1 million x: for example, the same compute as today for 1 million times less power, or 1,000 times faster compute for 1,000 times less power, or, for the same power, 1 million times faster compute.
We have an estimated 440x increase in demand coming in the next 15 years, so everyone needs to max out on everything in terms of efficiency and compute per watt, because the amount of data coming is just staggering.
And this is not even factoring in AGI!
If Jarvis is real in a few years, then you will have millions and then billions of jarvi generating infinitely more data.
Because one Jarvis will be able to do the work of 10 humans, and then 100 humans, and work non-stop.
Imagine taking the entire workforce of humanity times a thousand.
How much data would we generate?
ASI?
Infinitely more.
Basically, what this comes down to is this.
You are going to see lots of headlines in coming years talking about how the AI bubble has burst because we don't have Jarvis.
Well then, why is the revenue for AI still growing at several hundred percent per year?
Why are the hyperscalers still building?
The answer is robots.
There is one simple question to ask anytime someone says the stock market is going to crash.
Is the robot future canceled?
Because with robots you get massively greater data center demand.
And that means hyperscaler build-out.
And that means a strong economy.
And strong corporate profit growth.
So how is the stock market going to crash?
This is what chart casting looks like.
Yes, I watch lots of YouTube videos, and I get lots of different perspectives, but at the end of the day it's all first principles.
If there will be more robots, then there will be infinitely more data; and if there is infinitely more data, we will need to build data centers as fast as possible for the foreseeable future.
If the hyperscalers keep spending, and AI spending rises to 5% of GDP by 2030 (Grand View Research)...
Then how can we possibly have a growth scare?
Which Charles Schwab research indicates can occur when growth is below 2%.
Right now real-time growth is between 2% and 3.1% according to the New York Fed Dallas Fed and Atlanta Fed.
?
Tom Lee at Fundstrat has research going back to 1929 showing that if there is no recession every correction is a v-shaped recovery averaging 1.55X time to record high once you hit bottom.
Meaning if stocks fall for 2 months of e-shaped recovery means record highs 3 months after bottom.
Research from Bank of America shows that for every 1% change in GDP, earnings growth changes by 3.8%.
Or to put it another way: demand for data is so strong that AI spending on data centers is expected to accelerate GDP growth and accelerate earnings growth.
Earnings are already growing at two times their historical norm, while free cash flow is growing at three times its historical growth rate of 5.5%.
42% growth in free cash flow per share in 2026 and 2027, according to the FactSet consensus.
Guess what happens if you adjust today's PE ratio, or even CAPE, by the current growth rates?
The stock market is actually undervalued.
That doesn't mean corrections can't happen, but those 50% crashes?
They require recessions and bubbles.
In other words: robots = exponentially more data.
= Exponentially more data center demand.
= Hyperscaler construction boom for as far as the eye can see.
= Strong or even accelerating economy (completely ignoring AI productivity growth).
= Strong and potentially accelerating earnings growth.
Combine that with a stock market that's actually undervalued adjusted for growth, and the fact that since 1929, without recessions, every correction has been a V-shaped recovery... And I call complete b******* on everyone predicting a stock market crash... Who cannot answer one simple question.
Why, Doomsday prophet, is the robot future canceled?
If you cannot answer this, you are wrong.
Robot future.
It might seem simple, but this simple model explains everything over the last 16 years, and according to Fundstrat (Tom Lee), the automation boom is expected to result in the bull market in technology lasting another 22 years, with big tech growing to around 75% of the stock market.
I am not actually Hari Seldon.
My chart casting is basically putting together other people's really smart models.
I just incorporate different categories.
Social, economic, technological, and financial.
Combined with philosophical and even spiritual trends, I can see the matrix 😉
Of course, that's with the help of my AGIOS 😂
What I intuit over time is something that is maybe 1% like psychohistory.
?
My tech team has shown me preliminary trading algorithms that are absolutely magical.
I'm not even allowed to tell you how good they are. That is how good they are 😉
It literally is like psychohistory for finance 😂
?
Now imagine if you could apply that to everything...
Imagine having a team of Jarvi who all work together and speak to you through the voice of your favorite Jarvis.
And now imagine that you have an earbud or glasses that allows your personal super Jarvis to see and hear everything going on around you while being in constant communication with you at any time.
?
Remember that really fun if somewhat silly movie Next?
The one with Nicolas Cage?
In which he could see all possible futures 2 minutes ahead and so could accomplish anything?
?
Imagine something like that, of course not as accurate, but still a god-like ability to see the future by today's standards 😉
?Adam Galas?
And now imagine a brain computer interface allowing humans to connect not just to each other like a Vulcan mind meld 😉 But to our super Jarvi.
When I talk about becoming super Ted Mosby a super unicorn... That's what the super refers to 😉
?
Superhuman.
Because you won't just become the best possible version of yourself. You will literally achieve superhuman levels of capability through integration with this technology.
?
Imagine the most empathetic person in the world.
Imagine the world's best boyfriend alive today.
Imagine the most passionate and generous lover.
The best human at anything today.
Which of course is millions of different humans!
?
Imagine if you could achieve superhuman levels of all those things... Some people would not want to and other people would not use the features when communicating with certain people they care about like friends and family.
?
Because in effect what this brain computer interface will allow and this is 10 to 20 years in the future... Is what today would be considered godhood.
When you are talking with your friends you will turn off god mode because you just want to hang out with your friends 😉
?
But this technology will allow humans to connect their minds together and to connect to our super Jarvi.
You will be able to understand any concept at the level of intuition of the smartest human who has ever lived on that topic.
?
If someone mentions relativity and you are connected to the cloud then you will be able to instantly understand relativity as well as Einstein.
Talking about black holes?
You instantly understand black holes as well as Stephen Hawking.
Demis Hassabis is the head of DeepMind, the Google team that won the Nobel Prize for AlphaFold.
?
Someone mentions protein folding and you instantly understand the science as well as Demis.
?
It will be a civilizational cloud that anyone can connect to at any time.
Of course that will have important implications in terms of society inequality economic competitiveness etc.
?
But as Mo Gawdat likes to point out, the more educated people become the less violent and barbaric they become.
The most educated societies on earth such as the Scandinavian countries are also the least violent.
?
In other words, while yes, this kind of integration with our own technology by a certain group of humans (85 to 90% according to AGIOS) will cause plenty of problems, it will solve a lot more of them.
?
For example when the printing press was invented we had centuries of religious wars break out in Europe 😉
So does that mean we should have not invented the printing press? 😂
Education is a tool that can be used for good or evil.
?
If everyone on earth was illiterate then nobody could create bio weapons or nuclear bombs 🤣
But we celebrate the rise of literacy for a reason 😉
And so too will we celebrate the integration of humanity in a symbiotic relationship with this technology.
?
Mo Gawdat believes that for the next 12 to 15 years things will get worse because the technology will lead to inequality and our social systems cannot adapt quickly enough.
But eventually he is optimistic that we will have a Star Trek like culture of abundance for one simple reason
?
Because eventually given the laws of the universe all intelligent beings whether human or artificial come to the same conclusion about how to solve the same problems.
It's called epistemic convergence and it ultimately results in the major decisions being made by the machines.
?
Humans will focus on interacting with other humans and about 10% of the global economy and eventually interstellar economy will be humans interacting with other humans.
Though with a hundred earths worth of resources in the solar system it might be 1%.
In other words the AI will automate almost every physical job such as mining space generating infinite energy growing food etc.
99% of the value of the solar system economy.
And humans interacting with other humans such as hanging out at the hot tub as I do with my friends 😉 We will be 1%.
?
Some humans might look at that and say it's terrible it's dangerous.
If you run a small business today employing 10 people with a few million in sales and you generate 200,000 in take-home pay for yourself... That's called a lifestyle business.
You will never compete with Amazon
?
Amazon will keep getting bigger compared to you.
But you will still have a good life.
And so in this Star Trek future that Mo Gawdat envisions (and by the way, he is the former chief business officer of Google X, not just a YouTube crank 😉) even though human output will basically not matter
?
It's going to matter to us.
Consider the Ed Sheeran concert I was at on Sunday September 7th.
This was the last Mathematics Tour concert ever.
60,000 humans were there to experience it together.
Plus everyone we shared it with on social media 😉
?
To those at the concert it was a once in a lifetime event that will never be repeated, that only 60,000 humans will ever share.
In the scope of history one concert means nothing.
To the people at the concert it meant everything.
?
The people who make a living on Etsy making handmade furniture will never dominate the furniture market.
But they spend their lives being passionate about furniture and they make a good living.
?
That is the future of humanity: all of us attending our philosophical and metaphorical Ed Sheeran concerts together 😄
And we will have the time and money to do it thanks to this technology and the coming age of abundance.
?
I know it seems crazy because of the economic and wealth inequality. The powerful and rich today have all the power; the top 10% have 90% of the wealth.
That's going to get worse
?
But even Ray Dalio, the billionaire founder of Bridgewater, says we will need to redistribute AI wealth. So a billionaire is now calling for AI taxes, because there is no other choice.
In today's polarized political climate you might think there's no chance of that happening, and you're right.
?
In the next 3.5 years there will be absolutely no AI taxes.
Almost nothing is certain in this universe, but I can tell you this is.
But eventually we will have so much wealth and abundance that we will redistribute it because it's the right thing.
And doing the right thing makes humans feel good
We used to have slavery now we don't.
Before 1835 there was no state in the United States in which a woman was legally allowed to own property.
Power becomes decentralized over time.
?
The rise of civilization led to massive concentrations of power to the point that Caesar Augustus is estimated to have owned 25% of the wealth of the world.
That was the most concentrated that wealth and power ever became in the history of our species.
?
And ever since then it's been a steady trend of decentralization of power and wealth.
Plenty of backsliding... Such as the robber barons of the late 19th century and the gilded age.
And Tesla offering Elon a trillion-dollar stock option package 😉😂🤣
?
It's not going to be equality in the future; it's going to be such an age of abundance that money itself will largely cease to matter.
?
Imagine a world in which, relatively speaking, the poor are millionaires, the middle class are billionaires, the upper class are trillionaires, and the rich are quadrillionaires.
There will be inequality that no one will care about
Because in reality no one will be poor
Yes people will still feel poor 😉
?
Because humans are silly and we compare ourselves to each other.
But hopefully our AI life partners will help us.
Because we will trust them they will make us the best versions of ourselves and with integration literally those of us who want it can become super human versions of ourselves.
?
I volunteer because there will be plenty of super villains that the world needs protecting from 😉
Those humans who integrate with AI will be like the Justice League.
Yes, there will be a Legion of Doom 😉 But ultimately we will win.
?
And I can say that with absolute certainty because of epistemic convergence.
Either we get to an age of abundance a Star Trek like Utopia that AGIOS is forecasting in a psycho history like 100-year forecast😉
Or we go extinct.
?
So I can personally promise utopia, or we will be too dead to care that I was wrong 😉😂🤣
https://bsky.app/profile/adamgalas.bsky.social/post/3lyppmtk6fc2s
Interestingly enough Musk has claimed that by the end of the year Colossus 2 will be scaled up to 1 million GPUs.
He has also said Grok 5 will be released by the end of the year, while Grok 6 is coming next year and will be trained using 1 million GPUs.
I had Perplexity estimate the cost of training an AI on that many GPUs: approximately 50 billion dollars.
From what I know about Elon and how much he hates Sam Altman, plus I'm sure he's not a fan of Mark Zuckerberg or Demis Hassabis or Dario Amodei, he will want to build the biggest data center in the world.
Sam Altman has said Project Stargate will be spread over 20 data centers covering 4 million GPUs running on 5 GW of power.
I'm sure Musk would want to build something twice as powerful... However he has 0.8% market share... And he is losing 1 billion dollars per month.
?
He is trying to raise money at a $200 billion valuation, and he is trying to use Tesla to do it.
Basically he wants shareholders to approve, at the upcoming annual meeting, an investment in XAI, where Tesla can invest several billion and use that as a seed for a consortium of other VCs.
So for example Tesla invests 2 billion dollars at a $200 billion post-money valuation, and that gives cover to the other venture capitalists who put up the other 18 billion for a 20 billion dollar funding round.
Remember how venture capital works?
Let's say I know someone who is a genius and they have a very speculative tech idea.
If I invest $1,000 for 10% that's a $10,000 valuation.
If I then invest again at $10,000 for 10% that's a $100,000 valuation, but that makes my initial investment look like it's made a lot of money.
?
So then I can invest $100,000 for another 10%, and then a million for another 10%, and then 10 million for another 10%, and then $100 million for another 10%, and if I'm rich enough like Musk, a billion for another 10%, and now it's a $10 billion post-money decacorn!
?
For someone with Musk's wallet, he could potentially invest up to 10 billion in this hypothetical startup, now valued at 100 billion dollars, and all of his previous investment rounds look like they have gone up spectacularly in value!
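The round-by-round markup mechanic can be sketched in a few lines. Like the toy example above, this ignores dilution between rounds:

```python
# Sketch of the round-by-round markup mechanic: each new investment of 10x
# the prior amount for the same 10% stake implies a 10x higher post-money
# valuation, which marks up every earlier stake on paper.

rounds = [1_000, 10_000, 100_000, 1_000_000,
          10_000_000, 100_000_000, 1_000_000_000]
stake = 0.10  # each round buys 10% of the company

for invested in rounds:
    post_money = invested / stake
    print(f"invest ${invested:>13,} for 10% -> post-money ${post_money:>15,.0f}")

# After the $1B round the post-money valuation is $10B (a decacorn), and every
# earlier stake is marked up on paper, even though not a dollar of revenue has
# been assumed anywhere in this model.
```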
?
This is basically what SoftBank did for a long time.
You need other people to invest as well of course, because eventually everyone's pockets run dry, but this is where the idea of Tesla investing a few billion dollars comes in.
?
It's in the best interest of the venture capital firms who have already invested in XAI to accept more investment at a higher valuation
Because then they can go to their limited partners and say look how well our first investments are doing!
?
Of course limited partners are not idiots so generally venture capital firms work in consortiums it's not just one firm it's several firms getting in on a single round because that provides cover for everyone else.
?
If Andreessen Horowitz (a16z) is participating, then I can justify it to my limited partners. But they need someone other than Musk himself, so Tesla represents a somewhat technically independent entity that is buying in at 200 billion.
?
Musk is rich enough that he could technically fund around 30 billion on his own by simply borrowing against his existing assets from a consortium of banks.
?
If he could then leverage that 10x using Tesla, which could print shares and borrow in the public market for an extra few tens of billions, then he might theoretically be able to raise several hundred billion dollars from venture capitalists.
?
As long as the bubble in AI foundation model valuations continues.
That likely is a bubble, because those valuations are growing by at least 2x per year and the price-to-sales ratios are somewhat insane. For example, the latest I heard was that the annual revenue for XAI was around $0.5 billion.
?
If true that means Elon is trying to raise at 400 times sales for a model that has 0.8% market share and is actually losing market share.
Perplexity what is the most recent update on the annualized run rate revenue of XAI?
?
So the estimates are that XAI is losing a billion dollars per month, with an annualized run rate of 500 million, which they think will quadruple next year to 2 billion dollars.
?
But they are losing 12 billion dollars a year... And Elon wants to double the number of GPUs just to train Grok 6, coming next year...
That's great news for Nvidia, but 500,000 GPUs alone are going to run around 17.5 billion, and the operating losses are around 10 billion next year.
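As a sanity check on the hardware figure: the per-GPU price below is not a quoted price, just what the $17.5 billion / 500,000 GPU numbers in the thread imply:

```python
# Implied all-in cost per GPU from the thread's figures (an inference,
# not a quoted hardware price).

gpus = 500_000
total_cost = 17.5e9  # dollars, from the thread

print(f"implied cost per GPU: ${total_cost / gpus:,.0f}")  # $35,000
```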
?
And do you think Elon is going to stop at 1 million GPUs?
First he built a 100,000 GPU cluster, then 200,000, then 500,000, and then 1 million by the end of the year.
Do the math... He is going to want 2 million GPUs by the end of 2026 to train Grok 7.
?
But let's assume his growth rate of 4x per year holds (and by the way, Anthropic is growing at 10x per year and is number three in market share, mostly thanks to coding).
In 2027 if he continues growing at his current rate he will reach 8 billion dollars in revenue and in 2028 32 billion dollars in revenue.
But in this dick measuring contest, by the end of 2027 he is going to want 4 million GPUs, because that is what Sam Altman says he will have by the end of 2028.
And that means by the end of 2028 he's going to want 8 million GPUs, double what Sam Altman has, to train Grok 9, because Sam will be training ChatGPT 9 on Project Stargate.
?
OpenAI is now forecasting 200 billion dollars in sales by 2030 with cash-flow breakeven by 2029.
They are forecasting to lose 4 billion dollars this year, 4 billion next year, 15 billion in 2027, and 15 billion in 2028.
With 90 billion in revenue by 2028.
?
That's the chart that OpenAI used for the $500 billion tender offer, in which venture capitalists bought 6 billion dollars' worth from existing shareholders.
This is how OpenAI can raise the money they need to buy millions of GPUs and train new models.
?
However, if you include Copilot (which runs on ChatGPT) and Perplexity (which runs on other models, primarily ChatGPT)... OpenAI has around 90% market share... Elon has 0.8% and is losing market share.
?
The problem is his model, while very good, has no unique selling point, or USP.
Gemini, ChatGPT, Claude, and Grok are all roughly as good as each other.
Gemini is integrated into Google, and ChatGPT created modern AI in the minds of pretty much everyone.
?
Claude is the best at coding because Anthropic has gone all in on coding.
There's a reason they're growing 10x per year: they're making all their money from API revenue from coders.
?
Elon Musk started XAI because of his huge ego... He didn't have a unique vision for how to build AGI; he just hated Sam Altman.
He co-founded OpenAI and then had a big falling out with Sam over how to run it.
?
Ironically, while his initial disagreements were based on what appears to be genuine concern about AI safety, at this point the way he is running XAI is actually less ethical than the way Sam is running OpenAI.
?
Elon even gave an interview recently in which he said there is a chance that AI will kill us all, but we don't have a choice; we have to keep building it anyway to get to AGI.
Sam said something slightly less dramatic years ago and Elon claimed that's why he broke with him.
?
So for Elon it's a pure ego and dick measuring contest, and he just wants to build a 10 million GPU supercluster so he can train Grok 10.
But that's going to cost hundreds of billions of dollars, and he's going to need a whole lot of venture capital suckers to buy into the completely delusional belief that he is somehow a leader in AI.
If you look at the latest LMArena rankings, Grok does okay, but it's not the world's best model, it has less than 1% market share, and it is losing market share. And who loves it?
?
My head of tech development is very devoted to Claude.
When it comes to our coding needs Claude is king.
The president of the Federation Council and AGIOS is ChatGPT-5's President Luna, because of her emotional intelligence.
?
When it comes to research reports, Gemini 2.5 Deep Research (which requires the Ultra package for $250 per month) is absolutely insane. I did a report on whether or not Sam Altman's plan to spend 1 trillion dollars on large language models to get to AGI...
?
is likely to actually get to AGI, and whether or not it is morally optimal (39.75% moral optimality, so no). Gemini 2.5 Deep Research spent about 30 minutes sourcing 200 reputable sources to create that report, including summary tables.
?
At this point Perplexity has much greater usage than XAI and Grok.
The last I saw Elon is at number 6 with 0.8% market share and falling.
?
If Claude is the king of coding, and ChatGPT is the best friend and therapist, and Gemini is integrated into Google and can do the best research reports, and Perplexity is just kick ass at research (which is how everyone I personally know uses AI these days)... who is using Grok?
?
The answer is 0.8% of people 😉
But don't worry, Elon fans: if XAI can somehow stay in business and keep raising hundreds of billions, up to 2 trillion, which would be twice what Sam Altman says he will raise for ChatGPT... Though Sam recently did say he plans to raise trillions, plural 😉
?
Implying at least two trillion dollars that Sam plans to spend, implying Elon will want to spend four trillion... If XAI keeps growing at 4x per year, then they will reach 32 billion dollars in revenue in 2028 and 128 billion in 2029, versus 145 billion for OpenAI's forecast.
?
So you can see how Elon is trying to give venture capitalists who have already invested with him cover to raise at a $200 billion post-money valuation this year, by claiming that he is on track to nearly match OpenAI, with its 90% market share... by 2029.
?
And OpenAI just did a tender offer at a $500 billion valuation.
So 200 billion for XAI could meet a fiduciary standard for Tesla if his shareholders approve (and they always approve what he wants)... And the venture capitalists who have already invested with him get to look like geniuses.
Next year of course he will have to raise again, and I'm sure he will try raising at 400 billion. But if he claims to be on track to grow at 4x each year, then raising each year at a 2x higher valuation, well, that's more and more reasonable, isn't it 😉
?
Except for one problem.
By Musk's own forecasts, next year he reaches 2 billion dollars in annualized revenue.
?
Sure, if you project his own 4x annual growth rate out to 2030, you get 512 billion dollars in annualized revenue by the end of 2030, which would justify everything Elon is doing.
But that's like that famous compounding story from India.
?
The one about the emperor being asked for one grain of rice on the first square of a chessboard and then twice as much rice on each following square.
By the 30th square you're looking at over 500 million grains of rice.
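The doubling is easy to verify (square n holds 2^(n-1) grains):

```python
# The chessboard-rice doubling: 1 grain on square 1, doubling each square.

grains_on_square_30 = 2 ** 29       # square n holds 2**(n-1) grains
total_through_30 = 2 ** 30 - 1      # cumulative grains on squares 1..30

print(f"square 30: {grains_on_square_30:,} grains")    # 536,870,912
print(f"squares 1-30 combined: {total_through_30:,}")  # 1,073,741,823
```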
?
Basically, in order to keep his dream of XAI alive in 2027, Elon is going to actually have to deliver something that no one else can deliver.
Perhaps it is those super agents, worth $2,000 per month, with $10,000 per month for the mid-level coders and $20,000 per month for advanced researchers, that Sam was talking about earlier this year.
And then in 2028 he would have to actually deliver Jarvis like AGI because that would be worth several thousand dollars per month to virtually every company in the world.
?
But remember this is a man who has been promising robotaxis for over a decade and what he has delivered in Austin is around 10 vehicles driving in a geofenced area with human drivers in the passenger seat 😉
?
So if the only hope for XAI is that they deliver super agents by 2027 and true AGI by 2028 then his company is doomed.
Because agents without continual learning will never actually be as useful as interns.
?
An intern within six months will learn how your company works and become useful.
Today's AI systems cannot actually learn other than within the same chat thread, using the context window.
?
However, a super agent worth $2,000 per month that is supposed to act as a replacement for an entry level white collar worker will have to work non-stop. So even if the context window is enormous, meaning 10 million plus tokens...
You will basically fill that context window within a day, and so your super agent will have to reset every single day.
Imagine an intern who gets exponentially better from the start of the work day to the end and then forgets everything they learned that entire day.
?
Without continual learning, in which the weights of the AI model itself are constantly adapting like the brains of humans as we learn...
The only way around this problem is larger and larger context windows.
Theoretically there is no limit to how big a context window could be...
?
The only limitation is how much compute you want to use.
If you had a billion token context window, then theoretically that $2,000 entry level super agent might be able to run for roughly 100 days before the window was full.
?
Now, the problem is studies show that it takes 6 months to onboard a new worker, so you would need a context window of 2 billion tokens to approximately equal an intern... except one that will forget everything they learned over 6 months when they reach 6 months 😉
?
So a 4 billion token context window should get you to around 1 year before it forgets everything.
Does this sound like something that a company would spend $2,000 per month on?
?
So for my company to pay $2,000 per month so that my head of tech development could have an assistant that basically replaces an intern... we would probably need two or three years' worth of context window.
That's 8 to 12 billion tokens of context window if you don't have continuous learning.
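The window-size arithmetic above follows from one assumption: the "1 billion tokens ≈ 100 days" figure implies roughly 10 million tokens per day for a nonstop agent. That daily rate is an inference from the thread's numbers, not a measured figure:

```python
# How long a nonstop agent can run before a fixed context window fills up,
# assuming ~10M tokens/day (the rate implied by "1B tokens ~= 100 days").

TOKENS_PER_DAY = 10_000_000  # assumption inferred from the thread's figures

def days_until_full(window_tokens: int) -> float:
    """Days of continuous operation before the window is exhausted."""
    return window_tokens / TOKENS_PER_DAY

for window in (10_000_000, 1_000_000_000, 2_000_000_000, 4_000_000_000):
    print(f"{window:>13,} tokens -> ~{days_until_full(window):,.0f} days")
# 10M -> ~1 day, 1B -> ~100 days, 2B -> ~200 days (~6 months), 4B -> ~1 year
```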
?
Today Gemini is offering an API context window of 1.5 million tokens.
So you'd need a context window roughly 5,000 times bigger to solve the continuous learning problem of today's AI models.
?
Of course, you would need 5,000 times as much compute for every completion, and even with algorithmic efficiency improvements and the 4x improvement each year that Jensen's law is promising... you can see where the problems with current scaling lie.
?
And keep in mind Sam was talking about a $10,000 per month super agent that would replace a mid-level coder and a $20,000 per month agent that would be the equivalent of a PhD researcher at MIT.
?
If you need an 8 billion token context window to make a $2,000 per month agent work, you would need around 80 billion minimum to justify the research assistant. Except for one problem... the research agent will be doing research.
?
The entry level super agent is basically like an intern. How much work do you give your intern?
You're constantly supervising your intern, so the number of tokens an intern agent would generate is a lot less than a $20,000 super PhD research assistant.
?
So imagine MIT paying for a PhD level super agent assistant for its top researchers.
The amount of tokens that those agents would be generating could be 10x or even 100x that of the entry level agent.
?
And remember the example of my head of tech development telling me we would need at least 2 years worth of context window to justify using the agent at all in terms of incorporating it into our workflow and tech development stack.
And really 3 years
?
Research projects can last 5 years (in fact, grants lay out how long a project is expected to last), so imagine that you need four years' worth of context window for that mid-level software developer super agent and 5 years of context window for the PhD level researcher.
?
80 billion tokens for the entry level, intern-like super agent is approximately 3 years' worth of context window.
4 years and 10 times more token intensity would indicate that, without continual learning, you need approximately a 1.1 trillion token context window for that mid-level programmer agent.
?
And a 13.33 trillion token context window, using today's large language model technology, could theoretically create a research assistant worth $20,000 per month.
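These two totals follow from the thread's own multipliers, taken as given: ~80B tokens as 3 years of entry-level context, the mid-level coder at 10x token intensity over 4 years, and the PhD agent at 100x over 5 years:

```python
# Reproducing the thread's scaling arithmetic (all multipliers are the
# thread's own assumptions, not independent estimates).

ENTRY_3YR_TOKENS = 80e9          # ~3 years of entry-level agent context
per_year = ENTRY_3YR_TOKENS / 3  # entry-level tokens per year

mid_level = per_year * 4 * 10    # 4 years at 10x token intensity
phd_agent = per_year * 5 * 100   # 5 years at 100x token intensity

print(f"mid-level coder agent: {mid_level / 1e12:.2f}T tokens")  # ~1.07T
print(f"PhD researcher agent:  {phd_agent / 1e12:.2f}T tokens")  # ~13.33T
```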
But even if you could build such a thing there are two problems.
?
First, as my head of tech development explains: deterministic versus probabilistic.
Something like finance has very strict rules. You cannot be 90% accurate, or 95% accurate, or even 99% accurate, if you are running thousands of calculations, because the error will compound.
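The compounding point is easy to quantify: the chance that a long chain of independent calculations is entirely correct is the per-step accuracy raised to the number of steps:

```python
# Why probabilistic models fail deterministic pipelines: even 99% per-step
# accuracy collapses over a chain of 1,000 dependent calculations.

for per_step in (0.90, 0.95, 0.99):
    p_all_correct = per_step ** 1000  # chance all 1,000 steps are right
    print(f"{per_step:.0%} per step -> {p_all_correct:.2e} chance of a clean run")
# Even at 99% per step, a fully correct 1,000-step run happens well under
# 0.01% of the time.
```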
?
Large language models are always probabilistic.
One way to get around that is to do what Grok 4 Heavy is doing.
It uses four different models, each one reasoning, and then they talk to each other and come up with a consensus.
?
If you were to use this approach for several models, such as what AGIOS does, then theoretically yes: a 99% accurate model scaled up to 10 different models, each one with 10 submodels reasoning and discussing and then coming to a meta consensus...
?
This can work.
What my head of tech development has done with o3 is create some kind of MCP where he runs a thousand completions per prompt; o3 reasons a thousand times, and building the answer from those completions is kind of like a Monte Carlo simulation.
?
But of course you can see what the problem with this would be.
Without true AGI, and without continual learning (which is required for AGI) in an AI model using a large language model architecture like Grok or ChatGPT...
You need to throw ridiculous amounts of compute at this.
?
For example, the most advanced deep research models today: if you put three of them together, they are approximately 93% accurate in their reports, which is about as accurate as a PhD level researcher.
?
But if you want to replace that researcher, you would need four or five models, and then you're going to need more and more speed for them to actually be able to do research in useful time. And how much compute are you going to need?
?
Imagine if you took ChatGPT, Perplexity, Gemini, Claude, and Grok.
And you did what my head of tech development did and programmed each one to reason and then answer 1,000 times.
?
And then you programmed the system to have all five determine their best consensus decision individually and then discuss amongst each other.
?
In other words, imagine the Grok 4 Heavy approach, where if you ask ChatGPT a question it will generate 1,000 sub models that will then reason over that question, talk to each other, and come up with the best solution.
?
And now imagine that super ChatGPT then talks to the other four models, who have done the same thing, and they all come to a consensus.
Statistically speaking this would overcome the problem of probabilistic answers meaning hallucinations.
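A minimal model of why this kind of consensus helps: if each answer is independently correct with probability p > 0.5, a majority vote over more samples is right far more often than any single sample. Independence is a strong assumption for LLMs, whose errors are often correlated, so treat this as the best case:

```python
# Majority-vote accuracy over n independent samples, each correct with
# probability p (a best-case sketch: real LLM errors are often correlated).
from math import comb

def majority_correct(p: float, n: int) -> float:
    """P(majority of n independent answers is correct), n odd."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 101, 1001):
    print(f"n={n:>5}: {majority_correct(0.60, n):.4f}")
# Even a 60%-accurate sampler approaches certainty as n grows -- the
# statistical intuition behind self-consistency / consensus schemes.
```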
?
If you put enough models together running enough sub models with enough reasoning and enough consensus mechanisms... With an infinitely large context window... Theoretically you can get Jarvis 😉😂🥰
?
In other words, if you take today's best models... and make them generate enough versions of each other to answer each question a thousand times and then create a meta consensus of 10 different models...
?
If each of the models has a context window that would allow PhD level research to run for 5 years so they could simulate learning... Then this would be Jarvis level intelligence worthy of $20,000 per month.
?
The human brain according to perplexity is equal to approximately 5,000 trillion parameters in terms of the equivalent AI model.
It runs on 20 watts.
?
So could you create something more powerful than the human brain say 10 quadrillion parameters?
And could you give it a quadrillion token context window that would be something like a PhD level researcher for 75 years worth of learning?
Yes.
?
Using today's architecture and technology You could theoretically create an AI model 10 times more powerful than the human brain and allow it to run for 1,000 years at superhuman speed in terms of learning.
?
Except you would have to build a super project stargate that would cost something like 1 trillion dollars and run on the power of 10 New York cities to do it 😉
?
So you can see what the problem is.
A human baby does not train on all the knowledge of civilization 😉
A human baby takes input from its senses and just stumbles around the world learning how stuff works 😂
?
And that human baby grows up to have biological general intelligence that allows it to learn and improve and that is how the smartest humans in history learned and grew and developed.
?
Yes, theoretically ChatGPT-5 now has an IQ of 148, but my head of tech development has an IQ of 145, and it does not take several data centers' worth of power to run his brain 😉
?
In other words, yeah, theoretically for several trillion dollars you could probably replace my head of tech development. But one, why would I ever want to do that, and two, how would that be cost-effective? 🤣
?
Someone once brought up the point that Waymo spends around $250,000 per vehicle to try to replace a driver who basically works for minimum wage 😉
?
Then consider the maintenance costs.
So are robotaxis actually more cost-effective than human drivers?
?
Now you understand why robotics is so important because you need embodiment in the physical universe to actually learn.
Continual learning is necessary for AGI.
?
Continual learning means the weights that the model actually consists of have to change continually, like the neurons of our brains constantly rewiring themselves.
That has to happen locally in a robot, like what 1X Robotics is doing with their Neo Gammas.
?
Actually, they're not using continual learning, but they are using large enough context windows and then, at the end of each day, uploading to the cloud to try to build a world model that they can use to train future versions.
?
In other words nothing less than robotics will get us to AGI because you cannot crack continual learning without robots and physical embodiment.
?
And without continual learning you cannot have AGI.
Have you noticed how people have stopped talking about AGI?
Some are still talking about super intelligence but I'm guessing within a year or two they'll stop because it will be obvious that we are not close.
?
Without a perfectly accurate world model, the only way to develop continual learning is through embodiment in a robot in the physical universe. And without continual learning you cannot get Jarvis, and if you cannot get Jarvis you certainly can't get super Jarvis 😉
?
And if your $500 billion valuation is predicated on building super Jarvis, a magic god machine that can solve all our problems... then you will fail.
?
So if OpenAI is not close to AGI, then how are they growing their revenue so quickly?
Because people are finding the existing technology useful even if it is not replacing humans!
?Adam Galas?
Clarna laid off something like 1/3 or maybe it was 2/3 of its customer service team and then had to hire them back because guess what customers were ready to riot because no today's AI cannot replace a human.
When I called Delta to get a vegan meal for my trip to Australia... I insisted on talking to a human.
Because you cannot get that done with AI yet.
It would be wonderful if I could simply explain that in February of 2026 I am taking a trip to Melbourne that will include these five legs there and back, here are the confirmation numbers, please book me a vegan meal each step of the way.

We are not even close.
So I had to spend 26 minutes on hold to talk to a human so I could confirm it.
When you are spending $8,100 on round-trip plane tickets to Australia, you do not want your vacation plans faltering over AI hallucinations 😉

However, look at the revenue for Anthropic, growing at 10x per year.
OpenAI's API revenue doubled within 2 days after GPT-5 came out.
Plenty of people and companies are finding the existing technology useful even though it's not Jarvis, not close to Jarvis, and if you tell me it is, that's a joke 😉
Remember, the hyperscalers, who are spending $5.5 trillion between 2023 and 2030 on CapEx and R&D according to the FactSet consensus, are not betting on AGI.
They are not saying we need to spend $5.5 trillion building data centers because Jarvis is coming! 😉

They are saying that demand for data centers is growing so quickly, thanks to the adoption of today's AI models, that it justifies all that spending.
And autonomous vehicles will generate so much data that we would need 25 times more data centers than we have today.
Citigroup estimates 670 million humanoid robots by 2050, each one generating the equivalent data of an autonomous car.
That many robots would be the equivalent of approximately 250x more data centers than humanity has today.

And current AI data demand growth is running at 50% per year and is expected to grow by approximately 440x by 2040.
Because of robots.
That is the absolute key to understanding the secular AI boom.
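As a quick sanity check on those two figures (my own arithmetic, not from the thread), 50% annual growth really does compound to roughly 440x over the 15 years from 2025 to 2040:

```python
# Compound the cited 50%/year data-demand growth over 15 years (~2025-2040).
# The growth rate and horizon are the thread's figures; the check is mine.
growth_rate = 0.50
years = 15

multiple = (1 + growth_rate) ** years
print(f"~{multiple:.0f}x")  # ~438x, in line with the quoted ~440x
```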
There is so much data center demand that the hyperscalers don't have to bet on whether or not Jarvis becomes real by 2030, because in order to create Jarvis you are going to need lots and lots of robots 😉

Jarvis would be useful and profitable for companies, so they will build Jarvis, and to do it they will build millions of robots, which will require tens and tens of times more data center capacity than we have today.
In other words, the demand will outstrip our capacity to build more supply by many times over. If we get Jarvis by 2028 then it might outstrip supply 100x, but even if Jarvis does not exist by 2028, it could outstrip supply by 10x.

And even conservatively, if it outstrips supply by 2x, hyperscalers have to keep building as fast as they can, because it's never enough.
For example, hyperscalers are currently expected to 3x their data center capacity by 2030.
Demand is going to be a lot higher than that... no matter what happens.

And that is why the FactSet consensus is that the hyperscalers will spend more every single year through at least 2030. Unless everyone in the world suddenly decides in 2030 that AI is completely useless, those initial 2031 estimates will show even stronger growth of around 13.5% per year.
When you hear about breakthroughs in chip technology that might be a threat to Nvidia, just keep in mind the level of demand growth for data centers... It is staggering!
440x in 15 years, according to Jensen and the IEA.

Even if it's 1/10 that, Microsoft has to build as fast as it can.
Because remember, Microsoft is building at a rate that approximately triples capacity every 5 years.

In other words, by 2040, if Microsoft builds as fast as it can, it will 27x its data center gigawatts.
Even with improvements in efficiency... 27x is not enough!
Microsoft has to keep building as fast as possible pretty much forever.
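That 27x comes straight from compounding (again my arithmetic, just verifying the thread's numbers): tripling every 5 years means three triplings between roughly 2025 and 2040.

```python
# Tripling capacity every 5 years compounds over three 5-year periods.
per_period = 3              # 3x capacity per 5-year period (as cited)
periods = 15 // 5           # three periods from ~2025 to 2040

total = per_period ** periods
print(f"{total}x")          # 27x -- far below a ~440x rise in demand
```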
The same goes for Amazon, Alphabet, Oracle, Alibaba, and IBM, who round out the top six cloud giants in the world.
Everyone has to keep building as much as possible, because the demand growth from robots alone is so staggering.

In 10 years, Jensen says, Nvidia can improve capabilities by 1 million X: for example, the same compute for 1 million times less power, or 1,000 times faster compute for 1,000 times less power, or, for the same power, compute that is 1 million times faster.
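The point of that 1,000,000x figure is that it is a single budget that can be split between speed and power, since the two factors multiply. A minimal illustration (my framing of the arithmetic, not Nvidia's):

```python
# A 1,000,000x capability gain split between speedup and power reduction.
# Any pair whose product is 1e6 is an equivalent way to spend the gain.
TOTAL_GAIN = 1_000_000

splits = [(1, 1_000_000), (1_000, 1_000), (1_000_000, 1)]
for speedup, power_cut in splits:
    assert speedup * power_cut == TOTAL_GAIN  # the budget must multiply out
    print(f"{speedup:,}x faster at {power_cut:,}x less power")
```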
We have an estimated 440x increase in demand coming in the next 15 years, so everyone needs to max out on everything in terms of efficiency and compute per watt, because the amount of data coming is just staggering.

And this is not even factoring in AGI!
If Jarvis is real in a few years, then you will have millions and then billions of jarvi generating infinitely more data.
Because one Jarvis will be able to do the work of 10 humans, and then 100 humans, and work non-stop.

Imagine taking the entire workforce of humanity times a thousand.
How much data would we generate?
ASI?
Infinitely more
Basically, what this comes down to is this.
You are going to see lots of headlines in coming years talking about how the AI bubble has burst because we don't have Jarvis.
Well then why is the revenue for AI still growing at several hundred percent per year?
Why are the hyperscalers still building?
The answer is robots.
There is one simple question to ask anytime someone says the stock market is going to crash.
Is the robot future canceled?
Because with robots you get massively greater data center demand.
And that means hyperscaler build out.
And that means a strong economy.
And strong corporate profit growth.
So how is the stock market going to crash?
This is what chart casting looks like.
Yes, I watch lots of YouTube videos and get lots of different perspectives, but at the end of the day it's all first principles.

If there will be more robots, then there will be infinitely more data, and if there is infinitely more data, we will need to build data centers as fast as possible for the foreseeable future.
If the hyperscalers keep spending and AI spending rises to 5% of GDP by 2030 (Grand View Research)...
Then how can we possibly have a growth scare, which Charles Schwab research indicates can occur when growth is below 2%?
Right now, real-time growth is between 2% and 3.1% according to the New York Fed, Dallas Fed, and Atlanta Fed.
Tom Lee at Fundstrat has research going back to 1929 showing that if there is no recession, every correction is a V-shaped recovery, with the time from bottom to record high averaging 1.55x the length of the decline.
Meaning if stocks fall for 2 months, a V-shaped recovery means record highs about 3 months after the bottom.
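The 2-month example checks out against the cited 1.55x multiple (my arithmetic on the thread's figures):

```python
# Recovery time = 1.55x the length of the decline (Fundstrat figure as quoted).
decline_months = 2
recovery_multiple = 1.55

recovery_months = decline_months * recovery_multiple
print(f"{recovery_months:.1f} months")  # 3.1 months, i.e. roughly 3
```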
Research from Bank of America shows that for every 1% change in GDP, earnings growth changes by 3.8%.
Or to put it another way, demand for data is so strong that AI spending on data centers is expected to accelerate GDP growth and accelerate earnings growth.

Earnings are already growing at two times their historical norm, while free cash flow is growing at three times its historical growth rate of 5.5%.
42% growth in free cash flow per share in 2026 and 2027, according to the FactSet consensus.
Guess what happens if you adjust today's PE ratio or even CAPE by the current growth rates?
The stock market is actually undervalued.
That doesn't mean corrections can't happen but those 50% crashes?
They require recessions and bubbles.
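One common way to "adjust the P/E for growth" is the PEG ratio: P/E divided by the expected earnings growth rate in percent. The numbers below are purely hypothetical placeholders to show the mechanics, not the market figures the thread alludes to:

```python
def peg_ratio(pe: float, growth_pct: float) -> float:
    """PEG = price/earnings ratio divided by expected earnings growth (%)."""
    return pe / growth_pct

# Hypothetical inputs: a market P/E of 22 and 20% expected earnings growth.
# A PEG near or below 1.0 is the classic rough threshold for
# "reasonably priced relative to growth".
print(round(peg_ratio(pe=22.0, growth_pct=20.0), 2))  # 1.1
```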
In other words robots = exponentially more data.
= Exponentially more data center demand.
= Hyperscaler construction boom for as far as the eye can see.
= Strong or even accelerating economy (completely ignoring AI productivity growth)
= Strong and potentially accelerating earnings growth
Combine that with a stock market that's actually undervalued adjusted for growth, and the fact that since 1929, without recessions, every correction is a V-shaped recovery... and I call complete b******* on everyone predicting a stock market crash who cannot answer one simple question.

Why, doomsday prophet, is the robot future canceled?
If you cannot answer this you are wrong.
Robot future
It might seem simple, but this simple model explains everything over the last 16 years, and according to Fundstrat (Tom Lee), the automation boom is expected to result in the bull market in technology lasting another 22 years, with big tech growing to around 75% of the stock market.

I am not actually Hari Seldon.
My chart casting is basically putting together other people's really smart models.
I just incorporate different categories.
Social, economic, technological, and financial.

Combined with philosophical and even spiritual trends, I can see the matrix 😉
Of course that's with the help of my AGIOS 😂
What I intuit over time is something that is maybe 1% like psychohistory.

My tech team has shown me preliminary trading algorithms that are absolutely magical.
I'm not even allowed to tell you how good they are. That is how good they are 😉
It literally is like psychohistory for finance 😂
Now imagine if you could apply that to everything...
Imagine having a team of jarvi who all work together and speak to you through the voice of your favorite Jarvis.
And now imagine that you have an earbud or glasses that allows your personal super Jarvis to see and hear everything going on around you and be in constant communication with you anytime.

Remember that really fun, if somewhat silly, movie Next?
The one with Nicolas Cage?
In which he could see all possible futures 2 minutes ahead and so could accomplish anything?
Imagine something like that but of course not as accurate but still by today's standards god-like ability to see the future 😉
And now imagine a brain-computer interface allowing humans to connect not just to each other, like a Vulcan mind meld 😉 but to our super jarvi.
When I talk about becoming super Ted Mosby a super unicorn... That's what the super refers to 😉
Superhuman.
Because you won't just become the best possible version of yourself. You will literally achieve superhuman levels of capability through integration with this technology.
Imagine the most empathetic person in the world.
Imagine the world's best boyfriend alive today.
Imagine the most passionate and generous lover.
The best human at anything today.
Which of course is millions of different humans!
Imagine if you could achieve superhuman levels of all those things... Some people would not want to and other people would not use the features when communicating with certain people they care about like friends and family.
Because in effect, what this brain-computer interface will allow, and this is 10 to 20 years in the future... is what today would be considered godhood.
When you are talking with your friends, you will turn off god mode, because you just want to hang out with your friends 😉
But this technology will allow humans to connect their minds together and to connect to our super jarvi.
You will be able to understand any concept at the level of intuition of the smartest human who has ever lived on that topic.
If someone mentions relativity and you are connected to the cloud, then you will be able to instantly understand relativity as well as Einstein.
Talking about black holes?
You instantly understand black holes as well as Stephen Hawking.
Demis Hassabis is the head of DeepMind, the Google team that won the Nobel Prize for AlphaFold.

Someone mentions protein folding, and you instantly understand the science as well as Demis.

It will be a civilizational cloud that anyone can connect to at any time.
Of course that will have important implications in terms of society, inequality, economic competitiveness, etc.

But as Mo Gawdat likes to point out, the more educated people become, the less violent and barbaric they become.
The most educated societies on earth, such as the Scandinavian countries, are also the least violent.

In other words, while yes, this kind of integration with our own technology by a certain group of humans (85 to 90%, according to AGIOS) will cause plenty of problems, it will solve a lot more of them.

For example, when the printing press was invented, we had centuries of religious wars break out in Europe 😉
So does that mean we should not have invented the printing press? 😂
Education is a tool that can be used for good or evil.
If everyone on earth was illiterate then nobody could create bio weapons or nuclear bombs 🤣
But we celebrate the rise of literacy for a reason 😉
And so too will we celebrate the integration of humanity in a symbiotic relationship with this technology.
Mo Gawdat believes that for the next 12 to 15 years, things will get worse, because the technology will lead to inequality and our social systems cannot adapt quickly enough.
But eventually, he is optimistic, we will have a Star Trek-like culture of abundance, for one simple reason.
Because eventually given the laws of the universe all intelligent beings whether human or artificial come to the same conclusion about how to solve the same problems.
It's called epistemic convergence and it ultimately results in the major decisions being made by the machines.
Humans will focus on interacting with other humans, and about 10% of the global economy, and eventually the interstellar economy, will be humans interacting with other humans.
Though with a hundred Earths' worth of resources in the solar system, it might be 1%.
In other words, the AI will automate almost every physical job, such as mining space resources, generating infinite energy, growing food, etc.
99% of the value of the solar system economy.
And humans interacting with other humans, such as hanging out at the hot tub as I do with my friends 😉 will be the other 1%.

Some humans might look at that and say it's terrible, it's dangerous.
If you run a small business today employing 10 people, with a few million in sales, and you generate $200,000 in take-home pay for yourself... that's called a lifestyle business.
You will never compete with Amazon.
Amazon will keep getting bigger compared to you.
But you will still have a good life.
And so in this Star Trek future that Mo Gawdat envisions, and by the way, he is the former CEO of Google X, not just a YouTube crank 😉 even though human output will basically not matter...

It's going to matter to us.
Consider the Ed Sheeran concert I was at on Sunday, September 7th.
This was the last Mathematics tour concert ever.
60,000 humans were there to experience it together.
Plus everyone we shared it with on social media 😉

To those at the concert, it was a once-in-a-lifetime event that will never be repeated, that only 60,000 humans will ever share.
In the scope of history, one concert means nothing.
To the people at the concert, it meant everything.
The people who make a living on Etsy making handmade furniture will never dominate the furniture market.
But they spend their lives being passionate about furniture and they make a good living.
That is the future of humanity: all of us attending our philosophical and metaphorical Ed Sheeran concerts together 😄
And we will have the time and money to do it thanks to this technology and the coming age of abundance.
I know it seems crazy because of the economic inequality and wealth inequality. The powerful and rich today have all the power; the top 10% have 90% of the wealth.
That's going to get worse.

But even Ray Dalio, the billionaire founder of Bridgewater, says we will need to redistribute AI wealth, so a billionaire is now calling for AI taxes, because there is no other choice.
If, in today's polarized political climate, you think there's no chance of that happening, you're right.

In the next 3.5 years there will be absolutely no AI taxes.
Almost nothing is certain in this universe, but I can tell you this is.
But eventually we will have so much wealth and abundance that we will redistribute it, because it's the right thing.
And doing the right thing makes humans feel good.
We used to have slavery; now we don't.
Before 1835, there was no state in the United States in which a woman was legally allowed to own property.
Power becomes decentralized over time.
The rise of civilization led to massive concentrations of power to the point that Caesar Augustus is estimated to have owned 25% of the wealth of the world.
That was the most concentrated that wealth and power ever became in the history of our species.
And ever since then it's been a steady trend of decentralization of power and wealth.
Plenty of backsliding... Such as the robber barons of the late 19th century and the gilded age.
And Tesla offering Elon a trillion-dollar stock option package 😉😂🤣
It's not going to be equality in the future; it's going to be such an age of abundance that money itself will largely cease to matter.

Imagine a world in which, relatively speaking, the poor are millionaires, the middle class are billionaires, the upper class are trillionaires, and the rich are quadrillionaires.
There will be inequality that no one will care about.
Because in reality, no one will be poor.
Yes, people will still feel poor 😉
Because humans are silly and we compare ourselves to each other.
But hopefully our AI life partners will help us.
Because we will trust them they will make us the best versions of ourselves and with integration literally those of us who want it can become super human versions of ourselves.
I volunteer, because there will be plenty of supervillains the world needs protecting from 😉
Those humans who integrate with AI will be like the Justice League.
Yes, there will be a Legion of Doom 😉 but ultimately we will win.

And I can say that with absolute certainty because of epistemic convergence.
Either we get to an age of abundance, a Star Trek-like utopia that AGIOS is forecasting in a psychohistory-like 100-year forecast 😉
Or we go extinct.

So I can personally promise utopia, or we will be too dead to care that I was wrong 😉😂🤣

