If GlobalFoundries can sell the process to others then volume will go up and IBM will get cheaper chips.
1) I am not sure if GF can make chips in IBM's processes for other
companies. IBM is still doing the R&D work and probably wants the
technology kept exclusive.
2) Even if IBM doesn't object I think GF would find few if any takers.
The technology is very expensive, has very long fab time, and I doubt
much work has been done to optimize yield. Not many fabless semicos
can sell working devices for $2k to $100k+ each like IBM can in systems
subsidized by plush software and services margins.
Chipguy, what do you think the implications of IBM selling its fabs to Global Foundries will be?
Part of the agreement (and a big reason for the price IBM had to pay) was
that GF agreed to support low volume manufacturing in IBM's boutique SOI/
eDRAM high performance process technologies for years into the future. It is
more a bookkeeping exercise than a change in basic underlying (un)economics.
I don't think this change is much more than a signpost along the inexorable
journey toward the disappearance of low volume high performance non-x86 MPUs.
Market forces and economies of scale have been shrinking their market
share since the Pentium Pro appeared 20 years ago, while the cost of
bringing high end processors and systems to market keeps rising.
Also keep in mind that platforms are more resilient than silicon. I expect
System z platforms will be sold long after IBM exits processor development
altogether. The platform will simply become an extremely expensive software
compatibility layer sitting on top of Xeon hardware. Unisys has already gone
down this path with its mainframes and HP is heading there with NonStop and VMS.
OTOH there is really little reason for Power/UNIX to exist when there is
x86/Linux available.
IBM products are a key competitor to Intel processors in the server market.
However IBM has a small market share and a costly business model. Any
development that impacts IBM's financials could seriously affect its ability
to sustain future product development.
Over the years a few people on this board have wondered how IBM held
up its financial numbers while its individual business segments either
stagnated or collapsed.
Apparently the SEC has become interested in this too.
http://www.theregister.co.uk/2015/10/27/ibm_being_probed_by_sec/
IBM's stock price has taken a hit after the tech titan revealed it is under investigation by the US Securities and Exchange Commission (SEC).
Big Blue said in its latest 10-Q filing that for the past two months, the financial watchdog has been looking into its accounting practices.
"In August 2015, IBM learned that the SEC is conducting an investigation relating to revenue recognition with respect to the accounting treatment of certain transactions in the US, UK, and Ireland," IBM admitted in the filing.
"The company is cooperating with the SEC in this matter."
Everybody thought tablets like the iPad were going to kill personal computers.
Say what?
A lot of ANALysts touted that but people who use computers for actual
work instead of as TVs with keyboards tended to be a lot more dubious.
That's only 1/7th but people turn their phones over a lot faster than their PCs.
It's pretty hard to drop your PC into a toilet.
Then there is the growing issue that designating a process as X nm becomes,
with each passing generation, more a marketing exercise than a hard,
objective technical designation. Just because a vendor is keeping up in
process naming does not mean it is keeping up in process physical
capabilities, manufacturability, or cost.
But hey, you're right, we should only talk about Intel and the weather here on this board, not about Samsung.
This statement reflects either illiteracy or intellectual dishonesty.
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=117880086
"Ok. Bring up Samsung when there is news that is relevant."
Are you working for Intel?
No. Never have.
Do you really believe that posting stuff on this board will move the needle for companies that weigh in at hundreds of billions of dollars?
No.
The question is whether *you* consciously believe it or are otherwise
psychologically compelled to hype Samsung here.
This is still a board for investors, not cheerleaders!
Ok. Bring up Samsung when there is news that is relevant.
Touting them post after post, week after week, repeating the same
specious arguments over and over again may be therapeutic for you
but is likely tiresome to nearly everyone else here.
You remember the expression about what opinions are like and how
everyone has one? Maybe you can stand up straight for a while every
so often just to give us a break.
Why are you spending so much time and energy touting Samsung here?
Do you work for them? Do you have an interest in trying to drive up its
share price? Do you have an interest in trying to scare off Intel investors
or lower its share price?
It is all getting very tiresome. Go start a Samsung board.
I agree with the previous poster's point about the problematic nature
of process tech comparisons based on single point benchmarks so
many levels removed from raw hardware.
I also have 30+ years of semiconductor experience and continue to
add to it each passing day.
I'd write more but I have to review and sign off a composite mask
set for 8 different test chips being taped out today.
Mainframe sales are moderately up, Power sales are slightly down.
It is rather different to quote a third party's prediction about future
markets than to make the prediction yourself. The standard disclaimers
are no doubt present too.
Nike thinks they will be selling $50 billion worth of sneakers by 2020; maybe they will all be connected.
So does that mean the left foot will know what the right foot is doing?
I don't care if the number comes from Krzanich and is blessed by the pope.
It just doesn't pass even a cursory sniff test.
Given that total annual semiconductor sales are ~$350B I have a
lot of trouble comprehending a $6200B IoT market in ten years.
This dream smells a lot like an organic by-product of male bovine.
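To put a rough number on it (back-of-the-envelope arithmetic of mine, and it
charitably treats the whole $6,200B as addressable revenue):

```latex
\left(\frac{6200}{350}\right)^{1/10} \approx 17.7^{\,0.1} \approx 1.33
```

That is roughly 33% compound annual growth, sustained every single year for a decade.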
Couldn't you make an exception?
It is only four times a year. Plus the odd warning I guess.
"We're really excited ..."
Standing on a narrow ledge near the top of a 50 story building in
a strong gale will do that.
Now there is only one left really outspending: Samsung. I am really bullish about that company. They have all they need to compete with Intel/Micron for the final pole position in semiconductors. I suspect Intel will no longer have a lead at 10nm.
I do not share your pessimism.
Before you write Intel off keep in mind that Samsung has to divide its
process R&D and capex three ways - DRAM, flash, and logic. Intel only
has to push logic. Micron pushes the other two and carries the risk and
the costs associated with them.
Also keep in mind who Samsung has as its dominant logic customer. Intel
makes delicious margins on most of its chips. Dealing with big fruitco, I
bet Samsung execs feel lucky just retaining their family jewels after every
negotiation.
AMD compute and graphics revenue down 46% YoY
http://finance.yahoo.com/news/amd-reports-2015-third-quarter-201500832.html
- $65 million inventory write-down primarily of older-generation APUs
I wonder if google has a troll-to-English translation option.
Has MS found a burial site yet for its unsold ARM/RT Surfaces?
It basically covers a table-based prediction scheme for speculating about
data dependencies between loads and stores before execution.
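For the curious, here is a minimal C sketch of the general technique: a small
table of saturating counters indexed by the load's PC. This is my own
illustration of the idea, not the specific mechanism claimed in the WARF patent:

```c
/* Minimal sketch of a table-based load/store dependence predictor: hash a
 * load's PC into a table of 2-bit saturating counters. A high count means
 * this load has recently conflicted with an in-flight store, so it should
 * wait rather than speculatively execute ahead of older stores. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 1024          /* power of two for cheap masking */
#define MAX_COUNT  3             /* 2-bit saturating counter */

static uint8_t dep_table[TABLE_SIZE];

static unsigned hash_pc(uint64_t pc) {
    return (unsigned)((pc >> 2) & (TABLE_SIZE - 1));
}

/* Predict at issue time: may this load safely run ahead of older stores? */
int predict_no_dependence(uint64_t load_pc) {
    return dep_table[hash_pc(load_pc)] < 2;
}

/* Train at retire time, once the true dependence is known. */
void train(uint64_t load_pc, int conflicted) {
    uint8_t *c = &dep_table[hash_pc(load_pc)];
    if (conflicted) { if (*c < MAX_COUNT) (*c)++; }  /* strengthen "wait" */
    else            { if (*c > 0)         (*c)--; }  /* decay toward "go" */
}

int main(void) {
    uint64_t pc = 0x400a10;                  /* hypothetical load PC */
    train(pc, 1); train(pc, 1);              /* two observed conflicts */
    printf("speculate? %d\n", predict_no_dependence(pc));  /* prints 0: wait */
    return 0;
}
```

The hardware idea is the same: let a load run ahead of older stores unless its
own history says that tends to end in a pipeline flush.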
Apple (AAPL) Stock Retreats After Patent Violation, Could Face Significant Damages
http://www.thestreet.com/story/13323525/1/apple-aapl-stock-retreats-after-patent-violation-could-face-significant-damages.html?puc=yahoo&cm_ven=YAHOO
Apple shares are declining 0.63% to $111.09 on Wednesday after a jury in Madison, WI found that the tech giant violated a patent held by the University of Wisconsin-Madison's licensing arm, Reuters reports.
As a result, the company could be liable for up to $862 million in damages.
Specifically, the jury found that the chips inside the company's iPhone and iPad infringe the university's patent.
The Wisconsin Alumni Research Foundation (WARF) first sued Apple in January 2014, and the Cupertino, CA-based company denied the claims, ZDnet.com noted.
Then, WARF sued again last month, this time focusing on Apple's most recent chips used in the new iPhone 6S and 6S Plus, along with the iPad Pro, Reuters said.
I am pretty sure Intel paid these guys off a while back. Not sure
about AMD but that may be a case of getting blood from a stone.
Another loss for AMD: HSA, Radeon vet Phil Rogers joins Nvidia
http://arstechnica.com/gadgets/2015/10/another-loss-for-amd-as-hsa-and-radeon-veteran-phil-rogers-joins-nvidia/
With only two major players in the dedicated GPU industry, it's not unusual for the employees of Nvidia and AMD to partake in the occasional bout of musical ship jumping. But every now and then there's a big move, and unfortunately for the embattled AMD, today it's the one losing the talent. Phil Rogers, AMD Corporate Fellow and president of the Heterogeneous System Architecture (HSA) Foundation, has left the company after 21 years to join rival Nvidia as its "Compute Server Architect."
Rogers is the second high-profile loss for AMD in as many months, with ace CPU architect Jim Keller (regarded by many as the brains behind the upcoming Zen CPU microarchitecture) having recently left the company. 500 other staff are expected to be let go as the company struggles to return to profitability.
You know what they say about rats and sinking ships.
Intel said its client sales were down 7% YoY while Gartner said PC
sales were down 7.7% and IDC said PC sales were down 10.8%.
The only way to reconcile Intel and Gartner is that AMD took a
beating and the only way to reconcile Intel and IDC is that AMD
took a severe beating. I guess we can leave it to AMD to report
which kind of beating it took in a week or so.
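For illustration, assume Intel holds roughly 80% of client unit share and that
its 7% revenue decline tracks units; both are round-number assumptions of mine,
not reported figures. The blended market numbers then pin down AMD's decline:

```latex
\Delta_{\text{AMD}} = \frac{\Delta_{\text{market}} - 0.8\,\Delta_{\text{Intel}}}{0.2}
\;\Rightarrow\;
\text{Gartner: } \frac{-7.7\% + 5.6\%}{0.2} = -10.5\%,
\qquad
\text{IDC: } \frac{-10.8\% + 5.6\%}{0.2} = -26\%
```

A beating under Gartner's numbers, a severe one under IDC's.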
That can't be right...
The bashers assure us that PCs are dead, dead, dead and ARM is secretly
eating Intel's lunch in servers.
I wonder when we will see Altera accelerators in Intel chips.
When there is market pull for them.
IBM used to offer programmable microcode capability in some models
of S/370 to allow users to create their own instructions to speed up
their critical applications. They dropped it because of lack of use and
technical support issues.
Some of the same difficulties that separate theory from practicality
in the IBM example apply to the question of FPGA accelerators. We
will have to wait and see if the outcome here is different.
In the case of Qualcomm they have the double whammy of spending on
a device with two theoretical markets: ARM servers and FPGA add-ons.
But Qualcomm has lots of money from patent licensing and everyone
needs a hobby. What's the harm?
"The final production processors will have more cores."
http://www.theregister.co.uk/2015/10/08/qualcomm_arm_server_chips/
Are they having problems making a working full scale product, or did
they realize too late that 24 ARM cores just won't be competitive?
Not exactly confidence inspiring.
How much improvement are you expecting from Zen?
On one hand it is a clean break from the past. OTOH AMD has limited
design resources and relies on external foundry process technology.
I expect it to have a lot better single thread performance and "IPC"
than current AMD products but fall short of Haswell at similar clock
frequency, and it will clock slower at the same power. It will still
be well behind Intel in perf/Watt. AMD will probably market 8 core
products against Intel 4 core chips and be pretty competitive in chip
level throughput, but at a substantial cost in higher power. I get the
sense that Zen is oriented at traditional servers and high end desktops,
so it probably won't scale down below 35W very gracefully and likely
won't help much in mobile.
In the end AMD will have to sell based on price. It is a matter of
how much more they can sell it for than their current products versus
how much more it costs to manufacture than current products.
The AMD system architecture is a legacy design going back to the
first K8s. Their cache architecture is what they can design in the
process technology available to them. The two-fer of two 8 core
chips in one package, with the disadvantages that brings (as the paper
said, they are already taking the hit of being a two way system
in a single package), is the result of chronically not being able to
design a decent follow-on to the K8 core.
I don't think it is an issue of erroneously thinking this was the
right way to go but rather doing the best they could with what
they had available to them. It's hard to compete against someone
who is at the top of their game and has higher R&D spending
than you have total revenue. :-O
Interesting paper comparing Sandy Bridge and Bulldozer cache and
memory architectures and measured performance.
http://www.noahmendelsohn.com/COMP40Slides/2014%20Memory%20Performance%20Paper.pdf
From their conclusions section:
We find the AMD Bulldozer architecture with its module concept
and two independent dies per socket to be much more complex
than Intel’s Sandy Bridge design, creating a vast amount of different
latency and bandwidth numbers. While latency figures are
mostly in line with our expectations, several observed bandwidths
are surprisingly low. The accumulated L3 cache bandwidth of a
full Bulldozer die (8 cores) is close to the L3 bandwidth of a single
Sandy Bridge core. The L3 cache bandwidth also scales better with
the core count on the Intel system. Although AMD’s L2 cache is
very large, its performance is only on par with Intel’s L3 cache in
a per-core comparison. The accumulated L3 bandwidth of a Bulldozer
socket exceeds the main memory bandwidth only by a factor
of two, compared to more than a factor of five on the Intel system.
This is even more noteworthy knowing that the Sandy Bridge system
is also superior in terms of main memory bandwidth per socket.
While both interconnect technologies fail to fully utilize the memory
bandwidth of other NUMA nodes, the HyperTransport results
are much more disappointing. The transfer rate between the sockets
in the Intel system is four times higher than the transfer rate
between the two dies within the AMD processor and more than ten
times more effective than some of the two-hop connections in the
AMD topology. Finally, on-die latencies are much better on Sandy
Bridge, mostly due to the inclusive L3 cache design.
Overall, we attribute a significant portion of Intel’s current advantages
regarding application-level per-socket performance to the
differences in the memory hierarchy. The L3 cache provides a
high bandwidth per core that also scales linearly with the amount
of cores. The QuickPath interconnect also provides a relatively
high bandwidth for remote memory accesses. In contrast, AMD’s
memory subsystem severely limits the achievable processing power
of the compute units in memory-intensive applications. Furthermore,
parallel programs need to be exceedingly NUMA-conform to
avoid being limited by the unexpectedly low HyperTransport performance
for certain connections.
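For anyone who wants a feel for how numbers like these are generated, below is
a bare-bones C sketch of the usual working-set sweep: size a buffer to land in
L1, L2, L3, or DRAM and time how fast you can stream through it. The sizes are
my own round-number guesses at typical boundaries; a real methodology like the
paper's also pins threads, controls NUMA placement, uses multiple accumulators,
and defeats the hardware prefetchers:

```c
/* Bare-bones cache/memory bandwidth sweep: time a streaming read over
 * buffers sized to land in (roughly) L1, L2, L3, and DRAM. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    size_t sizes[] = { 16u << 10, 256u << 10, 8u << 20, 256u << 20 };
    for (int s = 0; s < 4; s++) {
        size_t n = sizes[s] / sizeof(double);
        double *a = malloc(n * sizeof(double));
        if (!a) return 1;
        for (size_t i = 0; i < n; i++) a[i] = 1.0;   /* warm the buffer */

        int reps = (int)((256u << 20) / sizes[s]);   /* equal traffic per size */
        double sum = 0.0, t0 = now();
        for (int r = 0; r < reps; r++)
            for (size_t i = 0; i < n; i++) sum += a[i];
        double t1 = now();

        double bytes = (double)reps * (double)n * sizeof(double);
        printf("%8zu KiB: %6.2f GB/s (checksum %g)\n",
               sizes[s] >> 10, bytes / (t1 - t0) / 1e9, sum);
        free(a);
    }
    return 0;
}
```

Printing the checksum keeps the compiler from optimizing the whole loop away.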
There is a lot more non-OPEC oil around now but it is medium to high
cost production. OPEC can't choke supply and raise prices sharply like
in the 70s but they can overproduce and force prices down and put the
hurt on the new players until many of them fold or curtail investment.
Saudi production costs are still very low.
Apple has set itself up for a class action suit.
Oh please. No matter what Apple hawks up and spits in the face of its
users, most of them mindlessly say thank you sir, may I have another.
It is basically Saudi Arabia trying to keep Iran down, and the rest of the
world's oil producers are taking it in the, uh, neck as collateral damage.
Obviously expecting AMD to hit it out of the park with Zen.
I wonder if they have checked for change under the sofa cushions in
the lobby waiting area yet?
Let me guess... Your pet processor sucks at crypto.
No, x86 has clear and absolute leadership in scalar integer performance
including crypto. Write any crypto algorithm you want in C and run it on
the fastest x86 and the fastest ARM and see who wins and by how much.
However the problem comes when certain processors include special purpose
functional units in silicon (a little silicon easily beats software running on
the fastest processor at twisty bit swizzling and XORfests like crypto)
and a certain benchmark presenting itself as a CPU benchmark uses that
special purpose hardware in some cases, mixing the results in with real
CPU benchmark results to present a single composite score.
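To make the distinction concrete, here is the ChaCha20 quarter-round in plain C
(the input values are the test vector from RFC 7539). It is exactly the kind of
add/XOR/rotate swizzling that runs on the general scalar pipeline, whereas a
chip with a dedicated AES unit retires an entire AES round in one instruction,
so folding hardware-assisted results into a "CPU" composite score tells you
little about the CPU:

```c
/* ChaCha20 quarter-round (RFC 7539): pure add/xor/rotate, so it exercises
 * the scalar integer pipeline of whatever CPU compiles it. */
#include <stdint.h>
#include <stdio.h>

static uint32_t rotl32(uint32_t x, int n) {
    return (x << n) | (x >> (32 - n));
}

static void quarter_round(uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d) {
    *a += *b; *d ^= *a; *d = rotl32(*d, 16);
    *c += *d; *b ^= *c; *b = rotl32(*b, 12);
    *a += *b; *d ^= *a; *d = rotl32(*d, 8);
    *c += *d; *b ^= *c; *b = rotl32(*b, 7);
}

int main(void) {
    /* Test vector inputs from RFC 7539 section 2.1.1. */
    uint32_t s[4] = { 0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567 };
    quarter_round(&s[0], &s[1], &s[2], &s[3]);
    printf("%08x %08x %08x %08x\n", s[0], s[1], s[2], s[3]);
    return 0;
}
```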
Funny how often the same people who criticize SPEC CPU for being
a compiler optimization target see no problem with a so-called CPU
benchmark calling up special purpose functional units on some chips
to greatly goose the results. Don't criticize a mote in someone else's
eye while ignoring the redwood tree log in your own.
Thank you for your detailed and interesting response. I think on the
technical side we are pretty much in agreement. Our difference is
in interpretation of the shortcomings of Geekbench in methodology
and reporting, their evolution across three versions, the nature of
those shortcomings, and their implications for its exploitation by
certain commercial interests.
As a benchmarking engineer working in a company that bought a license, I have
access to the source code *wink*
Awesome dude. Are you going to
1) technically justify all the obvious shortcomings of Geekbench,
2) technically criticize all the obvious shortcomings of Geekbench,
3) or simply give them a pass as "boys being boys"?
Many contemporary ARM chips and the systems they go into could easily
run SPEC CPU 2006 with modest effort. The fact that ARM and all its
licensees and allies have failed to submit a single run, despite their
explicit desire for roles in desktop and server computing, and have instead
relied on Geekbench (or worse!) for marketing purposes suggests the
difference between poor benchmarking and the best currently available
benchmarking is far from an academic anomaly but rather a strategy
serving important corporate interests by exaggerating the capabilities
of a certain family of chips.
BTW, you failed to address any of my other points in my previous
post. Surely as a "benchmarking engineer" you have some valuable
and insightful comments to add in support or in opposition?