Germany and NY may agree to provide the support in order to avoid catastrophic unemployment in Dresden and NY.
Catastrophic unemployment? How many people do you think work in a fab?
Unsuccessful tech companies often disappear. This is the law of nature
and always has been. From the debris of one failure come the seeds of
new ventures, potentially far more successful ones.
BTW, I worked on some chips that got fabbed by Chartered before they
were acquired by GF. They were very reasonable folks to deal with.
Afterwards it was dealing with real dicks - inflexible and they wanted
ridiculous wafer pricing compared to competitors. Clearly they didn't
want our business. We said adios for all future chips. F*** GF, good
riddance.
Hector Ruiz mugged them good and proper, offloading his billion-dollar lemons onto them before it was apparent how worthless they truly were.
Then IBM came along and paid them to take their fabs off their hands.
The joke might be on big blue if GF can't afford to stay in business and
thus won't be there to churn out tiny quantities of z and Power chips.
Intel needs protection from antitrust regulators in the European Union and even in the USA.
Employing a large number of “workers” is an antidote.
1) Intel is a public company responsible for enhancing value for
its shareholders, not a private welfare system.
2) Doing as you suggested will not protect Intel from future money
grabs by greedy government agencies in cash strapped regions. It
just guarantees higher overhead and lower profits. INTC share price
would drop with such signs of obvious insanity by Intel management.
I really don't know where you get these crazy ideas about Intel
owing anything to poorly run has-been competitors like GF. Why
don't you shed tears about past Intel road-kill like TI, Motorola,
DEC, Cyrix, Transmeta etc?
Intel needs additional fabs to take advantage of the expected demand
Rubbish.
Intel has mothballed fabs that could be fully fitted and brought on-line
if necessary to meet demand.
Apple's A9X is a 147 mm2 device according to Chipworks
http://www.fool.com/investing/general/2015/11/27/inside-the-apple-inc-a9x-chip.aspx
Seems like a pretty expensive tablet chip. I guess that is why Apple
only sells it in a pretty expensive tablet.
ARM said recently their company website is run using ARM servers
Wow, their web site? I can imagine the huge traffic it must handle day
in and day out. The few times I tried to find a technical document on
their site it was as slow as molasses running uphill in January. Perhaps
that was something they should have kept to themselves.
Why only their web site? Why not their payroll? Their CRM? Their ERM?
Because non-trivial server software like SAP, Oracle, and thousands
of expensive applications that servers are bought to run don't have
ARM versions. Creating, verifying, and maintaining ARM versions can
easily run into 6 or 7 or more figures per application. Who is going to
invest that? The ISVs themselves won't for a non-existent customer
base. What ARM server chip/box vendor is going to throw billions down
on the table to try to jump start the market?
They are inferior in performance and performance/power, and if they have any advantage in performance/price, no one is interested in the absolute difference in cost.
That is all important but the ultimate roadblock is software.
The vicious chicken and egg question of getting an installed base for a
server architecture without software simultaneously with trying to get
ISVs to create software versions for a server architecture without an
installed base.
This held Alpha back in its day and suppressed uptake of Itanium.
ARM server pom-pom shakers seem to think that once you have Linux
and gcc support for ARM64, then Bob's your uncle.
Someone has to make a $10B bet over 5 years that the world needs
ARM servers, provide thousands of free boxes to many ISVs, and PAY
them millions to create and market ARM versions of their applications.
Even then it would at best establish an ARM64 software base a tiny,
tiny fraction of x86's.
The fact that it uses a different mobile workload and a different PC workload is a problem.
The far greater sin is the fact that it generates scores that don't
obviously differentiate between the two versions of the benchmark.
One should be scaled by 1000 for example so it is obvious when
invalid comparisons are being made across benchmark versions.
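The fix could be as simple as a version-dependent scale factor baked into the published number. A minimal sketch of the idea (function name and scale values are my own invention, not anything GB3 actually does):

```python
# Sketch: scale reported scores by a version-dependent factor so that
# comparisons across benchmark versions fail the eyeball test instead
# of looking plausible. Values here are illustrative only.

SCALE_BY_VERSION = {1: 1, 2: 1000}  # e.g. v2 scores run 1000x larger

def report_score(version: int, raw_score: float) -> float:
    """Return the published score for a given benchmark version."""
    return raw_score * SCALE_BY_VERSION[version]

# A raw 2500 reads as 2500 under v1 but 2500000 under v2, so mixing
# the two versions in one chart is immediately, visibly wrong.
print(report_score(1, 2500), report_score(2, 2500))
```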
Definitely not the ~1% that GB3 author claims. Ouch.
If 30% is a realistic hit between the two versions of the benchmark
then the GB3 author has a serious credibility problem to the point
of intellectual dishonesty.
And next he can unmask WebXPRT and SPECint2006, or does he think they are wonderful? :)
SPEC CPU 2006 has a scripted run, verification, and reporting
environment along with standard defined workloads for profiling
and performance measurement runs. Submitted SPEC CPU scores
are independently reviewed for sanity and technical compliance
before being published at spec.org.
It is the product of professionals.
CPU2006's primary flaw is its age, not piss poor construction and
flawed methodology like GB.
If true, final stake through the heart of GB3 credibility as anything
other than a thinly disguised ARM marketing tool.
That is more the way EU rolls than Korea.
In the past Korea has gotten up in the business of U.S. semicos when
one of its golden children got into antitrust trouble in the U.S.
Any recent or pending U.S. fed action against Samsung or LG?
You got it.
So what was he holding in his hand?
An empty package of the type that will be used in two years?
(I have a rude answer that pays homage to the Godfather if you
didn't like the first one.)
And Intel's process is superior to TSMC's and/or Samsung's.
Intel's process is "superior" for
1) the type of products that Intel produces
2) the way that Intel designs its products
For someone else making different types of products designed in
different ways Intel's process may be considered difficult to use,
expensive, over-kill and/or feature poor.
There is something keeping Intel from winning these contracts, and I don't think it was a question of price, since I'm sure Intel would have loved this business.
Apple squeezes its suppliers like few others. For Intel selling x86
chips to Apple is ok because for each processor Intel sells to Apple
it sells 9 or 10 to someone else. That is a relatively power balanced
relationship. But competing with TSMC and Samsung for spinning
wafers on cost sounds like a recipe to lower overall GM for relatively
little to the top line and virtually nothing to the bottom line.
I have little doubt that Apple and Intel talk from time to time about
foundry services but I think the gulf between the two in both pricing
and terms and conditions remains huge.
Oh, and the Surface 3 with keyboard and pen is $600 at Costco.com, and in my opinion it is a much better value than the iPad Pro, although it has a smaller screen. It runs full Windows and handles everything I have thrown at it.
The iPad Pro is a great iOS phone in a 12" tablet format.
The Surface Pro is a great Windows laptop in a 12" tablet format.
It is clear which is more useful.
Atom is a joke. Don't even mention it in the same sentence as something like the A9X
You're right. The A9X burns far more power per core and likely has twice
the TDP at the socket level.
Very different design targets and constraints. Apple wanted to challenge
Core ULV in performance. Intel wanted to challenge ARMH cores in power.
I have been leaning towards buying a Macbook lately but crap like this:
"I think if you're looking at a PC, why would you buy a PC anymore?" Cook told The Telegraph during his tour. “The iPad Pro is a replacement for a notebook or a desktop for many, many people. They will start using it and conclude they no longer need to use anything else, other than their phones."
http://www.bizjournals.com/seattle/blog/techflash/2015/11/apple-ceo-microsofts-first-ever-laptop-is-deluded.html?ana=yahoo
makes me nervous about getting into the Mac world.
Tablets and phones, seriously? Apple expecting a world where no one
does any actual work any more?
A9X CPU also sustains its performance/clocks very nicely under heavy load:
A small phone and a large tablet have very different capabilities for
thermal dissipation without becoming uncomfortably warm for the user.
I wonder if the A9X result you linked to uses the small dataset version
or the large dataset version of GB3. Kind of problematic comparing the
performance of different systems when they could be using two different
versions of the benchmark, but that was probably the benchmark author's
intention all along.
I remember when Intel hung around the high teens for what seemed
like an eternity. So yeah it does go somewhere for the patient.
When voice can easily drive word processing and financial tasks, the PC as we know it will cease to exist.
How will this work in an open office environment?
Also, I can use a keyboard and mouse all day but would probably
become hoarse after an hour of talking to my computer.
I also think you grossly underestimate the frustration level of
doing a lot of detailed work by voice. Think about how hard it
is telling a person how to do a detailed task they are unfamiliar
with. Now think about doing that when 5% of what you say is
grossly misheard or misinterpreted with no surrounding context
of understanding or common sense at all.
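The compounding effect is easy to quantify. A back-of-envelope sketch, assuming that 5% misrecognition rate applies independently to each word (my simplifying assumption):

```python
# Probability that an n-word dictation contains at least one
# misrecognized word, assuming an independent per-word error rate.

def p_at_least_one_error(n_words: int, word_error_rate: float = 0.05) -> float:
    return 1.0 - (1.0 - word_error_rate) ** n_words

# At 5% per word, a 100-word paragraph almost always needs fixing:
print(round(p_at_least_one_error(100), 3))  # → 0.994
```

So virtually every dictated paragraph would need a manual correction pass, which is exactly where a keyboard and mouse still win.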
the person who led ARM to leadership in mobile. :)
You think that was a grand strategy?
ARM couldn't compete with MIPS, PowerPC, and SuperH in the medium
and high end embedded control market in the 90s so it went the route
of very simple licensed cores for low end applications - basically the
floor sweepings of the embedded control spectrum. The advantage of
scalar, low frequency designs is exceptionally low power consumption.
When cell phones came along ARM offered the lowest power cores at
the cheapest licensing terms. Phones then didn't need much CPU power
and low power consumption and low cost were the key decision factors.
ARM didn't win the mobile market, it fell into it by default.
I think the PC market is suffering from self-inflicted injuries as much as
competition from other form factors. MS's disastrous Win8 launch and
odious Win10 force-fed spyware/malware has done/is doing untold and
probably permanent damage to the PC's position in the consumer space.
I have been wanting to buy a laptop for myself since mid year but the
unappetizing choice of Macbook/Win8 thin&light/Win7 clunky dated biz
laptop bought sight unseen on-line keeps me kicking the decision down
the road month after month. On the bright side, MS's decision to stop
Win7 sales at the end of next year will force my hand and drive me to
unneeded build to order desktop and overpriced biz laptop purchases
in 2H16. Then it is just a matter of finding AV software strong enough
to keep Win10 upgrades at bay. :-/
Probably just a nuisance suit but with legal stuff you can never
be sure of the outcome.
http://legalnewsline.com/stories/510646458-amd-faces-suit-over-alleged-misrepresentation-of-new-cpu
In claiming that its new Bulldozer CPU had “8-cores,” which means it can perform eight calculations simultaneously, AMD allegedly tricked consumers into buying its Bulldozer processors by overstating the number of cores contained in the chips. Dickey alleges the Bulldozer chips functionally have only four cores—not eight, as advertised.
Yeah the memory system and the processors in the Power8 box are
burning a ridiculous amount of power for a departmental class server.
If that's the best RISC has got, the ARMy have no chance apart from niches.
The ARM based entries should do a lot better than Power8 boxes in
power consumption but they will likely trail Xeon and Power8 badly
in performance and will probably trail Xeon in performance/Watt.
Given the zero market share starting point and abysmal economy of
scale, the ARM entries will compete with Xeon pricing only to the
extent that the ARM chip vendors and/or their OEM partners want to
dump truckloads of cash down the drain quarter after quarter in the
hopes of establishing a beachhead for ARM in the server market.
I have yet to see any rationale to ARM based servers beyond the
fact that they aren't x86, Power, SPARC, or Itanium. Novelty alone
isn't really a buying factor for IT professionals.
This is interesting:
We found that the POWER8 needs more than one thread to deliver good performance: with one thread we only achieve 62% of the performance of a Haswell core at the same speed. Using the mcpu=power8 compiler flag did little more than boost the performance by 1-3%, which is within the margin of error of this benchmark. So your (occasional?) single threaded code will fare badly on POWER8.
Like I said recently, it appears that Intel and Apple are the only CPU
designers that give a shit about single thread performance any more.
The simple reason is it is really, really hard and requires a combination
of excellence in microarchitecture, circuit design, and physical design.
If you can't compete in execution speed then the only course left is
throughput and you can either throw cores (AMD and ARM) or a high
level of SMT threading (IBM and Oracle) at the problem.
I guess it is for developers who are happier with x86, and there are quite a lot of embedded programmers like that.
The extremely rare application without a version for x86 likely has multiple
better alternatives already available on x86.
The vast majority of useful software is available for x86, and more than
likely x86 is the primary development platform: it gets new versions
sooner, gets better support than all other architectures, has the most
users, and is the least buggy.
x86 is the lingua franca of general purpose computing.
Zen doing fine
Which means it is performing close to design goals.
The question is are those design goals good enough to raise AMD sales,
ASPs, and margins given the competitive landscape when it eventually
reaches the market.
It is definitely worth the money if you want the best performance bang for the buck.
My understanding is icc's biggest advantage over gcc and vs is its
ability to aggressively vectorize code and generate good SIMD FP
code.
If your code uses little or no FP or isn't computationally intensive
then there may be no real benefit to choosing icc.
As for the SPECint2000 comparison, that shows the A9 is probably more equivalent overall to a 2.0-2.3 GHz C2D, which is still a fine achievement.
Sure but keep in mind the vast difference in memory system.
The A9 connects directly to a single DRAM with no expandability.
The C2D communicates to DRAM DIMMs over an FSB and through a chipset.
If the memory systems were exchanged it would be a very different
picture.
So would you start Core light with Skylake shrinks optimized for power, and roughly what specification would your Core heavy need to offer enough differentiation?
Developing a high performance microarchitecture is costly and time
consuming.
I would propose keeping Core heavy and Core light as similar as
possible: same OOOE, integer ALU, load/store, and scalar FP capability.
Core heavy would have twice the SIMD FP execution capability and twice
as wide load/store paths in the core caches.
The key differences are frequency/voltage/power targets around which
circuits for Core heavy and Core light would be designed and optimized.
Core light would max out around 2.5 GHz and 2.5W. Core heavy would
max out around 4.5 GHz and 15W. The sweet spots would be around
2 GHz/1.2W and 4 GHz/12W respectively.
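To first order, the gap between those operating points follows from the dynamic-power relation P ≈ αCV²f. A rough sketch with supply voltages that are purely my own illustrative assumptions (the post above specifies only frequencies and wattages):

```python
# First-order dynamic power comparison, P proportional to C * V^2 * f,
# holding switched capacitance equal between the two core variants.
# The voltages below are assumptions for illustration, not from the post.

def rel_dynamic_power(freq_ghz: float, volts: float) -> float:
    return freq_ghz * volts ** 2  # arbitrary units, C factored out

light = rel_dynamic_power(2.0, 0.8)  # assumed ~0.8 V at the 2 GHz sweet spot
heavy = rel_dynamic_power(4.0, 1.1)  # assumed ~1.1 V at the 4 GHz sweet spot
print(round(heavy / light, 1))  # → 3.8
```

Frequency and voltage alone give roughly a 3.8x gap; the rest of the ~10x spread between the 1.2W and 12W sweet spots would come from Core heavy's doubled SIMD and load/store width (more switched capacitance) and its leakier high-frequency circuit optimization.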
I would like to see even larger distinction between the server cores and the client Core processors
I don't think the core usage model split is as simple as client vs server.
See my previous post.
IMO the more natural split is:
high frequency/power per core:
- desktop, workstation, mobile workstation/desktop replacement laptop,
HPC servers
moderate frequency/power per core:
- phone, tablet, mainstream laptop, scale-out servers, commercial servers
The cores should be very similar in uarch, perhaps logically different
only in FP SIMD execution width. The key difference is the target power
and frequency range driving circuit design and optimization.
I know you like to rag on Atom, but it is still only a dual-issue CPU competing against triple-issue A57/A72 and hex-issue A9. There really is so much future scope to develop both Atom and Core.
I think Intel should make Atom even leaner and lower power and redirect
it to replace the Quark (486? P5?) core for embedded control, IoT, and
also target cheap ass smart phones.
Intel should probably split Core into two similar uarchs, Core heavy and
Core light. Core heavy pushes single thread and FP performance and is
circuit optimized around higher frequency and power targets. This would
be used in high end laptop, desktop, workstation, and moderate core count
processors for low socket count servers for HPC and technical computing.
The Core light would have lighter FP resources and be circuit optimized
around moderate frequency and lower power targets. It would be used in
products for high end phones/phablets, tablets, low to medium end laptops,
and high core count chips for scale-out servers and high core count chips
for scale-up, high socket count, high RAS, commercial workload servers.
this must be at or better than Alpha EV7 performance levels now
Keep in mind that the EV7 was made in a 180 nm process, a tad coarser
than 16 nm. The A9 also has higher memory bandwidth, albeit with far less
expandability than the EV7. Ah, the joy of many short wires and no package
crossings.
It seems like a pretty competently designed high performance general
purpose computing core. Looks like Apple is on the road to becoming
the only competitor to Intel for single thread performance now that AMD
is way beyond useless and IBM is going thread crazy like Oracle.
Intel has some options to increase non-MPU wafer starts and take semi
dollars from other companies.
The usage of eDRAM cache chips for graphics and L4 can be expanded
to include more mainstream, higher volume SKUs. This can further reduce
the market for discrete GPUs, particularly for laptops. This hurts both
Nvidia and AMD by reducing the market for GPUs as well as DRAM
vendors selling GDDRx style devices.
The question is how much extra tooling is needed to make 3DXP in an
MPU logic fab. If this is minimal then Intel could in theory rapidly ramp
up 3DXP production in-house and displace a good chunk of the DRAM
and flash going into servers today. That would probably hurt Samsung
the most.
Keep in mind these strategies must make sense for Intel from the big
picture point of view. They have to complement its MPU segmentation
strategy and preserve overall gross margin, not just bring in extra bucks.
A slash and burn strategy just to hurt competitors is probably not in the
short term or long term interest of shareholders.
Those are names of licensed cores. What *devices* beat Atom in an
industry standard benchmark?
As to the subject he brought up, it does not matter if Atom is not the absolute performance leader in mobility.
Can anyone point to an non-Apple ARM processor that beats Atom in
CPU performance?
If Atom's problem was lack of performance, it should be doing worse
in tablets than in phones because the former have higher performance
requirements. The exact opposite is true. In fact the performance of
Atom is fine for tablets. In phones Atom is behind in the integration of
RF and system level features.
Atom is doing about as well in phones as ARM is doing in servers and PCs.
Atom is doing far better in tablets than ARM is doing in servers and PCs.
So who is behind in their new market penetration agenda?