Intel could sorely use someone with his tenacity and drive today.
BTW, there are a lot of active iOS devices that can't upgrade to the latest version as their hardware isn't supported.
LOL, that is one way to drive the user base upgrade cycle.
BTW, contrast iOS vs Windows.
Apple is trying to harness desire for the latest OS version to drive
sales of new hardware by limiting old hardware support by *new* OS.
Microsoft is trying to harness desire for the latest Intel processors to
drive up Win 10 uptake by limiting new hardware support by *old* OS.
Anyway, I think TSMC gets too little credit around here and successes/strengths of theirs tend to be heavily downplayed by some.
I have designed into a number of TSMC process technologies over
the years. They were ok but not anywhere near leading edge for
their feature size. You go to TSMC because they are to fabs what
McDonald's is to restaurants - ubiquitous, cheap, and you know
exactly what you are going to get. I haven't used them for a while,
but their very nature and business model - being usable by a
wide variety of customers - makes them a mushy general purpose
provider. That is perfectly ok for 999 out of 1000 customers, but for
the guy trying to take on Intel or IBM on their turf, not so much.
Look at Oracle/Sun SPARC processors since they moved to TSMC.
Their processors reach ok frequencies but they burn crazy amounts
of power to do so because the process isn't optimized for large die
size devices with thousands of high frequency global signals.
Now that the foundries have the process lead and 64-bit ARM technology has a significant price/power/performance advantage over other architectures
Just say no to drugs.
Meanwhile in the real world:
http://www.idc.com/getdoc.jsp?containerId=prUS41076116
Demand for x86 servers improved in 4Q15 with revenues increasing 8.0% year
over year in the quarter to $12.5 billion worldwide as unit shipments increased 4.0%
...
Non-x86 servers experienced a revenue decline of -5.4% year over
year to $2.9 billion, representing 18.6% of quarterly server revenue
...
ARM server sales fell in 4Q15 compared to the same time in 2014
I have to admit the last bit surprised me. You actually have to have
sales to start with for them to be able to fall.
Oracle has raised the price of hardware and associated services high enough
to make it profitable but this is driving down sales. The real problem is that
each new generation of processor is ever more expensive to develop and
put into production. Rising costs meet dropping sales. This effect killed
Alpha, Itanium, server MIPS, desktop PowerPC etc. It will eventually kill
SPARC too.
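To put toy numbers on that squeeze, here is a minimal amortization sketch; every figure in it is invented for illustration, not actual NRE or volume data:

```python
# Per-unit cost = development NRE amortized over shipped volume,
# plus marginal production cost. Rising NRE and falling volume
# squeeze the per-unit economics from both ends.
def per_unit_cost(nre: float, volume: float, marginal: float) -> float:
    return nre / volume + marginal

# Hypothetical generation N vs N+1: costlier to develop, fewer units sold.
gen_n  = per_unit_cost(nre=1.0e9, volume=1_000_000, marginal=500.0)
gen_n1 = per_unit_cost(nre=1.5e9, volume=700_000,   marginal=500.0)

print(f"gen N:   ${gen_n:,.0f} per unit")   # $1,500
print(f"gen N+1: ${gen_n1:,.0f} per unit")  # ~$2,643
```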
IBM will certainly be last man standing with z and POWER but eventually
the same basic economic principle will overwhelm them too. POWER will
be dead and z will be a software compatibility layer for big Xeon boxes.
Oracle SPARC sales down 15% YoY
http://s1.q4cdn.com/289076952/files/doc_financials/3Q16/Q316_Form8K_Exhibit99-1_Earnings_Release_Tables_03112016.pdf
Would be interesting if the communication is cache-coherent.
QPI is a coherent interface. Hard to see why they wouldn't make the
FPGA support coherency (or at least provide all necessary hooks to
allow the configurable logic to do so if desired).
It would be interesting to see when Intel manages to implement an Altera FPGA in its own foundry and on the same die as the CPU.
The first point is important (making the FPGA die in an Intel fab).
The second is far less clear. A Xeon is a pretty big die and so is a
useful FPGA. It may be more economical to keep the product MCM
based especially if 1) the FPGA die needs process options the Xeon
doesn't and vice versa, and/or 2) non-FPGA Xeons continue to sell
in far higher numbers than the combo product.
Sad to report that my Skylake NUC died yesterday, losing the ability to output video after only 3 weeks of 24/7 use. RMA underway. Some googling shows I'm not alone.
Sounds like the little box isn't getting enough cooling to the processor.
Did earlier models have similar issues?
I have turned off automatic updates on my Win 7 machines to avoid this kind
of nonsense.
Can't wait to see what the new Skylake based MacBooks look like.
Let's see how A9X holds up against Broadwell/Skylake both at 3.6 GHz.
Oh wait...
They won't replace PCs for everything but they will continue to knee-cap PC replacement cycles.
IMO Microsoft's insane antics with Windows are hurting PC sales more than
smart phones are (otherwise Mac sales would be down too).
That being said, folks with disposable income to spend on electronics but
not wanting to touch their existing XP/Win7 PC might spend on a new smart
phone instead. In that sense smart phones are the beneficiary of PC sales
slump rather than the cause.
I think you entirely missed the point of my post.
It doesn't matter if it is ARM, x86, MIPS, POWER, SPARC or whatever.
It doesn't matter if we are talking about 14 nm CMOS, or 5 nm carbon
nanotubes.
The amount of complexity that can be packed into, and the amount of heat
that can be removed from, a phone form factor is far more limited than in a
laptop form factor (which in turn is far more limited than even a small form
factor desktop). That is simple math and physics.
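For a sense of scale, a minimal sketch with assumed sustained power budgets; the watt figures are illustrative assumptions, not measurements:

```python
# Rough sustained power dissipation budgets by form factor. The point
# is the order-of-magnitude gap, not the exact (assumed) numbers.
budgets_w = {
    "phone (passive cooling)":     3.0,
    "thin laptop (small fan)":     20.0,
    "SFF desktop (real heatsink)": 65.0,
}

phone = budgets_w["phone (passive cooling)"]
for form, watts in budgets_w.items():
    print(f"{form:28s} {watts:5.0f} W sustained ({watts / phone:.0f}x phone)")
```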
If you want the convenience, bandwidth, and usability of a PC style user
interface then severely limiting system capability by plugging a phone
sized system into a docking unit to access full keyboard, mouse, and a
nice large display is IMO insane. Much better to have separate PC and
phone and design both to allow them to automagically stay in synch with
each other in the manner the user desires.
Perhaps a powerful smartphone docked into a KVM station can perform as a PC. At which point it becomes a PC.
Not even then. Sure, it will be capable of a much better user interface
but it will still be a POS PC.
Simple physics. A smart phone is a tiny device, the circuit board inside
is minuscule, has almost no component height capability, and extremely
limited thermal dissipation capacity. The screen, battery and RF stuff
take up a major portion of what little volume there is.
OTOH, a compact laptop, small form factor desktop, or all in one monitor
style PC has far more capability for component size and count, board area,
and power dissipation. A regular desktop is a room-size supercomputer in
comparison when it comes to packing computational, graphics, and storage capability.
We are almost out of Moore's Law. What you can pack into a thin bar of
soap will always be a big compromise compared to what you can do in real PC
form factors.
IDC server data:
http://www.idc.com/getdoc.jsp?containerId=prUS41076116
highlights:
- all servers: +5.2% YoY by revenue, +3.8% YoY by units
- x86 servers: +8.0% YoY by revenue, +4.0% YoY by units
- non-x86 servers: -5.4% YoY by revenue
IDC finally talked specifically about ARM servers:
ARM server sales fell in 4Q15 compared to the same time in 2014, with HPE
Moonshot system deployments representing the largest single component.
Interesting that IBM *grew* its server sales, all non-x86, by 8.9% even though
non-x86 server sales overall were down. IBM now has 75.8% of the non-x86
server market by revenue. Looks like Larry's plan to boost SPARC sales by
backstabbing Itanium has failed big time. I hope HP gets a big settlement.
http://www.law360.com/articles/696509/oracle-s-anti-slapp-bid-panned-by-court-in-4b-hp-row
Windows. x86. Joined at the hip, and always will be.
That's a one way connection.
There is a wide variety of OSes that run on x86. Practically any OS
newer than 20 years was probably developed on x86, and x86 remains
its primary platform. Many legacy OSes a lot older than that have
been or are being ported to x86 to remain relevant.
The x86 ISA is the standard architecture for general purpose computing
as well as experimental OS and software development.
So really not a lot of good choices.
I suspect a lot of potential PC sales over the last few years have
been lost by people wondering WTF is MS doing to Windows and
just sticking with their existing XP and Win7 systems and buying a
new smart phone or game console instead.
Intel really needs to invest in an insurance policy infrastructure
around a non-Windows PC capable of running Windows software
including games. Mac is too niche to pick up the slack and is just
as susceptible to an unpredictable partner.
BTW, what did you decide on for an upgrade if anything? You going with W7 or OSX?
I am waiting to see what the coming Skylake MacBooks look like.
BTW I would have bought a Dell XPS13 last year if it had been available
with a clean install of Win7 on it from the get-go. The older Dell biz
models available with Win7 were/are just too decrepit going forward.
Windows 10 PCs 'Do More' Than Mac
Surveillance, spying, eavesdropping, monitoring, key logging...
arm on the server is going to be an overnight success after 5+ years of trying
Let's just set aside the minor question of x86's crushingly overwhelming
superiority in the depth and breadth of software for servers.
Let's just set aside the ridiculously huge advantage x86 server products
enjoy in economy of scale and distribution channel efficiency.
When do you think any ARM offering will be *as good* as an application
equivalent x86 server product from Intel?
If you can't even be as good as x86 let alone clearly superior then all of
those other factors will grind ARM wannabes into a fine, money-losing
dust.
Oh no, Intel is behind market leader Qualcomm in modem technology so it must
suck. After all, Qualcomm could match Intel in x86 MPU technology in a jiffy if
only it wanted to.
Semicos have areas of expertise. Trying to move into some other major semico's
primary reason for existence is never quick or easy.
MSFT gave away for free to current Windows PC owners
So many of whom rejected it that MS turned to sleazy upgrade tricks to get
users to accidentally upgrade to it.
Why are they posting to youtube instead of spec.org?
With excellent leadership an aircraft carrier gets to where it needs
to be ahead of time and already turned into the wind.
Nah. Dude is 62 years old
Which means he survived and prospered through Intel's red-in-tooth-and-claw
days under Grove and Barrett.
I bet he is old school and is not too impressed by the vapidity and complacency
of "New Intel" under BK.
Since when is a process/manufacturing guy responsible for the release
schedule of the Nth product on a mature process?
Maybe he was sick and tired of trying to hire best candidates for his
groups only to be told by HR they weren't diverse enough.
There are a lot of people in the world who care about whether when they buy something they are making the world a worse place to live in
If you actually look at the real effects underlying the buzzword bingo you would
find most of these movements are ineffective at best and in some
cases counterproductive to their stated goals. When corporations
don these robes it is PR at best and at worst a cynical attempt to
separate self-righteous fools from their money with words rather than
better products.
Intel management seems to highly value burnishing its SJW credentials
since the current CEO came in. Is he trying to get party invites from the
in-people in Hollywood? Get invited to a sleepover at the White House?
My suspicion and fear as a long time Intel investor is that this whole virtue
signalling PR BS Intel has been spinning out for the last few years is a
smokescreen to divert attention away from the fact that the new generation of
upper management is incompetent, out of their depth, and unable to deal
with a rapidly changing competitive landscape. Products are late and off
target. Process development seems to be stumbling compared to the past.
There is no effort to diversify away from MS's increasingly erratic and
self-destructive stumbling about. Intel's past leaders faced far larger and
more formidable competitors and challenges and attacked them straight on
with determination, focusing the energy of the entire corporation into the
fight. The current CEO seems far more eager to get patted on the head
for being oh so politically correct than to kick competitors' asses up and
down the block and deliver ever greater value to shareholders.
With red warning lights flashing so glaringly the only reason I haven't
dumped INTC (yet!) is the company still has incredibly deep institutional
knowledge as well as the most valuable franchise in semiconductors. I
also hold out hope that the BOD is watching BK carefully with a cold,
hard, dispassionate eye and is ready to take swift and decisive action if
BK & crew keep worrying more about social justice press releases
than about running a competitive multinational semico.
Intel also manufactures the world’s first commercially available “conflict-free” microprocessors.
Damn, what contentless, BS-oriented, buzzword-driven, brainless entities
western societies and their corporations have become.
Are Intel processors also LGBT friendly and don't hurt porpoises? Do they
think of the children first? What about the size of their carbon footprint?
Do they check their silicon privilege and avoid appropriating the voice of
minorities like III-V semiconductors?
Bring on a cleansing gamma ray burst from a nearby star. Nothing intelligent
down here any more to lose apparently.
This is all predicated on Moore's Law which is ending soon.
When the very last feature size shrink in silicon is done there will still be
many years of incremental advances and maturing left to be done just on
the process side. The design side has always process hopped and never
really settled down to fully exploiting any single process to its fullest
possible extent. The economics side is driven by economy of scale and
learning curves almost as much as Moore's law and there is also a lot of
corporate consolidation still to come.
Put all of those factors together and there is nothing about the current
semi tech trajectory to suggest we won't be able to insert significant
computational and connectivity capabilities into just about any object
for around a buck or two in the mid term future (5 years or less).
Like Intel, Nvidia is probably beating the overall PC market trend to a
large extent by taking market share out of AMD's hide.
The new digital "journalism" is like the old street performer organ grinder
with his monkey holding a tin cup for receiving pennies. The similarity is
the payment is still pennies, the difference is the "journalist" now has to
act the part of both the organ grinder and the monkey to put on the most
hideous spectacle possible to attract a few more pennies. With so much
monkey acting going on don't be surprised by what is being flung around.
That's why I would like to see Intel disclose transistor counts on the 22nm/14nm SoC products that target much lower Fmax & much lower Vmin than the comparable high performance notebook/desktop Haswell/Broadwell/Skylake parts.
The trick here is Intel's process technology has historically been developed
with high frequency, high performance MPU design in mind. As such the metal
stack was optimized for low RC delay rather than high interconnect density.
So even if you wanted to make a 2 GHz CPU core instead of the normal 4+
GHz core your effective transistor density may be constrained by interconnect
density.
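A crude parallel-plate sketch of that tradeoff; the dimensions and constants below are invented for illustration and are not any real process's metal rules:

```python
# Why a metal stack tuned for density gives up RC delay: at a fixed
# 2:1 thickness-to-space aspect ratio the sidewall capacitance per mm
# stays roughly constant, but halving width and thickness quadruples
# resistance. Crude parallel-plate model, illustrative numbers only.
RHO   = 1.8e-8    # ohm*m, roughly bulk copper resistivity
K_EFF = 3.0       # assumed effective dielectric constant
EPS0  = 8.85e-12  # F/m, vacuum permittivity

def wire_rc_per_mm(width_nm, space_nm, thickness_nm):
    """R*C product for 1 mm of wire, sidewall coupling to two neighbours."""
    length = 1e-3  # metres
    r = RHO * length / (width_nm * 1e-9 * thickness_nm * 1e-9)
    c = 2 * K_EFF * EPS0 * thickness_nm * 1e-9 * length / (space_nm * 1e-9)
    return r * c

loose = wire_rc_per_mm(width_nm=80, space_nm=80, thickness_nm=160)  # perf-tuned
tight = wire_rc_per_mm(width_nm=40, space_nm=40, thickness_nm=80)   # density-tuned

print(f"loose pitch RC/mm: {loose * 1e12:5.0f} ps")
print(f"tight pitch RC/mm: {tight * 1e12:5.0f} ps ({tight / loose:.1f}x worse)")
```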
Although Intel has described SoC variants of its processes that provide high
interconnect density it remains to be seen what shipping products use it.
Put another way, Intel's mainstream process is likely inefficient for building
dense, low frequency, GPU dominated SoCs. Conversely, matching Intel's
mainstream processors in frequency and performance using TSMC process
technology for example is extremely challenging to say the least.
As ARM tries to go up in performance and x86 tries to come down in area,
both are fighting out of their natural element and will have to adapt.
We also don't know the transistor counting methodology. A clock buffer
or global signal repeater may have a final stage composed of multiple
transistor fingers (e.g. a 10 um wide transistor may consist of four 2.5 um
wide transistors connected in parallel for easier layout). CAD tools that
do LVS/ERC can count transistors in different ways depending on options.
An N finger transistor or N transistors connected in parallel may be
counted as one transistor (looking at connectivity) or N simple physical
transistors (counting gate rectangles in layout). Picking one methodology
over the other can significantly change the overall device transistor
count. This ambiguity is multiplied with FinFET transistors, which can
only be widened by instance repetition.
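A toy sketch of the two conventions; the device names and finger counts are made up:

```python
# A tiny 'netlist' of multi-finger devices. Convention A counts by
# connectivity (an N-finger device is 1 transistor); convention B
# counts drawn gate rectangles (an N-finger device is N transistors).
devices = [
    ("clk_buf_final_stage", 4),  # 10 um device drawn as 4 x 2.5 um fingers
    ("global_repeater",     2),
    ("logic_gate_nmos",     1),
]

by_connectivity = len(devices)                            # -> 3
by_geometry     = sum(fingers for _, fingers in devices)  # -> 7

print(f"connectivity count:   {by_connectivity}")
print(f"gate-rectangle count: {by_geometry}")
```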
Which number is collected and reported? I guess that depends on what
narrative you are trying to support.
You do realize that designing to maximize a blend of 1) high Fmax, 2) low
Vmin, and 3) high yield means using larger than minimum size transistors
and thus reduces effective transistor density?
A really good smartphone like an iPhone is an indispensable companion.
I have a smart phone too. I use it for voice and text communication
and the odd game while waiting in line. It is indispensable only if I
am facing an unavoidable need to be available for communication
while on the move. I like the feeling of being off the grid otherwise
and leave it at home whenever possible.
My smart phone has not reduced my need for my PCs one iota. There
is nothing I do computing wise I wouldn't rather do on my PCs. I would
far prefer to write an email on my PC than send a text on my phone
because the user interface (full keyboard plus mouse) is far faster and
less error prone than the touch screen on my phone. When I am at
home or office the phone sits unused near my PC unless I receive a
call or text. Create or review a document or spreadsheet or write code
or control lab instruments with a phone? May be technically possible
but utter and sheer lunacy to do so if a PC is available.
I think the term "CPU" is used in this slide to denote an MPU product SKU,
not a CPU core in the technical sense. An x86 CPU core is a black box, the
heart of darkness, pandora's box.
For very high volume applications (or an application that's extremely high value to the customer), Intel could actually build in accelerators in silicon (more efficient than FPGA implementation).
A custom silicon implementation of any complex function is way more compact
and efficient than dropping it into an on-chip FPGA resource.
But you had better be damn sure what you are implementing is a fully settled
standard. The FPGA based solution is far more forgiving in that respect.
That graph has a built-in assumption that you keep the
product die area relatively constant and put the extra transistors
from shrinks to good use.
If you keep shrinking existing products because you can't afford
to power more transistors or can't put them to effective use, then
die size shrinkage will cause non-scaling portions like I/O pads
and the die seal region to flatten the line towards horizontal.
So far integrated GPUs are able to gobble up any number of extra
transistors for some delivered utility but eventually bandwidth
starvation will limit the marginal benefit. Adding more CPU cores is too
simplistic a path because of device-level power budget limitations.
To run 8 cores instead of 4 in the same power budget for example
you will have to drop clock frequency by roughly 25%. For a lot
of software the loss of single thread performance will outweigh
higher potential multi-threaded throughput.
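A back-of-envelope sketch of that frequency hit, assuming dynamic power per core scales as f*V^2 with voltage roughly proportional to frequency (so per-core power goes as f^3); both scaling assumptions are simplifications:

```python
# Constant total power budget: N_old * f_old^3 = N_new * f_new^3,
# so f_new / f_old = (N_old / N_new) ** (1/3). Ignores static power
# and voltage floors, which push the real-world penalty higher.
def freq_ratio_at_equal_power(old_cores: int, new_cores: int) -> float:
    return (old_cores / new_cores) ** (1.0 / 3.0)

ratio = freq_ratio_at_equal_power(4, 8)
print(f"8 cores vs 4 at equal power: f scales by {ratio:.2f} "
      f"(~{100 * (1 - ratio):.0f}% frequency drop)")  # ~21%, ballpark
```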
You misspelled sceptical.
The foundries are closing the gap with Intel
Keep in mind that what a process is called and what it actually delivers are
increasingly disconnected. Foundries seem to be closing the gap with Intel
in naming artistic license more than in actual silicon composition/capabilities.