Would probably need a DRAM cache for best performance.
You would need almost as much DRAM as you do now because XPoint
has limited write cycles (~10^7). You could probably execute code right
out of XPoint and perhaps even keep a massive, low-update-rate database
in XPoint (with appropriate new support in the OS to remap write-hot
DB pages across different physical XPoint pages over time to level wear),
but not active program data space (stack, heap, paging regions, I/O
buffers, etc.). That needs the effectively unlimited write capability of DRAM.
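The wear-leveling remap mentioned above (moving write-hot pages across physical XPoint pages) can be sketched in a few lines. This is a toy illustration with made-up thresholds, not how any real OS or XPoint controller would work:

```python
# Toy wear leveler: logical pages map to physical XPoint pages, and a
# page that approaches its write-cycle budget gets remapped to the
# least-worn free physical page. All numbers are illustrative.

class WearLeveler:
    def __init__(self, n_physical, endurance=10_000_000):
        self.endurance = endurance
        self.writes = [0] * n_physical        # per-physical-page write count
        self.map = {}                         # logical page -> physical page
        self.free = set(range(n_physical))    # currently unmapped physical pages

    def write(self, logical_page):
        phys = self.map.get(logical_page)
        # Remap write-hot pages before they exhaust their budget
        # (assumes a free page is always available in this sketch).
        if phys is None or self.writes[phys] >= self.endurance * 0.9:
            if phys is not None:
                self.free.add(phys)
            phys = min(self.free, key=lambda p: self.writes[p])
            self.free.discard(phys)
            self.map[logical_page] = phys
        self.writes[phys] += 1
        return phys
```

A real implementation would live in the page-table/TLB machinery or in the device controller and would also have to migrate the page contents, but the bookkeeping idea is the same.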
The best way to think of Xpoint is as superfast flash. It won't replace
DRAM in any major way.
Well, you have to be gullible enough to install something from a website.
A lot of Mac users apparently think their platform is invulnerable
to exploits. This overconfidence makes them ripe for the picking.
This is quite in contrast to experienced Windows users who are
very aware of malware and tend to be rather suspicious of such
bait.
There's also a lower risk of virus infections.
LOL, yeah sure.
http://www.theregister.co.uk/2015/08/04/apple_sudo_bypass_exploit_wild/
The amusing vulnerability in Apple's OS X that grants administrator-level access to anyone who asks is being exploited in the wild by malware.
Anyone logged in to a vulnerable OS X Mac, or any software running on it, can use the security hole to gain the same privileges as the powerful root user, meaning they can install new programs, change files, remove or add new users, wreck the system, and so on, at will.
Yet no one says a thing about a policy that would cause a PR s***storm
of unimaginable vehemence and anger if the incentive was reversed.
The most amazing thing about political correctness is how it wears its
bright mantle of hypocrisy without the slightest bit of embarrassment.
They didn't require that you use a Mac but most of the work is done in Unix so having an OS that natively runs it makes things easier.
Wouldn't Linux on a PC have been a better choice in this case?
One of my coworker's sons just got a new job and he has to get a Mac (employer kicks in $1,600).
A Windows PC was not an option let alone the preferred option? Seriously?
What field of work is he in? Marketing? Public relations?
It came with 8.1. We will wait to see how things shake out over the next
few weeks before upgrading to 10.
Bought a new computer today. Took my university bound daughter
shopping and we decided on a 15" ASUS notebook with an i5-5200u.
Would be a death knell for ARM vendors.
Why? The three biggest sources of resistance for existing customers
switching from ARM to x86 are software legacy, ARM's business model
vs. Intel's, and development program inertia with current suppliers.
XPoint changes none of these.
Google has secretly released a new version of Glass
Does it automatically dial 911 for you if you are beaten senseless
by an angry mob for wearing them in inappropriate locations?
NVIDIA announces a voluntary recall of its Shield 8-inch tablets
http://finance.yahoo.com/news/inplay-briefing-com-055139997.html#nvda
voluntary recall of its SHIELD 8-inch tablets that were sold between July 2014 and July 2015, which the company will replace. Co has determined that the battery in these tablets can overheat, posing a fire hazard
Surely they must be mistaken. Everyone knows ARM processors sip the
merest perceptible amount of power like an anorexic hummingbird.
Looks like Intel already has some ideas how to use this in servers
http://www.overclock3d.net/articles/cpu_mainboard/massive_leak_shows_details_on_skylake_xeon_chips/1
Skylake "Purley: Biggest platform advancement since Nehalem"
"All new memory architecture" "up to 4x capacity and lower cost than
DRAM" "500x faster than NAND" "persistent data"
The key seems to be something called "Apache Pass," which appears
to be a substitute for, or co-element with, DDR4 DIMMs. Perhaps an
XPoint-based cache or scratchpad, or simply a DDR4 replacement. The
latter would require memory management hooks in the OS to spread
write intensity around to even out wear.
"So there will be two XPoint memory technologies. We'd imagine one would be cost-focussed and the other have a performance focus."
No surprise there at all.
Consider the basic 6T SRAM cell from the CMOS stone age.
Intel uses different variants of this cell in each process and these
can vary by more than 2:1 in area. There are trade-offs between cell
access time, array yield, noise margins, minimum operating voltage
and so on between cell designs.
Even with the SAME basic SRAM cell you can construct memory
blocks with very different characteristics in terms of latency, bandwidth,
global bit density, yield, power efficiency and so on by varying the
sub-array size, the presence and degree of array redundancy, peripheral
circuitry design, address path design, data path design, etc.
Flash replacement for mobile - why? They need cheap more than anything else.
Apparently this stuff has very fast access time compared to
flash - in the ns range or close to SRAM.
The big advantage over flash for mobile and embedded control
is the ability for firmware to execute in place. That is, you could
run code right out of XPoint. Normally code is copied from flash
into DRAM to execute it. This would free up DRAM for data and
eliminate the copy time from program start-up latency.
3D XPoint will serve as both main memory and long-term storage.
No. The number of write cycles is limited (1e7 IIRC).
DRAM replacement requires essentially unlimited writes.
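A back-of-envelope calculation shows why. Assuming the 1e7 cycle figure, a 16 GB module, perfect wear leveling, and a sustained DRAM-like write rate of 10 GB/s (all illustrative assumptions, and real write traffic concentrates on hot lines, which makes things far worse without leveling):

```python
# Rough lifetime of a 16 GB XPoint module used as main memory under
# sustained 10 GB/s writes with ideal wear leveling. Illustrative only.

capacity_bytes = 16 * 2**30
endurance_cycles = 1e7
write_rate_bps = 10 * 2**30          # 10 GB/s sustained writes

total_writable_bytes = capacity_bytes * endurance_cycles
lifetime_seconds = total_writable_bytes / write_rate_bps
lifetime_days = lifetime_seconds / 86400
print(round(lifetime_days))          # ~185 days, even in the ideal case
```

Half a year of life under continuous write load, in the best case, is why it can back storage and mostly-read data but not replace DRAM outright.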
Moody's downgrades the crap out of AMD and it finishes up over 9%.
WTF! Folks think it is now in play?
https://www.moodys.com/research/Moodys-lowers-ratings-of-Advanced-Micro-Devices-CFR-to-Caa1--PR_330766?WT.mc_id=AM~WWFob29fRmluYW5jZV9TQl9SYXRpbmcgTmV3c19BbGxfRW5n~20150728_PR_330766
I still don't see the application, sorry guys.
Super fast disk replacement/disk caching for PC and server type applications.
Flash replacement for mobile/handheld applications.
10,000,000 write cycles
Uh-oh. Not suitable for main memory or cache.
Ideally this stuff looks like somewhat slow, non-volatile SRAM.
The four big issues are read latency, symmetry between read and
write operations, whether reads are destructive, and the area/pitch of
the cell. If the cell is tiny, slow, and requires analog-ish peripheral
circuitry and/or re-write after read, then the memory architecture will
likely need to resemble DRAM rather than SRAM.
Wow. I like when someone puts their money where their mouth is. There's
a shortage of that around here these days. Good luck.
Interesting. I wonder what the non-volatile physical information storage
technique is - trapped charge, phase change, or something altogether
new.
Edit: looks like phase change.
"By contrast, 3D XPoint works by changing the properties of the material that makes up its memory cells to either having a high resistance to electricity to represent a one or a low resistance to represent a zero."
http://www.bbc.com/news/technology-33675734
I'm sure that Intel did some nice proprietary tricks with Crystal Well based on their unique process and requirements, but I'm not sure they could call that "disruptive".
I would look beyond the simple chip to chip interface.
If I had designed Crystalwell it would be paged at a finer-grained
level than commodity DRAM to support both lower random (row access)
latency and many more concurrent memory operations in flight per
device.
We already have HBM and HMC doing more or less what chipguy described
Hmmm. Could either HBM or HMC replace the proprietary interface Intel
uses between its Crystalwell custom L4 DRAM and MPUs supporting Iris Pro?
Windows 8 is significantly faster than W7 and WXP so W10 might just be inheriting that speed.
An operating system is scaffolding to support execution of applications.
I am concerned about the performance of applications (and even that is
rare any more except for games and code that I write).
What exactly do you do that causes you to wait on the OS rather than
applications? In my case the only thing that bothered me about Windows
XP was boot time (power on to launching an application). Win7 on an i7
box with SSD is crazy fast to boot so I don't have a problem with "OS
speed".
I have a friend speculating that the SSDs would share the same bus interface as the DRAM, and dramatically increase the system performance.
I don't think SSDs can support the bandwidth or latency needed to make
good use of a DRAM style interface. At least flash based SSDs can't.
Is it an announcement of a disruptive memory technology?
Probably DRAM based but with a new high bandwidth, low latency
interface designed for integrating DRAM into MCM based x86 SoC
products.
Auto-vectorize SPECint2006? It will be interesting to check Torvalds's
favorite, the SPECint gcc result, when Intel submits some scores.
Also I realised that the TDP drop may be in part due to the VR moving back to the motherboard.
Nice catch. I hadn't considered that.
You can always take your desktop processor shopping to AMD or ARM if you aren't happy at chez Intel. :-P
Desktop Skylake has 11% better single thread SPECint performance than
Broadwell while reducing TDP from 84 to 65 W in the same 14 nm process.
Color me impressed.
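Using the figures above, the perf-per-TDP-watt improvement works out to roughly 1.43x (treating TDP as a loose proxy for actual power draw, which it only approximates):

```python
# Back-of-envelope perf/W gain for desktop Skylake from the figures
# quoted above: +11% single-thread SPECint, TDP down from 84 W to 65 W.

perf_gain = 1.11
power_ratio = 84 / 65
perf_per_watt_gain = perf_gain * power_ratio
print(f"{perf_per_watt_gain:.2f}x")   # 1.43x performance per TDP watt
```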
Apple phones typically have much better resale value compared to other brands.
Basic phone, basic plan. Total cost minimized, all needs met.
Resale value? If I was looking for *investment* opportunities
it would not be in the area of consumer electronics stock.
The smartphone is the dominant computing platform.
I guess we will have to violently disagree on the definition
of computing.
I communicate with my smart phone. I compute with PCs (and
servers too when at work).
The ASP of iPhones was $660 in their most recent quarter.
That is an astoundingly high figure and was up from $560
in the previous results.
OTOH, I paid far, far, far less than that for my Moto g a
few months back, a unit that does everything I (and most
of the rest of the globe) need and more quite nicely.
Must be nice to have such a wealthy flock so eager to be
closely sheared on a regular basis as Apple does. Luxury
good marketing defies rational explanation.
I wonder how ORCL Sparc sales are doing. Do they break that out?
Oracle hardware sales are declining but on the face of it not as fast as
Power or Itanium. However Oracle hardware sales are a mix of SPARC
and x86 and they also do a lot of bundling expensive software licenses
into so-called engineered systems to keep system pricing high. Cut out
all the obfuscation and window dressing and SPARC hardware sales
are probably dropping as fast if not faster than Power or Itanium.
Sales generated from the communication sector accounted for as high as 62% of TSMC's total wafer revenues in the second quarter of 2015.
I bet they wish they had a slice of the server market. I mean a real
slice, not a microscopic breadcrumb in the form of spinning SPARC
wafers for Oracle.
LOL, someone has got to help pay for the next Greek bailout.
they are not doing anything with discrete graphics. Big GPUs really take a lot of Si. I don't know of any reason they should not be positioning for that market
Intel already owns 72.2% of the graphics market, up from 71.4%
the previous quarter and 68.5% a year ago.
http://jonpeddie.com/press-releases/details/overall-gpu-shipments-dropped-13-in-q12015-from-last-quarter
Discrete graphics market share has been shrinking for years, and the
growing capabilities of integrated graphics plus the trend toward smaller,
lower-power PCs will only accelerate the decline.
I doubt that a consumer box will have more than 4 cores/8 threads soon.
Probably not. For the high end desktop/extreme enthusiast market Intel
will likely continue to re-badge 2s HPC Xeon parts with 6 or 8 cores.
Lightroom had all 8 Ivy Bridge cores heavily utilized with overall CPU utilization ranging from 50-75%.
We could definitely use a faster processor
The key thing to notice here is that your software is multithreaded and
using all four cores and both SMT threads per core. You are getting
stuff done much faster than if you had three idle cores and one core
running single thread software turbo-ing at 5 GHz, consuming the entire
chip's power budget (and burning a hole in a small corner of the die).
The future is more cores, not more GHz.
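As a crude illustration of that point, count utilized thread-GHz as a rough unit of work. The clock figures below are assumptions for illustration (not the poster's actual hardware), and this deliberately ignores SMT scaling losses and per-thread efficiency differences:

```python
# Crude throughput comparison: 4 cores x 2 SMT threads at ~60% overall
# utilization vs. a single thread turbo-ing alone. Figures are assumed.

cores, smt = 4, 2
utilization = 0.6                 # midpoint of the 50-75% range quoted
base_clock, turbo_clock = 3.5, 5.0  # GHz, illustrative

parallel_throughput = cores * smt * utilization * base_clock
single_thread = 1 * turbo_clock
print(f"{parallel_throughput / single_thread:.2f}x")  # ~3.36x the work rate
```

Even with generous assumptions for the single turbo-ing core, the loaded multithreaded machine gets several times more done per second.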
Why the hell does Skylake clock below Haswell? Haswell was known to be bad at overclocking, Skylake is one node ahead, a full redesign, with latest and greatest Finfet tech and they can't clock it as high as Haswell? Wtf?
Look at it this way. Where is competition strongest? Mobile and dense
server. Where is market growth strongest? Mobile and dense server.
Power efficiency is key here. Where is competition weakest? Desktop
PC, desktop replacement laptop, and workstation. Where is market
growth weakest? Desktop PC, desktop replacement laptop, and work-
station. Ever higher clock rates are desirable here.
When you design a new x86 core in a new process you have to decide
on a power budget and an upper frequency target. The higher the Fmax you
choose, the more robust the clock system, the somewhat deeper the execution
pipeline, and the higher the signal drive you need. That means more power/GHz.
If your competitive emphasis is on 5W mobile dual cores and 150W 24
core server chips you will choose a lower CPU Fmax than if your focus
is on 100W quad core desktop chips. This trade-off of lower Fmax for
better power efficiency actually gives you *higher* frequency for your
5W dual core and 150W 24 core at the cost of stagnation or even back
sliding for 100W quad core. This sucks for desktop gamers but it is the
only rational choice Intel can make.
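The power side of that trade-off falls out of the classic dynamic power relation P ≈ C·V²·f, plus the fact that near the top of the frequency range the supply voltage has to rise roughly with frequency. A toy calculation (the linear V-f proportionality is a simplification of the real voltage-frequency curve):

```python
# Why a lower Fmax target buys power efficiency: dynamic power goes as
# C * V^2 * f, and near Fmax, V must scale up roughly with f, so power
# grows roughly as f^3 in that region. Illustrative model only.

def relative_power(freq_ratio):
    voltage_ratio = freq_ratio            # crude V proportional to f assumption
    return voltage_ratio**2 * freq_ratio  # P ~ V^2 * f

# A core designed for a 20% lower Fmax:
print(f"{relative_power(0.8):.2f}")       # 0.51 -> roughly half the power
```

So giving up the last 20% of frequency headroom can roughly halve power at the design point, which is exactly the budget a 5 W dual core or a dense server part needs.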
And no, the market for high end desktop is not big enough to justify
a distinct, second x86 core implemented for uber clock frequency and
single thread performance. This market will still buy Intel regardless.
Hurd is a frat boy buddy of Larry. Oracle has always been very aggressive
with customers. It is a reflection of Larry's personality. Imagine what AMD
would have been like under JS if they had Intel's dominance back then.