On the minus side, the FP ability of this chip is a joke, almost a glass jaw; I can't believe Nvidia is/was seriously thinking of building HPC chips out of it!
The Nexus 9 has the Nvidia Tegra K1 (64-bit).
Anyway, we'll see what's in the Nexus 9 next week.
But then I don't quite understand why one would emulate any particular SoC at all instead of just using a natively compiled "emulator".
Oh, that's clearly the Nvidia K1. It makes a lot of sense, since it's going to be the most powerful SoC at that time, especially regarding the GPU. I think Google already hinted at that in an earlier announcement.
My point was actually that Android L seems to be fully ported to Intel x86 SoCs already and therefore is going to be available for Intel based tablets and smartphones right at release. That may give Intel customers some lead.
If the first Android L device is going to be built with 64 bit Nvidia K1, I would have expected the emulator to be based on ARM initially.
(Sorry, what follows is quite technical.)
I was mainly arguing against this:
a lot of swapping still needs to take place between the L1 cache and the registers
With extra registers internally, there is no need for L1 interaction, as the extra registers define another level of the memory hierarchy between the named registers and L1. Just as an update to a memory location can stay in L1 without being written back to memory, an update to a renamed register doesn't have to be propagated to L1 (or to memory, barring coherence issues, of course).
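To illustrate the point that writes to renamed registers never touch L1, here is a toy rename table (purely illustrative Python, not any real microarchitecture; all names and sizes are made up):

```python
# Toy register renaming: each write to an architectural register allocates a
# fresh physical register. Values live only in the physical register file;
# nothing is ever written to the cache/memory hierarchy.

class RenameTable:
    def __init__(self, num_physical=8):
        self.free = list(range(num_physical))  # free physical registers
        self.map = {}                          # architectural -> physical mapping
        self.values = {}                       # physical register file

    def write(self, arch_reg, value):
        phys = self.free.pop(0)                # allocate a new physical register
        self.map[arch_reg] = phys              # later reads see the new mapping
        self.values[phys] = value              # value stays in the register file

    def read(self, arch_reg):
        return self.values[self.map[arch_reg]]

rt = RenameTable()
rt.write("r1", 10)   # r1 -> p0
rt.write("r1", 20)   # r1 -> p1; p0 can be reclaimed once the old value retires
```

Note that the second write doesn't overwrite p0; it just redirects the mapping, which is exactly why no propagation to L1 is needed.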
Besides, the Android L emulator currently only supports x86. That is certainly going to change by the time of the release.
iPhone6 review
http://www.anandtech.com/print/8554/the-iphone-6-review
There's a ton of additional data here to reevaluate the density claims on Apple's new A8.
...
for about 10-20% better performance
Paypal used an ARM-based HP server for HPC:
http://insidehpc.com/2014/09/hpc-paypal-leveraging-dsps-fraud-detection/
The strong point is the computational efficiency of the TI DSP, but it also shows that for that kind of workload a "small" CPU is enough. Of course, that remains a niche.
They went from 1 billion transistors in A7 to 2 billion with 0.87x die size.
Or a metric s**t ton of on-chip SRAM. The A7 had a 3 MB block just for giggles, and the A8 could add a lot of dense transistors by bumping up the size of this SRAM.
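For what it's worth, a back-of-the-envelope check of the density jump implied by those numbers (a sketch; the transistor counts and 0.87x area ratio are the figures quoted above):

```python
# Implied transistor density gain: A7 -> A8 (figures quoted in this thread).
a7_transistors = 1e9
a8_transistors = 2e9
area_ratio = 0.87           # A8 die area relative to A7

density_gain = (a8_transistors / area_ratio) / a7_transistors
print(f"~{density_gain:.2f}x transistor density")   # ~2.30x
```

A ~2.3x density jump is more than a straight node shrink usually delivers, which is consistent with a lot of that transistor budget going into dense SRAM.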
1. They claim a 25% performance gain over the A7, even though the core frequency only goes from 1.3 GHz to 1.4 GHz, roughly an 8% increase. The explanation was that an enhanced core design makes up the rest, but we have yet to see any official benchmarks. What we have seen are a few leaked measurements suggesting it doesn't get anywhere close to 25%. So that's one claim that may have been exaggerated.
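Working out what the 25% claim implies for the per-clock improvement (simple arithmetic on the figures above):

```python
# How much of Apple's claimed 25% gain must come from the "enhanced core"?
claimed_gain = 1.25            # claimed A8-over-A7 performance gain
freq_gain = 1.4 / 1.3          # clock increase, ~7.7%

implied_ipc_gain = claimed_gain / freq_gain   # remainder attributed to the core design
print(f"implied per-clock improvement: ~{(implied_ipc_gain - 1) * 100:.0f}%")  # ~16%
```

So the enhanced core would have to deliver roughly a 16% per-clock improvement on its own, which is a big jump for one generation and is exactly why the leaked measurements raise eyebrows.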
Which is exactly what makes it a great system-level benchmark, since it incorporates both hardware and software towards delivering performance.
The only benchmarks I have a problem with are those that happen to expose performance glass jaws that aren't visible in most software. Claiming that IE11 on a 5Y70 is faster than Chrome on an i7-4790 is not misleading if the machine actually computes JavaScript workloads faster.
To me, Geekbench is a broken benchmark until they take the denormal computation out of the FP workload. And yes, I understand that while GB 3.0 improves upon 2.0, a large portion of the code still uses denormals in the latest release.
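For reference, a "denormal" (subnormal) is a nonzero float below the smallest normal value; many CPUs handle these via slow microcode assists rather than the fast FP path, which is why their presence in a benchmark's FP workload skews results. A minimal sketch (assuming IEEE-754 doubles, as on CPython):

```python
import sys

# Smallest normal double and a subnormal (denormal) value below it.
normal_min = sys.float_info.min   # ~2.225e-308, smallest *normal* double
denormal = normal_min / 4         # nonzero, but below the normal range

assert denormal > 0.0             # not flushed to zero by default in Python
assert denormal < normal_min      # i.e. it is subnormal
```

Hardware often offers flush-to-zero / denormals-are-zero modes to avoid the slow path, but a benchmark whose FP kernels spend real time in subnormal ranges is measuring the assist penalty, not representative FP throughput.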
Sunspider:
Broadwell 5Y70: 111.9 ms
Apple iPad Air: 389.9 ms
Apple A8 (projected, assuming a 1.25x gain over A7): 311.9 ms
According to AnandTech, it's 20nm, not 28nm.
AMD is the "Milli Vanilli" of microprocessors; always has been, always will be, as thoughtfully put by Andy Grove.
Sooo, if 64-bit Android performs better because of the replacement of Dalvik by ART, then 64-bit-enabled Silvermont will start to make the A9, A12, A15, A17, etc. look anemic, leaving the licensees with only the power-hungry A57 and the relatively wimpy A53 to compete.
Chipguy,
I would be very surprised if ARM distributed plain-text VHDL for its CPU cores to licensees. It is very likely encrypted.
Asus did not seem to have any trouble delivering a 2.33 GHz Moorefield phone.
The Moorefield Z35xx series is a quad-core, 22nm processor that uses Intel's own LTE tech. It can hit 2.3GHz, but we aren't sure what the frequency is in the Transformer Book V Phone.
The board is flanked by two batteries that give the device 32 hours of battery life.
But Intel could not estimate the battery life of the device.
Yeah, but if I recall correctly, 2.3 GHz is mentioned. The Snapdragon 800 is usually clocked at 2.4 GHz; that could be a hint towards Moorefield, but not necessarily.
Anyway, I doubt this is important for Intel. I don't give Tizen much of a chance against Android; what's the benefit of yet another mobile OS?
Samsung's Tizen Smartphone Powered by Intel Processor
I've compared Google Octane, Sunspider, WebXPRT, and Kraken results per clock of Haswell against Apple's A7 cores.
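As a sketch of how such a per-clock normalization works: SunSpider reports time, so performance scales as 1/time, and dividing by clock speed gives a rough per-GHz figure. The times below are the SunSpider numbers quoted earlier in this thread; the clock speeds are my assumptions for illustration, not confirmed values:

```python
# Per-clock comparison sketch. Times are the SunSpider figures from this
# thread; clock speeds are assumptions for illustration only.
results = {
    "Broadwell 5Y70": (111.9, 2.6),        # (SunSpider ms, assumed GHz)
    "Apple A7 (iPad Air)": (389.9, 1.4),   # clocks are NOT confirmed values
}

# SunSpider is a time: lower is better, so performance ~ 1/time.
perf_per_ghz = {name: 1.0 / (ms * ghz) for name, (ms, ghz) in results.items()}

for name, score in perf_per_ghz.items():
    print(f"{name}: {score:.6f} per GHz (higher is better)")
```

The obvious caveat is that browser benchmarks fold the JavaScript engine into the result, so this normalizes the whole hardware-plus-software stack, not the core alone.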
We demonstrated SoFIA, our first integrated apps processor and baseband, after adding it to the roadmap late last year. We're on track to ship the 3G solution to OEMs in Q4 2014, with the LTE version following in the first half of 2015.
[talking about SoFIA] We then told you that in the second half of next year -- and again, we're debating whether it's the second half or the first quarter of 2016, but we'll move all of that internal on to 14-nanometers.
SPECint is a different animal, testing large tasks with typical workloads.
Silvermont does much better than the ARM competition in that (according to Intel, no other sources available).