Typical AMD, bringing a pointy stick to a gunfight.
I would have said AMD brought a knife if it were an x86 chip, but their ARM server chip has basically no software behind it. Perhaps a wet noodle would have been even more accurate.
AMD gives Seattle a SPECIntRate of 80, which does not compare well against Avoton's 106, especially when it also comes at higher power, 25 W versus 20 W. The A57 had better have some killer apps it's good at, otherwise AMD will struggle to get 0.25% of the market with this, never mind 25%. ;)
The only thing that will give ARM servers a meaningful share is if a big datacenter (Facebook, Google, Microsoft, Amazon...) decides to invest significant NRE in order to port the meaningful apps over, and extract enough performance that they aren't going significantly backwards vs. Intel.
But note that this isn't a foregone conclusion. Microsoft created Windows RT in what was probably a hugely negative ROI equation, even before the sales figures proved it to be so. So I can see them doing it again to enable AMD across thousands of their own internal servers.
Of course, those willing to lose money to throw ARM a bone are likely to be few and far between. It will take many years and billions of dollars to overcome the incumbency advantage that Intel has.
> AMD gives Seattle a SPECIntRate of 80, which does not compare well against Avoton's 106, especially when it also comes at higher power, 25 W versus 20 W. The A57 had better have some killer apps it's good at, otherwise AMD will struggle to get 0.25% of the market with this, never mind 25%.
Power aside (which definition of TDP has been used?), there is a clear advantage for Intel CPUs in SPEC benchmark suites: ICC.
If the ARM result is based on GCC-compiled code, it lacks the kind of compiler advantage ICC gives Intel; GCC code is usually slower than ICC code. XScale is long gone, so I don't think there is an updated ICC targeting ARM. ;)
Using GCC with the same flags on both platforms might change the picture a bit. And which compilers are actually being used to build real server applications?
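To make that concrete, here is a minimal sketch of the kind of like-for-like build I mean: the same trivial integer kernel compiled with GCC and identical flags on both architectures, with an ICC build only possible on the x86 side. The file name, flags, and loop are illustrative assumptions on my part, not taken from any actual SPEC run.

/* bench.c - trivial integer kernel, purely illustrative (not a SPEC workload).
 * Hypothetical like-for-like builds, same compiler and flags on both sides:
 *   x86-64 (Avoton):   gcc -O3 bench.c -o bench_x86
 *   AArch64 (Seattle): aarch64-linux-gnu-gcc -O3 bench.c -o bench_a57
 * An ICC build (icc -O3 -xHost bench.c) exists only for the x86 side,
 * which is exactly the asymmetry being discussed above.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t acc = 0;
    /* Dependent add/xor loop so the compiler cannot optimise the work away. */
    for (uint64_t i = 0; i < 100000000ULL; i++)
        acc += i ^ (acc >> 3);
    printf("%llu\n", (unsigned long long)acc);
    return 0;
}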