> Servers aren't bought to run SPECfp_rate, and they're not even bought to run workloads that resemble SPECfp_rate.
You are not helping yourself with nonsensical absolute claims like this.
My current employer just bought half a dozen single-socket NGMA-based systems for circuit simulation. When we run corner simulations, we often spin off 32 or more simultaneous Eldo or Hspice runs across the server farm. In my experience over the last decade, circuit simulation performance has tracked SPECfp with a high degree of correlation across various x86 and RISC platforms. Corner sim runs are exactly analogous to SPECfp_rate: multiple copies of FP-intensive programs running in parallel and competing for memory bandwidth.
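To make the analogy concrete, here is a minimal sketch of the fan-out pattern: N identical FP-heavy jobs launched in parallel, each on its own core, all competing for memory bandwidth, just like a rate benchmark. The kernel below is a hypothetical stand-in for a single corner sim (a real farm would invoke the actual simulator binaries per corner), not anyone's actual flow.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def fp_kernel(seed: int) -> float:
    """Stand-in for one FP-intensive simulator run (e.g. one corner sim)."""
    x = float(seed)
    for _ in range(200_000):
        # Arbitrary transcendental FP work; always finite.
        x = math.sin(x) + math.sqrt(abs(x) + 1.0)
    return x

def run_corners(n_jobs: int) -> list[float]:
    """Launch n_jobs independent copies in parallel, SPECfp_rate-style:
    identical FP workloads running simultaneously across the machine."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(fp_kernel, range(n_jobs)))

if __name__ == "__main__":
    results = run_corners(8)
    print(f"completed {len(results)} corner runs")
```

The point of the pattern is throughput, not single-job latency: wall-clock time for all N copies to finish is what the farm (and SPECfp_rate) measures.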
So I have a first-hand example that blows your claim right out of the water. Please note that I am not saying my example is more than a niche compared to more mainstream server uses such as IT infrastructure and web applications. But then again, I am not the one making absolutist claims.
BTW, our current simulation servers are mostly Opteron-based.