Re: Let's see, the idle power difference is ~ 73W / system.
The peak power difference is ~ 96W / system.
This is obviously without DBS capability. With DBS (Demand-Based Switching), I can't imagine the idle power difference being more than ~10W.
Re: assuming peak for 4 hours and idle for 20 hours
This is another bogus assumption, at least as far as clients go. You might have a point when it comes to servers (actual measurements put servers at peak for percentages in the mid-30s of the 24-hour day), but for clients the figure will be far lower. For one thing, employees are at home 8-16 hours every day (depending on how hard they're worked, anywhere from an 8-hour work day to a 16-hour work day with no meal breaks; I seriously doubt you could argue more than that). While they are working, 90% of the work is office stuff with at most 10% CPU usage. I'd be surprised if daily peak usage exceeds 1% for most clients. Of course, end-users absolutely want the performance when they need it - they just very rarely need it, when you think of things in terms of seconds of processing power per day.
That's why I think the idle case is going to be the most common one for clients. When scaled back, or sitting in the Halt/Stop Grant power states, the additional power of 20 Prescott-based clients will be a very small fraction of your estimate.
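To put rough numbers on it, here's a back-of-envelope sketch in Python. The ~96W peak and ~73W idle deltas are the figures from this thread; the "1% of the day at peak" client duty cycle and the ~10W idle-with-DBS delta are my estimates from above, not measurements:

```python
# Per-system daily energy delta (Wh/day) between the two CPUs,
# under a given peak/idle duty cycle.
def daily_delta_wh(peak_delta_w, idle_delta_w, peak_hours):
    """Extra energy per day given peak/idle power deltas and hours at peak."""
    idle_hours = 24 - peak_hours
    return peak_delta_w * peak_hours + idle_delta_w * idle_hours

# Your assumption: 4h peak / 20h idle, no DBS (~96W peak, ~73W idle deltas).
server_style = daily_delta_wh(96, 73, 4)      # 96*4 + 73*20 = 1844 Wh/day

# My client estimate: ~1% of the day at peak (~0.24h), idle otherwise,
# with DBS cutting the idle delta to ~10W.
client_style = daily_delta_wh(96, 10, 0.24)   # ~261 Wh/day

print(f"4h-peak, no-DBS assumption: {server_style:.0f} Wh/day")
print(f"1%-peak client with DBS:    {client_style:.0f} Wh/day")
```

So even granting the full peak delta during the brief bursts, the realistic client case comes out at roughly a seventh of the original estimate, and that's before counting the hours the machine is off entirely.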