I was not talking about *instantaneous* power draw. I was talking about power draw over a period of time
Sorry, but you said "instantaneous power" so I thought you meant instantaneous power. I should
have first consulted my spokeshave prevarication magic decoder ring to establish that you
were really talking about energy before responding to your post.
At any rate, this is not what decoupling capacitors are for. It may well be
an ancillary function to fill in the power gaps created by transients, but the
primary function of decaps is to isolate the device from the power bus noise.
Wrong. The purpose of decoupling caps is to shield the device being powered from
the effect of the power supply inductance, which would otherwise cause wild transients
in local voltage in response to large di/dt events in the device itself. The noise that
would otherwise cause failure is locally generated, not brought in from the power bus.
Here's a reference that may be suitable for your apparent level of understanding of high
speed microelectronics. It even has a description of typical uP current transients:
http://www.paktron.com/techarticles/hi_speed_circuit/high_speed.html
"The inductance of a system (caused by cables, power planes, etc.) slows a power supply's
ability to respond to these rapidly changing current requirements. Both bulk and high frequency
bypass capacitors are required because of the relatively slow speed at which a power supply
(or a DC-to-DC converter) can react. For example, a microprocessor's current transients are on
the order of 1-20 ns while a typical voltage converter has a reaction time of 1-100 µs. Properly
selected bulk capacitors will slow the transient requirement seen by the power source to a rate
that the power source can supply by furnishing energy to the system until the power source can
react to the demand."
Hmmm, 1 to 20 ns. No mention of shielding the uP from power bus noise either. Whatever
could this mean? LOL.
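To put some numbers on the inductance argument, here's a back-of-the-envelope sketch. All values are illustrative assumptions, not measured figures for any real board:

```python
# Voltage transient across supply-path inductance: V = L * di/dt.
# All values below are illustrative assumptions.
L_supply = 1e-9     # 1 nH of power-path inductance (cables, planes, socket)
di = 20.0           # a 20 A load step inside the device
dt = 10e-9          # occurring over 10 ns

v_transient = L_supply * di / dt
print(f"Transient across the inductance: {v_transient:.2f} V")
# -> 2.00 V, far outside any CPU core voltage tolerance. The decaps must
#    source the charge locally until the upstream supply can respond.
```

Even a single nanohenry in the power path produces a multi-volt excursion at uP-class di/dt rates, which is exactly why the decoupling has to sit next to the device.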
Quite a few critics of Intel's TDP numbers like to ignore them and multiply Icc_max and Vcc_max
You have found a second flaw in this approach (i.e. the load line) and I thank you for providing
me even more ammunition to debunk this myth.
So, then, it is your contention that the loadline is flawed and is a myth?
From this comment I can only presume you are deliberately being obtuse.
The Arecibo dish in Puerto Rico is physically capable of communicating with identical or better
technology anywhere in our galaxy. Knowing where to aim and when to listen and on what band
is another matter altogether but it physically can be done.
Yawn, who'd ever not expect *inductance* to play a role in a power regulation
and distribution system handling huge di/dt rates.
IIRC Intel maxed out at about a $500 billion market cap. How in the heck great a company Intel must be,
that it can grow from there.
According to this:
http://www.siliconstrategies.com/story/OEG20030311S0010
Intel's $23.7B in sales in 2002 represented only 15.2% of the world wide semiconductor
market. Even with the semi market temporarily stalled there is plenty of room to grow by
taking it out of other semicos' hides.
The obvious place to start is non-x86 processors at the high and low end. Intel is grinding
away at those with Xscale and IPF. There is also telecom and wireless, and Intel is aiming
there too. It won't be easy, but he-who-has-fabs has a built-in advantage, and the inexorable
trend towards more expensive fabs means the number of JSIII's "real men" is shrinking
year after year. Perhaps one day the only three players with fabs will be Intel, IBM, and a
Eurasian foundry consortium. IBM, I'm not so sure about.
I would argue that if it is in the DC Specifications section, then
Icc_max is probably defined as a DC characteristic.
There is no AC section. Besides, the loadline is described as including transient limits.
Incidentally, 99 Watts for 10 clock cycles represents an instantaneous power draw of
about 3E-07W-s, or 0.003 milliWatt-seconds. There is no way that this would even be
considered in power supply design.
Watt-seconds, or more properly, Joules, are units of energy, not instantaneous power
draw. I would have thought a self-proclaimed PhD in physics would appreciate the
difference between power and energy.
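For the record, here's the arithmetic done with the units kept straight (assuming the ~3 GHz clock the quoted figure appears to use):

```python
# Power vs. energy for a 99 W, 10-cycle burst (assuming a ~3 GHz clock).
f_clk = 3.0e9            # Hz, assumed clock frequency
power = 99.0             # W, draw during the burst (a power)
cycles = 10

t_burst = cycles / f_clk             # burst duration in seconds
energy = power * t_burst             # joules (W*s), an energy
print(f"Burst length: {t_burst * 1e9:.2f} ns")       # ~3.33 ns
print(f"Energy delivered: {energy:.2e} J")           # ~3.30e-07 J
# The *power* during those 10 cycles is still 99 W; the tiny number is
# the energy, which is what the local decoupling caps must supply.
```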
The critical factors for an uP power supply are its impedance and response to transients,
i.e. not deviating outside the target voltage limits in response to a maximal di/dt event. A
VRM can't respond in 10 clock cycles, as you said; that's what decoupling capacitors are
for. And the amount and nature of the decoupling capacitance is always considered as
part of an active power supply design since it affects transient response and even loop
stability.
Despite your penchant for framing my argument for me, I never claimed that maximum
power *should* be considered in thermal design.
Good, that was the point I was trying to bring out. Your 99W figure does not represent the
maximum thermal load a P4 could present if it didn't throttle. Quite a few critics of Intel's TDP
numbers like to ignore them and multiply Icc_max and Vcc_max and present that as
P4 maximum power from a thermal burden perspective. You happened to be the unlucky
person I chose to clarify this point with. You have found a second flaw in this approach
(i.e. the load line) and I thank you for providing me even more ammunition to debunk this
myth.
I think that it is implicit that actual maximum power dissipation is transient in nature and
would be sufficiently absorbed by the thermal inertia of die, heat spreader and cooling
solution in most cases. However, I feel quite certain that transient power peaks driven by
the maximum load as defined in the loadline would significantly exceed 10 clock cycles.
I think it is hard to argue that the instantaneous maximum current draw can occur when the
P4 is not issuing the maximum 6 uops per cycle through the six dispatch ports. This can
only occur for a short time since the front end can only fetch 3 uops per cycle (actually 6
uops every other cycle, but that isn't relevant here) and the back end can only retire 3 uops
per cycle. So we have a simple queueing problem. The P4 has a uop buffer that can be filled
at a rate of 3 uops per cycle and can be drained at up to 6 uops per cycle. I don't know if
Intel has ever disclosed the uop buffer capacity in either Willamette or Northwood, but it
is likely less, possibly far less, than a hundred entries. So under the best possible
circumstances the P4 can issue at 6 uops per cycle (hence drawing Icc_max) for only
a few tens of cycles before the buffer drains and the core is limited by the front end to
3 uops per cycle and less than Icc_max draw.
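A minimal sketch of that queueing problem (the buffer capacity below is a pure assumption, since Intel hasn't disclosed the real Willamette/Northwood value):

```python
# Sketch of the uop buffer queueing argument. The buffer capacity is an
# assumption; Intel has not disclosed the real value.
BUFFER_CAP = 60   # hypothetical uop buffer entries
FILL_RATE  = 3    # uops/cycle the front end can deliver
DRAIN_RATE = 6    # uops/cycle the back end can issue at peak (Icc_max)

occupancy = BUFFER_CAP   # start with a full buffer (best case)
cycles_at_max = 0
while occupancy >= DRAIN_RATE:
    occupancy += FILL_RATE - DRAIN_RATE   # net -3 uops per cycle
    cycles_at_max += 1

print(f"Full-rate issue sustainable for {cycles_at_max} cycles")
# With a net drain of 3 uops/cycle, a full buffer of N entries sustains
# 6-wide issue for only ~N/3 cycles -- a few tens of cycles at most.
```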
For a bunch of guys who think they are so very
smart, you seem to be missing this fairly obvious point.
This occurs at a voltage well below Vcc_max, but at Icc_max.
The value of Vcc is immaterial to my argument.
You are still missing the essential point that average current draw over
any time period comparable to the thermal response time constant will be
less than Icc_max. The part may draw Icc_max for 1 or even 10 clock cycles
(and the power supply has to be able to handle the instantaneous draw) but from
the standpoint of thermal design maximum power, in the absence
of throttling the maximum power will be less than 99 W. Icc_max is not a DC
value.
No, I am saying that for structural design reasons Icc_max cannot be sustained
long enough for the maximum power relevant to P4 thermal design
(if throttling were disabled) to approach the product of Icc_max and
Vcc_max. Thermal time constants are in the milliseconds while the P4 can drain
a full uop buffer dry in nanoseconds.
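A quick illustration of that time-scale mismatch, with all wattages assumed purely for the sake of the arithmetic:

```python
# Why a nanosecond Icc_max burst is invisible to a millisecond thermal
# time constant. All numbers are illustrative assumptions.
tau_thermal = 1e-3    # s, order-of-magnitude thermal time constant
t_burst     = 20e-9   # s, duration the core can sustain Icc_max
p_burst     = 99.0    # W, power during the burst (Icc_max * Vcc)
p_baseline  = 60.0    # W, assumed sustained (front-end-limited) power

# Average over one thermal time constant containing a single burst:
p_avg = (p_burst * t_burst
         + p_baseline * (tau_thermal - t_burst)) / tau_thermal
print(f"Average power seen thermally: {p_avg:.5f} W")
# -> ~60.00078 W: the burst adds well under a milliwatt to the average,
#    so the thermal design maximum is set by sustainable power, not Icc_max.
```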
I guess I can't expect a non-EE to understand the concept of instantaneous power and
how that relates (and doesn't relate) to other design issues. My bad.
Increasing Vcc will not have any effect on yields. It will however enhance binsplits.
Yield doesn't always mean functional yield.
Binsplits are also called the AC yield across speed grades.
So, for a Vcc_max of 1.417V at an Icc_max of 70A, the
maximum power is 99.19W. Now, I recognize that this is
not likely to be seen except under the worst of circumstances,
but if we are to speak of maximum power, this is the figure that should
appropriately be used.
That is the maximum instantaneous power consumption but there is nothing
that says that this can be sustained over a period comparable to the thermal
time constant of the die or package.
The current consumed by microprocessors varies from clock cycle to clock
cycle depending on what is going on inside at that instant. Icc_max likely
occurs when a P4 issues 6 uops in a single cycle while performing a
write transaction on the FSB and internal L1 to L2 transfer or vice versa.
I think you will agree that a P4 cannot sustain the issue of 6 uops per
cycle for very long. Even disregarding misses, the trace cache can only
fetch 3 uops per cycle and the uop buffer between the trace cache and
scheduler will quickly empty.
For the purposes of thermal design it is likely that the maximum power
that can physically be dissipated by a P4 is a fraction of your 99 W figure.
chipguy, you probably don't read articles that are not approved by your Intel PR manager.
Keeping up your batting average I see. I don't work for Intel. Never have and probably
never will.
I said several times there is no way that I can put an 80 W chip into my computer - from my
personal point of view that is garbage.
You mean to say you wouldn't choose Intel for religious reasons. The 80 W thing is nothing
but self-rationalization you put forth so as not to look dogmatic. Here's why:
The Tbred/2.25 dissipates 76 W max. But AMD specs Athlon Pmax at nominal Vdd over a
limited set of tested applications. In other words, it is far from being a valid maximum power.
It could easily dissipate above your 80 W "garbage" threshold simply by going to Vdd max.
Did they overclock? Maybe we just need a more politically correct word for that?
Let me type this slowly so you can understand. Overclocking means operating a uP outside
of its vendor's specifications. The vendor can set the specifications any way it sees fit with the
understanding that it is trying to sell a manufacturable, durable, and robust product for a profit
and that its reputation with OEMs and buyers is on the line. By definition a vendor does not
sell overclocked chips.
From that point AMD's dirty tricks with QS ratings will be forgotten as irrelevant.
Yeah, nobody ever says anything bad about Cyrix and its PR scheme.
BTW, given the way AMD enthusiasts love to trot out the 820 chipset and P3/1.133 recall
I doubt there will be a shortage of Intel enthusiasts to remind us of the final pathetic
days of Athlon QuackHertz.
That's 25%
less power for 15% less frequency. The additional dissipation of the highest clocked
Pentium 4 tells me that Intel has raised the voltage to increase bin splits.
Keep in mind the P4/3.0C is running its FSB at a 50% higher signalling rate than the
P4/2.53. External I/O pins drive relatively large capacitive loads and consume a non-trivial
amount of power. That would account for much if not most of the P4/3.0C's additional
power consumption above a linear extrapolation of the P4/2.53.
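For anyone who wants to check the extrapolation argument, here's a sketch. Dynamic CMOS power scales roughly as C*V^2*f; the wattages below are hypothetical, not published Intel figures:

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# Wattages here are illustrative assumptions, not published Intel figures.
f_253, f_300 = 2.53e9, 3.00e9
p_253 = 60.0                        # W, assumed P4/2.53 power

# Frequency-only extrapolation at constant voltage and capacitance:
p_300_linear = p_253 * (f_300 / f_253)
print(f"Frequency-only extrapolation: {p_300_linear:.1f} W")   # ~71.1 W

# Any excess over that line has to come from a higher Vcc (power goes
# as V^2) and/or the 50% faster FSB, whose I/O drivers switch large
# external capacitive loads regardless of core frequency.
p_300_assumed = 80.0                # W, hypothetical observed figure
print(f"Excess to attribute to Vcc and FSB I/O: "
      f"{p_300_assumed - p_300_linear:.1f} W")                 # ~8.9 W
```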
My understanding is that besides the Northwood P4/3.2 there is a Northwood P4/3.4
speed grade fully qualified but it probably won't be released unless there is a last
minute snag with Prescott that pushes it back by a quarter. The Prescott would be
preferred because of its lower power consumption at 3.4 GHz and higher margin.
instead they chose the pretty slow, end-of-life, but widely available Xeon
ROFL. Do you want some cheese to go with that whine?
Lose most of the server market, apparently.
Sun Microsystems doesn't seem to share your opinion of Intel processors. Sun
passed over Opteron yet again, what's up with that? Oh, I forgot - all the other
major OEMs using Hammer have bought up all of AMD's future production and
didn't leave any for Sun, LOL.
My opinion: Any site that puts their material on the web with no restrictions on who can view
it is open to copying. You can't put it out there for everyone to see, unrestricted, and then claim
there are restrictions.
So you think that all owners of content that is broadcast via unencrypted RF, i.e. standard
TV and radio, lose their copyright? McDonald's puts a yellow stylized "M" outside all of
their restaurants for "everyone to see, unrestricted"; does that mean anyone can use their
symbol?
I don't see any difference between your moral standard (or lack thereof) and claiming that
anyone who finds a house or apartment with an unlocked door is free to help themselves to
the contents.
As you can see the 3 GHz Pentium 4 is faster than the Athlon XP 3200+ in 7 out of
10 games.
Ouch! The Cyrix-ization of QuackHertz has entered its final phase.
Good thing gaming doesn't drive high end PC sales.
Sure, by leveraging most of the work that Intel already put into the DB2 port. Once Intel
helped IBM to rewrite the code to be 64-bit friendly, of course a port to another 64-bit
architecture was easy. Let's see how many applications AMD is able to port without Intel's help.
I agree with the idea that DB2 was probably 64 bit clean prior to the AMD64 port. But I wouldn't
necessarily credit Intel with that. IBM has its own 64 bit RISC family and AFAIK DB2 has been
running on that under AIX for a while.
Nonetheless, it is illegal to copy someone else's work, in whole or in part, without
their permission. It is no less illegal if you only copy part of it.
Wrong. There is the notion of fair use:
http://www.bitlaw.com/copyright/fair_use.html
"Nonetheless, there are some traditional activities which have been used to illustrate
when the fair use doctrine would apply. These activities include:
- small excerpts in a review or criticism for purposes of illustration or comment;"
LOL, straight to the moon!
One clinical definition of insanity is endless repetition of a self-destructive
behaviour with the expectation of getting a different outcome.
Have you ever heard of copyright and fair use?
I was referring to my Sunday newspaper ads --> 0 Centrinos offered.
What newspaper is that? Laptop computers are kind of upscale items and
your paper's readership demographics may not make it first choice in your
area. Does it have a lot of ads for paternity test clinics, check cashing, and
credit repair doctors?
What Hans de Vries discovered looking at Prescott's die should be enough to run
some 64-bit demo with this Chip at launch.
Hans started with the conviction that Prescott included 64 bit capability and then
went looking for it using digital die photo images of limited resolution. Like people
who expect a priori to see a face on Mars or Jesus on a certain taco shell, Hans
thinks he found what he was looking for. I don't know if Prescott has 64 bit capability
or not but Hans's "evidence" that it does is far from convincing.
But I'd be interested in your--or anyone else's--opinion on how long (and what) it
will take for AMD64 to become entrenched and Yamhill-proof.
About as long as it took 3DNow! to become SSE-proof. Wait, AMD processors support
SSE. About as long as it took TFP to become SSE2-proof. No wait, AMD dropped
that from K8 early on in favor of SSE2.
Let's just say longer than it will take AMD to add IPF compatibility to K9 ;^)
As he is the most influential person in the computer industry, there is no second person
who can deny what he's saying.
ROFLMAO! He's a minor folk hero for geeks and enthusiasts. Beyond that he is
a salaried programmer for a soon to fail and be forgotten company.
It's like discussing the events of nature - if you will say that it doesn't rain when
it does people will think you are stupid.
So you consider Linus infallible, like a geek pope?
L. Torvalds had some negative things to say about Itanium (EPIC) architecture.
He works for Transmeta right? A microprocessor company quickly going broke
from trying to compete with Intel. May not be an unbiased source of opinion on
EPIC hmmmm? Aside from that, AFAIK he has no experience or track record in
compiler technology, chip design, or computer architecture. Other than his
celebrity status from Linux is there any particular reason the opinion of this
particular programmer is more noteworthy than any other?
Intel is trying to convert shooting itself in the head with Itanium (making Intel's 64-bit
strategy synonymous with the very iffy Itanium architecture) to shooting itself in the foot
with Itanium
If you want to determine the uP vendors that are suffering self-inflicted gunshot
wounds, as you put it, perhaps you should just follow the rivers of red ink to their
sources.
I suspect this is an understatement. Hasn't Intel taken a lower road by using an architecture that
is easier to design in silicon, but more difficult for the software folks?
Or, maybe that is the higher road. And, Intel is right and everyone else is wrong?
That remains to be determined. EPIC is neither the second coming of RISC its proponents
claim nor the disaster its opponents make it out to be.
I suspect in the end IPF will prove better than some RISC ISAs and worse than other RISC ISAs
on a technology/implementation normalized basis. But with CPU transistor counts exceeding
10 million and the CPU/memory imbalance growing, the effect of architectural choice has a
weak influence on performance and cost compared to 1) process technology, and to a lesser
extent 2) implementation quality. The only major exception is that x86 is really brain dead for
floating point and needs a lot of work in implementation and software to make up for it compared
to RISC and EPIC.
What does IPF stand for?
Itanium processor family. Intel reportedly changed the name from IA-64 to reduce the emphasis
on 64-bitness as a primary feature.
However, Opteron in 32-bit mode takes full advantage of compilers optimized by Intel. 64-bit mode
is way behind in compiler technology, so don't be fooled by comparisons between a mature
compiler technology and one that is teething.
I don't think many people would deny that AMD64 compiler technology is in its infancy. But AMD64
compiler technology is essentially an offshoot of mainstream x86 compiler technology so it will
likely approach its asymptotic limits rather quickly. Likewise computer enthusiasts should keep in
mind that IPF compiler technology, although years ahead of AMD64, is still relatively young for an
entirely different paradigm, with at least a decade of steady and significant improvements ahead of
it if it takes a path similar to RISC compiler technology.
Bonefish, that may be Intel's answer, but it is a very weak answer. Software emulation of
x86-32 on a different architecture (especially one that clocks at a third the speed of a real x86)
will always be pathetic.
Emulation will always be pathetic but dynamic binary recompilation/translation is a proven
technology. For a while using FX!32 on Alpha was the fastest way to run many x86 binaries.
That specific technology achieved over half of native Alpha performance on x86 integer and
floating point intensive codes. IPF may be a more difficult platform for this than Alpha but
more than five years have passed to improve the algorithms.
No rational IS department will run x86 software on Itanium, except perhaps for offline
utilities which don't need speed. If they are true-blue Intel then they are much better advised
to run a Xeon server next to the Itanium server, and migrate as Itanium software becomes
available.
No one claims anyone will buy IPF systems to run x86 code as their primary production
programs; those will be native. But most companies have hundreds of secondary
programs, many developed in house, that can't realistically all be ported to IPF. These
perform important and not so important functions that aren't on critical paths and don't burn
many CPU cycles. The purpose of x86 compatibility for IPF, be it HW or SW based, is to run
those secondary legacy apps. If Intel can just approach FX!32 operating efficiency with its own
software scheme then that should be more than sufficient performance for this role.
Tell me one thing though, why are Intel products so overpriced?
Lesson of the day for comrade edgarcayce:
Q: What is the value of something?
A: Whatever you can get somebody to pay for it.
If Intel's products were overpriced then they would go unsold.
But both P4 and Athlon seem to be quite "testable."
All chips are testable to an arbitrary degree, the question is what to test for since testing
for everything is absolutely out of the question. The I2 has been in production for 9 months
while P4 and K7 microarchitecture chips have been in production for years so their test
coverage is better understood. IIRC AMD had an issue with some K7s producing incorrect
results for certain SIMD code sequences a while back. That sounds a lot like the I2 bug.
No it hasn't. You think AMD lives in another universe where defects don't happen?
AMD was clever enough to use a delicate package so users will assume
their dead/malfunctioning Athlon was their own fault. Affected users don't
want to appear too stupid to correctly install a heatsink so instead of
complaining on DIY sites they silently go out and buy a new processor.
FACS: When will Intel wisen up and recall Itanic2
Be careful what you wish for. If Intel were to drop the Itanium...
Do you really think the purpose of this polemic was to engage in discussion?
Why Prescott in 2003, but not Dothan? (EOM)
IMO because Prescott will bring in the most revenue and/or profit per 90 nm
wafer and thus represents the best economic choice during ramp up when
capacity is limited.
Some of the newest designs are using a hybrid Logic/Flash/Analog process at 130nm.
I thought that thing was a hybrid module that bonded together separate die made
with different process/process options for the logic, flash, and RF functions.
From the table, I think only HP ships Xscale (right?). They sold ~430k units per quarter
in Q2 '02. Assuming that shipment stayed flat, we are looking at an additional 3.9Mil Xscale
units per quarter. An ASP of ~$75 would result in an incremental $300mil per quarter. Not
bad, if his prediction comes true!
The ASP for Xscale is probably less than half of your figure. OTOH the total market for
ARM architecture chips right now is about 140M devices per quarter and has been
growing rapidly for the last 3 or 4 years. The PDA market is sexy but only a tiny sliver of the
overall ARM market. The bread and butter is in cell phones. Third generation cell phones
are comparable to PDAs in compute capability and Intel seems to be very successful in
scoring multiple design wins for upcoming 3G phones. This segment is a tiny fraction
of the cell phone market right now, but if it takes off then we might see Intel's shipments
of Xscale soar into the many tens of millions per quarter for just that one application
segment. And don't let the ASP fool you: these are small devices made in fully depreciated
180 nm fabs, so the margins are probably incredible. Then perhaps the telecom
division will turn from a sinkhole into a money machine.
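The sensitivity to the ASP assumption is easy to check; both prices below are assumptions (the quoted post's ~$75 and roughly half of that):

```python
# Revenue sensitivity to the ASP assumption. Both ASPs are assumptions:
# the quoted post's ~$75 and the "less than half" figure from the reply.
units_per_q = 3.9e6                 # incremental units/quarter (quoted post)

for asp in (75.0, 35.0):            # $ per unit
    revenue = units_per_q * asp
    print(f"ASP ${asp:.0f}: ${revenue / 1e6:.0f}M incremental per quarter")
# -> ~$293M at $75 (the post's ~$300M), but only ~$137M at $35,
#    so the conclusion hinges almost entirely on the ASP.
```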