Probably not. My guess is they went for the holy grail of OOO execution processors but couldn't get it working in time. The holy grail being dynamic multi-threading or DMT.
Imagine a hyperthreaded processor, but instead of running two separate threads explicitly coded by programmers, the CPU runs along one thread until it hits a conditional branch that its history tables say is hard to predict. Instead of making a likely poor guess, it starts a second thread: one thread goes down one path and the other goes down the other. When the branch condition is resolved, the wrong-path thread is collapsed and its processor state is reloaded from the correct thread's state, so the whole process can start again.
This is fiendishly complicated because neither thread can update memory or do anything else permanent until the right path is known. That means a whole bunch of operations have to be accumulated in two separate groups and then either discarded or quickly emptied out to memory. It is easy to imagine DMT soaking up tens of millions of transistors.
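The drain-or-discard idea can be sketched in a toy Python model. Everything here is illustrative and my own invention for the sake of the example; real DMT hardware would do this with store queues and register checkpoints in silicon, not dictionaries and lists:

```python
# Toy model of dual-path speculation: each speculative path buffers
# its stores privately; only the winning path's buffer drains to memory.

def run_path(taken, memory):
    """Speculatively execute one side of a branch, buffering stores
    in a private list instead of writing them to memory."""
    store_buffer = []  # pending (address, value) writes
    if taken:
        store_buffer.append(("x", memory.get("x", 0) + 1))
    else:
        store_buffer.append(("x", memory.get("x", 0) - 1))
    return store_buffer

def resolve_branch(condition, memory):
    # Fork: run both paths, each with its own private store buffer.
    taken_buf = run_path(True, memory)
    not_taken_buf = run_path(False, memory)

    # Branch resolves: drain the correct buffer to memory and
    # discard the other wholesale -- nothing permanent happened
    # until this point.
    winner = taken_buf if condition else not_taken_buf
    for addr, value in winner:
        memory[addr] = value
    return memory

mem = {"x": 10}
resolve_branch(True, mem)  # taken path wins: x becomes 11
```

The hard part the hardware faces, which this sketch glosses over, is that the two buffers fill concurrently and can alias the same addresses, so loads on each path must snoop their own path's pending stores.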
If done right it could be a big speed-up (in integer code it isn't uncommon for OOO processors to execute and then discard the results of 3 or 4 instructions for every 10 retained, because of execution down wrong paths). But one subtle little logic error or one unconsidered conceptual corner case and your processor goes off to la-la land, and figuring out why would be very difficult.
HT wasn't ready in time for Willamette's release, and in terms of complexity DMT is like HT squared. If Intel attempted DMT in Prescott, it's hardly surprising that it is still very much a work in progress.