News Focus

pgerassi

01/05/06 3:41 AM

#68761 RE: wbmw #68740

Wbmw:

You are confusing address space with physical memory. Linux, Windows and various other OSes use the hard disk to back virtual memory beyond what physical RAM can hold. All modern x86 CPUs do the virtual-to-physical translation through page tables, using TLBs as caches to speed that translation up.
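The translation step can be sketched in code. This Python sketch splits a 48-bit x86-64 virtual address into the four page-table indices plus the page offset the hardware walk uses (field names are the conventional ones; the address is just an illustrative value):

```python
# Decompose a 48-bit x86-64 virtual address for the 4-level page-table
# walk: each level indexes 512 (2^9) entries, and the final 12 bits
# select a byte within a 4KB page.
def split_vaddr(vaddr):
    offset = vaddr & 0xFFF            # byte within the 4KB page
    pt   = (vaddr >> 12) & 0x1FF      # page-table index
    pd   = (vaddr >> 21) & 0x1FF      # page-directory index
    pdpt = (vaddr >> 30) & 0x1FF      # page-directory-pointer index
    pml4 = (vaddr >> 39) & 0x1FF      # top-level index
    return pml4, pdpt, pd, pt, offset

# The pieces reassemble into the original (canonical low-half) address:
pml4, pdpt, pd, pt, off = split_vaddr(0x00007F123456789A)
assert (pml4 << 39) | (pdpt << 30) | (pd << 21) | (pt << 12) | off \
       == 0x00007F123456789A
```

The TLB exists precisely because doing this four-level lookup in memory on every access would be far too slow; it caches recent virtual-to-physical mappings.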

On Yonah, you can only allocate 4GB of address space among the various applications, the kernel, drivers and devices. It is not uncommon for a card with 128MB of graphics memory to be allocated 256-384MB of address space. The kernel takes roughly another 64-256MB depending on what it is needed to do and the various loaded drivers.

When you run applications, each is allocated address space for its code, its data, its stack, and its heap (temporary data). On machines that do many tasks at the same time, just the kind of environment where dual cores are desired, it is not uncommon to allocate more address space than the machine has in physical memory. The OS then uses an area of disk, known on Windows as the swap file and on Linux as the swap partition, as additional backing for virtual memory. On my home box I have 4GB of swap and 1GB of physical memory (plus 256MB on the video card). It doesn't use much of that swap (mostly due to the efficiency of Linux and its applications; on Windows it uses up to 2GB), but to take advantage of all of it I would need to run a 64-bit OS on an AMD64 system.

So you don't need 2GB of physical memory to need the larger 64-bit address space. Everything you run simultaneously must allocate no more than 4GB, unless you are willing to swap entire applications (the running code and all associated data of each program) to disk. It's slow, but you can run. Single-core machines run just fine that way, though the system can appear to go away for a while as programs swap in and out. For dual cores the problem is that the programs running on both cores have to be in memory at the same time, or you get no benefit from the second core. So a dual-core machine needs more memory rather than less.

The AMD64 machines can allocate 48 bits of virtual address space (256TB) even though there might be only 1GB of physical memory. The OS can simply swap in the pages (4KB is typical for PCs) needed by all the running programs and the kernel. That's a far smaller subset of all of the virtual memory, and pages can be swapped in and out much faster than the GBs of whole programs in the other method. That reduces any apparent going-away periods.
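The arithmetic behind those numbers is easy to check (a quick sketch, using binary units):

```python
# 48 bits of virtual address space, 4KB (2^12-byte) pages.
VA_BITS = 48
PAGE_SIZE = 4 * 1024

va_space = 1 << VA_BITS                 # bytes of virtual address space
print(va_space // (1 << 40), "TB")      # prints: 256 TB
print(va_space // PAGE_SIZE)            # number of 4KB pages: 2^36
```

Paging at 4KB granularity is what lets the OS keep only the hot subset of that enormous address space resident in a much smaller physical memory.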

The 2GB limit in most PCs is mostly due to the way Windows XP allocates address space: applications get 2GB, and the other 2GB is reserved for the kernel, drivers and devices. This partition point can be changed with a boot parameter to 3GB for apps and 1GB for the rest, but only XP Pro and Win2000 allow it; XP Home and the other MS OSes can't, with the exception of XP64 and later Vista. In Linux, it is a kernel compile-time config option. To efficiently use more than 2GB, you need AMD64, and installable memory sizes are ratcheting up as we speak.
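The boot parameter referred to here is the `/3GB` switch in `boot.ini`; a hypothetical entry (the ARC path will differ per machine) looks like:

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB
```

Note that an individual application only sees the extra gigabyte if its executable was linked as large-address-aware; the switch by itself just moves the user/kernel partition point.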

If you talk to Linus Torvalds, he has stated that x86 has needed the AMD64 extension to a 64-bit address space for the last 5-6 years. It was getting more and more difficult to make allocations efficient enough to keep the footprint within 4GB. The 48-bit virtual address space of the K8 was a godsend, and kernel developers can (and do) use it to better optimize performance and increase security. Most x86 OS designers and maintainers agree with him on this topic, including those at Microsoft.

Pete

Michael Moy

01/05/06 7:42 AM

#68766 RE: wbmw #68740

> Given the miniscule percentage of the market that upgrades
> hardware, I'd have to say the lack of large memory options in
> configurable PCs today points to a lack of demand, and that
> pretty much renders the 64-bit option irrelevant for consumer
> PCs for the time being. Even if someone were to buy a 64-bit PC
> today, they couldn't get the memory to future proof it in the
> way you are arguing that a 64-bit CPU can future proof it.

This isn't true. There are systems being sold today with 256 MB of memory on the cheap. Do they page and swap? Sure they do. Is this ideal? No. But it works for some customers. And you could extrapolate that to 64-bits as well.

As far as the upgrade considerations go, consider that the new
WMF hole in Windows 98/ME is a killer problem and that Microsoft
plans no fix for it. Ars Technica is recommending that users
upgrade. An upgrade to just XP Home is $100 to $150 retail.
In most cases, users would be better off upgrading
their hardware. If XP Home loses security updates in 2007,
that's another reason to upgrade hardware, especially if that
forces you into Windows Vista. Windows XP Pro is an option,
but you're paying $150 to $200 for something that's going EOL
two years later.

> My argument is to go the dual core route, because at least
> then you have applications and an OS that already supports
> it, with more to come in the future without needing
> expensive hardware upgrades.

If Microsoft is serious about dumping XP Home in 2007, then you
don't have the OS.

> I shudder at the thought of memory bandwidth being limited
> by an I/O pipe the width of USB. USB can only sustain about
> 40MB/s of bandwidth, and you want to use it as a 64-bit
> memory expansion? Ugh.

Do you do performance engineering? I do, and I'd be happy to have
USB 2.0 performance for paging.