For any of you that are wondering.... chips are going 64 bit so they can handle more impressive hardware.
More bits equates to better multi-tasking, NOT increased speeds.
Err... no. Moving to 64-bit won't have any effect on multitasking. It widens the CPU's registers and memory addresses, which means access to far larger data stores (i.e. indexing huge databases, referencing larger areas of memory, etc).
Mathematically speaking, 2^32 = 4,294,967,296 (roughly 4 billion), i.e. a 32-bit CPU can address about 4 billion bytes of memory (the 4GB RAM ceiling for Win32 and similar 32-bit OSes).
2^64 = 18,446,744,073,709,551,616 (about 18 "quintillion bytes", or ~18 exabytes). Remember, we are squaring the possibilities here, not doubling them.
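If you want to poke at those numbers yourself, here's a quick C sketch (purely illustrative, assuming a C99 compiler) that prints the pointer width of whatever machine you build it on, plus the 32-bit and 64-bit address-space limits:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Pointer width in bits tells you how much memory the CPU can address. */
    unsigned bits = (unsigned)(sizeof(void *) * 8);

    printf("pointer width : %u bits\n", bits);

    /* 2^32 bytes = the 4GB ceiling of a 32-bit address space. */
    printf("32-bit limit  : %llu bytes (~4 GB)\n",
           (unsigned long long)UINT32_MAX + 1ULL);

    /* 2^64 won't fit in a 64-bit variable, so print 2^64 - 1,
       the highest addressable byte (~18 EB). */
    printf("64-bit limit  : %llu bytes (~18 EB)\n",
           (unsigned long long)UINT64_MAX);
    return 0;
}
```

On a 32-bit box sizeof(void *) comes out as 4 (32 bits); on a 64-bit box it's 8 (64 bits).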
(Data prefixes reference: http://www.magictree.com/dataprefixes.htm)
Multitasking a few consumer-level apps and 3D video cards will see no benefit from 64-bit. You will see speedups on large database/set access, but only because the CPU no longer has to make multiple passes to reach indexes of indexes for huge data sets. And by large databases I mean MASSIVE - as in "tax data for every single US company and citizen" massive, or "complete DNA map of an entire species" massive. Likewise for our scientific and engineering friends: accuracy to smaller decimal places will no longer need to be split across multiple calculation passes due to hardware restrictions.
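To make the "multiple passes" point concrete, here's a toy C sketch (just an illustration, not how any particular compiler actually emits it) of a 64-bit addition done the way a 32-bit CPU has to do it - two 32-bit steps plus a carry - versus the single native add a 64-bit CPU gets:

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 64-bit values using only 32-bit pieces, the way 32-bit
   hardware has to: low words first, then high words plus the carry. */
static uint64_t add64_on_32bit(uint32_t a_lo, uint32_t a_hi,
                               uint32_t b_lo, uint32_t b_hi)
{
    uint32_t lo    = a_lo + b_lo;            /* pass 1: low words      */
    uint32_t carry = (lo < a_lo) ? 1 : 0;    /* did the low add wrap?  */
    uint32_t hi    = a_hi + b_hi + carry;    /* pass 2: high words     */
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t a = 0x00000001FFFFFFFFULL;
    uint64_t b = 0x0000000000000001ULL;

    uint64_t two_pass = add64_on_32bit((uint32_t)a, (uint32_t)(a >> 32),
                                       (uint32_t)b, (uint32_t)(b >> 32));
    uint64_t native   = a + b;   /* one instruction on a 64-bit CPU */

    printf("two-pass : %llx\n", (unsigned long long)two_pass);
    printf("native   : %llx\n", (unsigned long long)native);
    return 0;
}
```

Scale that up to the indexes-of-indexes mentioned above and the extra passes are exactly what 64-bit hardware gets rid of.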
PCI Express bandwidth is barely a drop in the ocean compared to the sorts of things 64-bit calculations will scale to. In fact, 32-bit is more than enough for the average consumer's needs for quite some time. As mentioned, only the high-end professional arenas will need the power of 64-bit in the next 5 years.
I mean honestly, when was the last time your video card needed to address 4GB of texture storage for a single scene? Hell, I've got mates who render special effects for Hollywood films, and their scenes aren't that large. And yes, they all happily render on 32-bit consumer processors.
The reason we see 128/256-bit video cards these days is not that the data accuracy needs to be that great, but that the bus between the GPU and its memory needs to send and receive more data in parallel. I think folks are confusing the width of a data bus with the width of a CPU's storage registers.
Quite frankly, Howard, as a programmer I expected a better explanation than that from you.