Jul 3, 2010

32-bit vs. 64-bit Computing - What's Really The Difference?

64-bit computing has been around almost since the beginning of computing, but it didn't reach the mainstream consumer marketplace until just a few years ago, even though AMD announced AMD64 almost a decade ago. Getting the rest of the market on board was aided by Intel's own implementation, called Intel® 64. From there, you needed an operating system that supported the technology and applications compatible enough to take advantage of it. Before Windows® Vista and OS X 10.6, those were hard to come by. You've probably seen that Windows® 7 comes in 32- and 64-bit versions and wondered what the difference is. Even now, I'm still surprised at how long it's taking to convert everyone over.

Without getting too technical or doing much math, I'll explain the difference and why you should go with 64-bit.

The Difference:

In computer architecture, 32-bit and 64-bit refer to the size of the data a processor works with: its integers and memory addresses. A 64-bit CPU handles values that are 64 bits wide instead of 32. It's easy to think about it in terms of literal addresses, too. Say you have a phone book, we'll call it the 32-bit phone book, and the entries are people's contact information. The 32-bit phone book can address a total of 4 gigabytes (GB) of memory, or a little over 4 billion entries. That sounds like a lot, but it's actually fewer entries than there are people living on the planet today. Contrast that with the range of 64-bit addressing, which is over 18 quintillion entries (far more memory than you can buy right now), and you can say that the "64-bit phone book" would be able to store the names and contact information of every person who ever was or will be on the planet.
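The arithmetic behind those figures is easy to check yourself. Here's a quick Python sketch (just illustrative math, not anything from a real system) that computes both address ranges:

```python
# Each address selects one byte, so an n-bit address reaches 2**n bytes.
addresses_32 = 2 ** 32   # the "32-bit phone book"
addresses_64 = 2 ** 64   # the "64-bit phone book"

# 32-bit: 4,294,967,296 addresses, i.e. 4 GB, a little over 4 billion entries.
print(f"32-bit: {addresses_32:,} addresses (~{addresses_32 / 10**9:.1f} billion)")

# 64-bit: 18,446,744,073,709,551,616 addresses, over 18 quintillion entries.
print(f"64-bit: {addresses_64:,} addresses (~{addresses_64 / 10**18:.1f} quintillion)")
```

Run it and you'll see exactly where the "4 GB limit" and the "18 quintillion" numbers come from.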

For your computer, the practical upside is support for more than 4 GB of system memory, and that means better multitasking and generally improved performance across the board.