HD cluster size vs. performance?
The standard cluster size for a FAT32 HD is 4K under Win98. Would changing it to 16K, 32K, or even 64K improve HD performance? I realize that with the larger cluster sizes there would be a lot more wasted cluster space. Would there be any problems using the 64K cluster size? I've got the Partition Magic program, so changing it is no problem. I've got a 9.1 gig HD, so wasted cluster space doesn't concern me at the moment; performance is more important.
When I get some money, I plan on replacing my HD with, hopefully, four new Western Digital 9.1 gig 7200 RPM HD's, together with two Promise FastTrak66 RAID controllers, setting them up as RAID 0.
The following cluster sizes are the defaults for FAT32 drives:
Partition size        Default cluster size
less than 260 MB      512 bytes
260 MB - 8 GB         4 kilobytes (KB)
8 GB - 16 GB          8 KB
16 GB - 32 GB         16 KB
greater than 32 GB    32 KB
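To illustrate the table above, here's a quick Python sketch of the lookup. The thresholds come straight from the table; how the exact boundary values (e.g. a partition of exactly 8 GB) are handled is my assumption:

```python
def fat32_default_cluster_bytes(partition_mb):
    """Return the default FAT32 cluster size in bytes for a partition
    of `partition_mb` megabytes, per the table above."""
    if partition_mb < 260:
        return 512
    if partition_mb <= 8 * 1024:      # up to 8 GB
        return 4 * 1024
    if partition_mb <= 16 * 1024:     # up to 16 GB
        return 8 * 1024
    if partition_mb <= 32 * 1024:     # up to 32 GB
        return 16 * 1024
    return 32 * 1024                  # greater than 32 GB

# A 9.1 gig drive (about 9318 MB) falls in the 8-16 GB band:
print(fat32_default_cluster_bytes(9318))  # 8192
```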
Your cluster size is already 8 KB by default on a 9.1 gig partition, per the table. Changing it wouldn't make any difference that the naked eye could detect.
Yes, it would help. But I think partitioning the drive and installing all your other apps onto the other partitions would be a much better option for speeding up performance than increasing the cluster size.
I'll take two... CPU's
The way I look at it: the smaller the cluster size, the more total clusters you have on the HD. That being the case, when searching across all those clusters for data, the more clusters there are, the longer it should take. On the other hand, the larger the cluster size, the fewer total clusters there are to search through, which should make it faster to find data.
The only drawback with the larger cluster size is the greater waste of space, but with HD's getting so large, no way can anyone use it all anyway. I have a nine gig HD loaded with programs such as MS Office, Mechanical Desktop 1.2, and ANSYS finite element analysis, just to name a few, yet I still have over five gigs available.
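To put a number on that "waste of space" point: each file wastes, on average, about half of its final cluster. Here's a small Python sketch that totals the slack for a set of file sizes at different cluster sizes. The file sizes are made-up examples, not real measurements:

```python
import math

def slack_bytes(file_sizes, cluster):
    """Total bytes lost to partially filled final clusters: each file
    occupies whole clusters, so the tail of the last cluster is wasted."""
    return sum(math.ceil(size / cluster) * cluster - size
               for size in file_sizes if size > 0)

files = [1_200, 35_000, 4_096, 700_000]   # hypothetical file sizes in bytes
for cluster in (4 * 1024, 32 * 1024, 64 * 1024):
    print(f"{cluster // 1024}K clusters: {slack_bytes(files, cluster)} bytes of slack")
```

The slack grows quickly with cluster size, which is why the trade-off matters more as you push toward 32K or 64K clusters.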
Rather than making HD's any larger, they should be concentrating on speed and performance with IDE. The general consumer is never going to use all that space on today's new IDE HD's. The only advantage to these bigger HD's is for servers, in which case you would go to a SCSI HD.
Funny, I have to keep deleting stuff off my 10.2G IDE, and shuffling stuff to my SCSI 3.5.
I need one of the 27G 7200 RPM drives I saw at the computer show. $315.00, that's all it cost! Really thinking about it now.
The reason I don't like all the partitions is all the drive letters being used... When you already have 3 drives and 2 CD's, things get confusing real fast!
I am obviously missing something here; my old 40 meg Western Digital seems all right.
I ran some numbers using the ThreadMark benchmark program:

Run 1: data transfer rate 6.41 MB/sec, average CPU utilization 22.37%
Run 2: data transfer rate 6.41 MB/sec, average CPU utilization 18.37%
Run 3: data transfer rate 6.44 MB/sec, average CPU utilization 18.44%
It appears the only advantage is the reduction in average CPU utilization, and not by much at that.
I'll do more testing using audio/video, if I can find any bench programs that test them.
I read that for A/V, the larger the cluster size, the better.
Gordon explained it pretty well. Actually, he explained it better than I would have. Kind of normal, isn't it? hehe
We've been installing apps like that for many years. You can protect your data, downloaded files, etc. better by placing them on a partition other than the C drive.
As to benchmarking, I still don't think many of those programs are very accurate. Besides, for the average user who never defrags, does it really matter that they get a 1%, 2%, or even 5% increase in speed? It's hard even for us to notice. People like us, though, want anything that improves speed and performance.
Gordon, I differ with you a little on one thought you shared: that they wouldn't ever fill that hard drive up. I still remember the days of the 212 MB hard drive, when we thought we would never use it all. Well, sooner or later the programs will get larger and the space will be needed. But I'm in total agreement with what the HD manufacturers need to do.
I guess I am looking at what is available today, and not tomorrow, in both software and operating systems. When you bought something like a HD, you used to look toward what you might need in the future, not just today. The problem is, that was fine for HD space, but another factor has suddenly changed that formula: performance. HD performance has for the most part been stagnant for many years, especially with IDE drives, so it wasn't an issue. That has suddenly changed, with drives getting faster both in RPMs and in data transfer rates.

Even a fairly unknown technology that has been around for a little while has suddenly burst into view: RAID (Redundant Array of Inexpensive Disks). It is a way of running as many as four HD's in an array such that, when set up properly as RAID 0, the system sees them as a single drive. Four 9 gig drives would be seen as a single 36 gig. It is available now in software in NT 4.0 and in the newest Linux kernels, and in hardware for both SCSI and IDE. From the numbers I have seen for IDE drives, you are talking over three times the data transfer rate of a single HD.
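For anyone curious how RAID 0 makes four drives look like one, here's a rough Python sketch of the address mapping. The stripe size (128 blocks) is an arbitrary example, not a figure from any particular controller:

```python
def raid0_map(lba, num_drives=4, stripe_blocks=128):
    """Map a logical block address on the array to
    (drive index, block number on that drive) under RAID 0 striping."""
    stripe = lba // stripe_blocks          # which stripe this block is in
    offset = lba % stripe_blocks           # position within that stripe
    drive = stripe % num_drives            # stripes rotate across drives
    block_on_drive = (stripe // num_drives) * stripe_blocks + offset
    return drive, block_on_drive

print(raid0_map(0))     # (0, 0)   first stripe goes to drive 0
print(raid0_map(128))   # (1, 0)   next stripe goes to drive 1
print(raid0_map(512))   # (0, 128) fifth stripe wraps back to drive 0
```

Because consecutive stripes land on different drives, a large sequential read keeps all four spindles busy at once, which is where the multiplied transfer rate comes from. The flip side is that RAID 0 has no redundancy: lose one drive and the whole array is gone.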
The biggest problem I see with the availability of so much HD space, as well as RAM, is that it encourages sloppy software and operating systems. When memory is no problem, it only stands to reason that they will waste it, and they do.
I still remember writing programs at work with as little as 8K of memory. Every time I got one working, my boss would want me to add a new feature. It was challenging and very efficient, program-wise if maybe not time-wise, and most of all, fun. I wish things like Win98 were written as well, but again, memory is no object.
Presently I'm looking into getting back into software writing. I just don't know where to specialize: Visual Basic or C++. People at Microsoft tell me Visual Basic, with C++ a very close second. Others just say C++. Then the question is Visual C++ or Borland C++. I was told the professionals like Borland, but that may simply be a bias against Microsoft.