not knowing how os x truly works, i can't say whether files and their (up to 4) forks should be contiguous for best performance or not. i do know that many os x defragging tools do NOT worry about keeping individual forks contiguous with the entire file; they only keep each fork contiguous with itself.

also, though "legend would have it"..... hfs+ would have been logically much better at staving off fragmentation than the competitors when it came out (os 8.1, i was wrong earlier), simply by the nature of how small a segment of a hard drive it could allocate (smaller allocation blocks = less room for fragmentation). i know fat16 was horrible when it came to getting fragmented, and ntfs has improved on fat32. in the same way, hfs+ had big issues when it came on the scene for macs, but in its current state on 10.3 it even self-defrags files up to 20 MB, much better than any other desktop option out there.

sandor

-----------------------------------
On Apr 4, 2004, at 9:40 AM, Alex wrote:
>
> On Saturday, Apr 3, 2004, at 15:46 Canada/Eastern, sr ferenczy wrote:
>
>> [...] actually, according to many, including david shayer in this
>> article:
>> http://db.tidbits.com/getbits.acgi?tbart=07254 have pointed out that
>> the OPPOSITE of what you are saying is true. [...]
>
> "[...] Legend has it that the FAT file system was pretty bad about
> fragmenting files [...]"
>
> Legend.
>
> David Shayer's article is interesting, but quite incomplete. (No
> mention of resource forks -- for those more directly interested in
> things under the hood --, or of issues related to video editing, large
> databases, etc.)
>
> f
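
p.s. for anyone curious, the 10.3 self-defrag behavior boils down to a simple check at file-open time. this is just my own sketch in python of the rule as described in public write-ups of the panther kernel (the function name and constants are mine, not apple's); as i understand it, a file gets rewritten contiguously if it is small and already split into enough extents to be worth moving:

```python
# Sketch (not Apple's code) of the on-the-fly defrag heuristic HFS+
# reportedly uses on 10.3: when a file is opened, if it is under 20 MB
# and broken into 8 or more extents, the kernel relocates it into one
# contiguous allocation. Names and exact thresholds are assumptions.

MAX_SIZE_BYTES = 20 * 1024 * 1024   # only small files qualify
MIN_EXTENTS = 8                     # fragmented enough to be worth moving

def should_defragment(file_size: int, extent_count: int) -> bool:
    """Return True if the file would be rewritten contiguously on open."""
    return file_size < MAX_SIZE_BYTES and extent_count >= MIN_EXTENTS
```

so a 1 MB file in 10 pieces gets moved, but a 30 MB file never does, no matter how fragmented, which is why big video files and databases still need a real defragger.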