On Tuesday, December 17, 2002, at 04:15 AM, Vard Nelson wrote:

> We transfer large files (>> 2 GB) across the network. At "100 Mb/sec"
> it takes 13 minutes per 2 GB, much longer than the theoretical
> 100 Mb/sec (12 MB/sec), and a Gigabit switch only decreases the time
> to 11 min per 2 GB file. What goes on? Are there software products or
> whatever that help achieve closer to theoretical speeds?

Things that make a difference:

1. Cable quality.
2. Switch quality/setup.
3. Client hardware speeds (CPU/disk/NIC/RAM).
4. Server hardware speeds (CPU/disk/NIC/RAM).
5. Client OS software.
6. Server OS software.
7. Transfer protocol.

A long time ago I gathered a bunch of results and put together a table with some relative comparisons, benchmarking different AppleShare IP server solutions:

http://www.opus1.com/ron/asipstats.html

It's quite outdated compared to what's currently on the market, hardware- and software-wise, but it does a good job, at least in this case, of showing the *extremely* wide range of performance across the various products that were on the market at a given point in time. On a 100 Mb link, it was possible to see everything from 2 to 65 Mb/sec with some pretty ancient hardware... (If anyone wants to add numbers for newer hardware, send it on in! I hear that current speeds are hitting up to 80 Mb/sec with current OS X boxen.)

Typical bottlenecks usually include slow server OSes (*cough*Microsoft SFM*cough*), cheap "switches" designed for "workgroups" (not core infrastructure), cheapie NICs, slowish IDE drives, and misconfigured switches.

-Bop
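
P.S. Since the units trip a lot of people up: 100 Mb/sec is megaBITS, i.e. roughly 12 MB/sec of payload at the absolute best, so 2 GB should move in about 3 minutes on a perfect link. 13 minutes works out to only ~22 Mb/sec of effective throughput, and 11 minutes to ~26 Mb/sec. Here's a quick back-of-the-envelope Python sketch (the function and variable names are just illustrative, and it assumes 1 GB = 1024^3 bytes) if you want to plug in your own numbers:

    # Back-of-the-envelope throughput check.
    def effective_mbit_per_sec(size_gb, minutes):
        """Effective rate in megabits/sec for a transfer of size_gb GB."""
        bits = size_gb * 1024**3 * 8
        return bits / (minutes * 60) / 1e6

    link_mbit = 100  # nominal "100 Mb/sec" link
    ideal_minutes = (2 * 1024**3 * 8) / (link_mbit * 1e6) / 60  # ~2.9 min for 2 GB

    print(f"Ideal time for 2 GB at {link_mbit} Mb/sec: {ideal_minutes:.1f} min")
    print(f"13 min / 2 GB -> {effective_mbit_per_sec(2, 13):.0f} Mb/sec effective")
    print(f"11 min / 2 GB -> {effective_mbit_per_sec(2, 11):.0f} Mb/sec effective")

That gap between nominal and effective rate is exactly where the items in the list above (switches, NICs, disks, OS, protocol) eat their share.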