My previous experiences with my new gigabit network were disappointing. I studied the problem further by using iperf to generate raw traffic between my two Linux PCs, monitoring the traffic with KSysGuard and iftop. This time I got a whopping 950 Mb/s, near the theoretical maximum. Well, it should have reached the full 1000 Mb/s, so something is still lacking a bit.
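For reference, a one-way iperf test between the two machines can be run roughly like this (iperf 2 syntax; the address 192.168.1.10 is just a placeholder for the other PC):

  iperf -s                          # on the receiving PC
  iperf -c 192.168.1.10 -t 30 -i 5  # on the sending PC: 30-second run, report every 5 seconds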
The gigabit LAN with a 1000baseT/FD link mode is supposed to be full duplex, so I tried two-way traffic with iperf. The result, however, was roughly a gigabit in total, so in effect it behaved as half duplex. This may be due to the PCI bus that the integrated network card sits on. Conventional PCI is a 32-bit bus running at 33 MHz, giving (32 * 33M) roughly one gigabit of capacity. But that is the capacity of the entire bus, it is shared with other devices, and full-duplex gigabit would need twice that. So, OK, I got near the maximum.
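The simultaneous two-way test can be done with iperf's dual-test mode, roughly like this (again with a placeholder address):

  iperf -s                        # on one PC
  iperf -c 192.168.1.10 -d -t 30  # on the other: -d sends and receives at the same time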
The performance of the Buffalo LinkStation still feels strange. With KDE's file manager as well as with smbclient I got some 10 MB/s. By actually mounting it as a filesystem with smbmount, I only got a fraction of that, perhaps some 1.5 MB/s. That's only about 1% of the "gigabit" capacity! With Windows XP under VMware I got about the same as with Linux smbclient, some 10 MB/s. I have yet to test it more accurately on a native (non-VMware) XP installation, but it may be a bit faster.
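The two access paths look roughly like this (share name, mount point and username are just examples; smbmount came with the Samba packages of the time, newer setups use mount -t cifs instead):

  smbclient //linkstation/share -U user                   # interactive client, then e.g. 'get bigfile'  -> ~10 MB/s
  smbmount //linkstation/share /mnt/nas -o username=user  # mounted as a filesystem, plain file copies   -> ~1.5 MB/s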
1 comment:
Gigabit is a standard, and while hardware tries to live up to that expectation, it rarely does.
With a good switch, good cables, and two fast clients, you should be able to do 400 Mb/s, and at the very least 200-300 Mb/s. The LinkStation depends on drive controllers, converters, and a whole host of other intermediary devices that result in terrible performance. Using a home-built NAS I usually averaged about 300 Mb/s.