Experiments on Oct 7
A plain-text version of the experiment data is available here.
Conclusion: kernel 2.4.10 with the NIC driver compiled directly into the kernel is much faster at IP forwarding than kernel 2.4.10 with the NIC driver compiled as a module.
What led to today's experiments?
When I boot kernel 2.2.16-22, the kernel takes the onboard 3Com NIC as eth0, tulip NIC 1 as eth1, and tulip NIC 2 as eth2. When I boot kernel 2.4.10, it takes tulip NIC 1 as eth0, tulip NIC 2 as eth1, and the onboard 3Com NIC as eth2. So each time I reboot into a different kernel, I must either change the network interface settings or re-wire the cables. To avoid re-wiring (which is sometimes impossible when rebooting into a different kernel remotely), I compiled the Tulip driver as a module so that the kernel would always take the 3Com NIC as eth0 at boot. But I found the throughput was then only about half of capacity, which led to the following experiments.
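For the record, the usual way to pin interface names under 2.4 when the drivers are modules is with alias lines in /etc/modules.conf. A sketch, assuming the onboard 3Com card uses the 3c59x driver (the exact driver name is an assumption, not recorded above):

```shell
# /etc/modules.conf -- pin interface ordering when NIC drivers are modules
# (3c59x is assumed for the onboard 3Com card; adjust to the actual driver)
alias eth0 3c59x
alias eth1 tulip
alias eth2 tulip
```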
The network infrastructure is the same as before (the default one); all experiments refer to experiment 2.
All 3 computers (madrid, dublin, prague) are running kernel 2.4.10.
NIC card information:
eth0: 3Com
eth1: tulip1
eth2: tulip2
eth1 and eth2 are involved in the static routing experiments.
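The static routing setup on the middle box can be sketched as follows; the addresses are illustrative placeholders, not the actual testbed values:

```shell
# On the router (e.g. madrid): enable IP forwarding and bring up the two
# routing interfaces on separate subnets. Addresses are placeholders.
echo 1 > /proc/sys/net/ipv4/ip_forward
ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up
ifconfig eth2 192.168.2.1 netmask 255.255.255.0 up
# Each end host then points its route for the far subnet at the router:
# route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.1
```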
Experiment Data When Tulip NIC Driver Is Compiled into the Kernel Directly (eth0:tulip1, eth1:tulip2, eth2:3Com)

Speed | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th | 9th | AVG
Transmitting Side (KB/sec) | 11549.29 | 11529.73 | 11536.11 | 11535.76 | 11524.24 | 11497.34 | 11494.47 | 11497.35 | 11496.64 | 11520.54
Receiving Side (KB/sec) | 11476.33 | 11478.58 | 11474.88 | 11479.03 | 11479.54 | 11488.55 | 11485.59 | 11488.09 | 11487.90 | 11482.21
Data Sent (Bytes) | 16777216 | 16777216 | 16777216 | 16777216 | 16777216 | 124835840 | 124835840 | 124835840 | 124835840 |
From the above runs of experiment 2, we achieve full throughput on the routed connection: about 11500 KB/sec. This is what we expect.
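A quick sanity check that ~11500 KB/sec really is full throughput for a 100 Mbit link (assuming the benchmark reports KB as 1024 bytes; the measurement tool is not recorded here):

```shell
# Convert the observed routed throughput to link-level Mbit/sec.
kb_per_sec=11500                        # observed routed throughput
bits_per_sec=$((kb_per_sec * 1024 * 8))
awk -v b="$bits_per_sec" 'BEGIN { printf "%.1f Mbit/sec\n", b / 1000000 }'
# roughly 94 Mbit/sec of payload, close to the 100 Mbit line rate once
# Ethernet/IP framing overhead is accounted for
```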
We can conclude that, provided the 3Com card is not significantly faster than the Tulip chip, compiling the NIC driver directly into the kernel improves the routing speed significantly (about 2:1).
(NOTE: with the Tulip driver compiled directly into kernel 2.4.10, the order of the eth? interfaces changes: eth0:tulip1, eth1:tulip2, eth2:3Com.)
To rule out the possibility that the improvement in routing speed is caused by the 3Com card, we reconfigured the NICs so that the two cards involved in routing are both tulip chips.
Experiment Data When Tulip NIC Driver Is Compiled into the Kernel Directly (two tulip cards involved in routing)

Speed | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th | AVG
Transmitting Side (KB/sec) | 11530.66 | 11531.83 | 11528.91 | 11535.87 | 11494.03 | 11495.83 | 11497.26 | 11495.93 | 11516.34
Receiving Side (KB/sec) | 11474.57 | 11478.03 | 11481.41 | 11480.12 | 11487.71 | 11488.64 | 11488.69 | 11488.18 | 11482.74
Data Sent (Bytes) | 16777216 | 16777216 | 16777216 | 16777216 | 124835840 | 124835840 | 124835840 | 124835840 |
From the above experiments we see that the speed is still about 11500 KB/sec,
so the improvement is not due to a speed difference between the 3Com card and the tulip chips.
So, we can draw the conclusion:
With the NIC driver compiled directly into the kernel, the routing speed improves greatly compared with the case where the driver is compiled and loaded as a module. The ratio is about 2:1.
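For reference, the only difference between the two builds is how the Tulip driver is selected in the kernel configuration. A sketch of the relevant .config lines (the option name is from the 2.4 driver tree and is shown as an illustration):

```shell
# Fast case: Tulip driver compiled directly into the kernel
CONFIG_TULIP=y
# Slow case (~half the routing throughput): driver built as a module
# CONFIG_TULIP=m
```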
While doing the above experiments, I was at madrid's console, with:
one telnet session to prague
one telnet session to lava (NOT active while experimenting)
one X server holding a netscape session from lava (NOT active while experimenting)
For the detailed data of the experiments, see here.