Investigation into Sender (Server) Windows with Iperf

AIM

To see if the bugs found with the client side of iperf also occur on the sender (server) side.

METHOD

For the same value of the client socket buffer size, alter the sender socket buffer size and monitor with web100 to see the TCP values obtained for sndwin.

SCRIPTS

Modified versions of the scripts from the previous experiments were used, taking the 'correct' value as reported by web100: do_socketbuffer.pl (a rough sketch of such a driver is given at the end of this page).

All tests were run with the following static settings:

Client: pc55. tcp_rmem: 4096 131072 8388608; tcp_wmem: 4096 131072 8388608
Server: pc56. tcp_rmem: 4096 131072 8388608; tcp_wmem: 4096 131072 8388608

iperf transfers were run for 5 seconds (as we're not monitoring throughput). web100 traps were set at an interval of 0.1 seconds (100 ms).

RUN 1

Server window set to: 8k, 16k, 32k, 64k, 128k, 256k, 512k, 1024k, 2048k, 4096k, 8192k, with the client window fixed at only 64k. Monitor the effects of the sender (server) window at the client side with web100. Background load doesn't matter, as I'm only monitoring the values that TCP should set anyway, which should not be load dependent.

Note that the server also reports that the socket buffer size on the receiver was set to double the value entered. So the variable we're interested in is actually CurrentRwinRcvd (blue on all graphs). It also seems to have a maximum value of approximately 2 Mbytes: the measured value is 2096704 bytes, i.e. 1.9996 Mbytes.

The 1024k case looks a bit odd - I'll have to do that one again.

For all windows below 1024k, the value of the set rwin appears to be one and a half times the value requested, whereas iperf states that it is actually twice that requested. The variations in the rwin value are most probably due to the 2.4 kernel autotuning. It might also be worth investigating the steepness of the rwin curve, as this would have an impact on the performance of TCP when the connection first opens.

Anyway, here's the 1024k case again. It looks fine - I think I must have typed something wrong on the one above. Again, the value detected by web100 for the rwin appears to be 1.5 times the value requested (1542112-1571072 bytes, i.e. 1.4706-1.4983 Mbytes). It would be interesting to see under what circumstances the windows change.

RUN 2

This run investigates whether there is any added latency in achieving a large window for different window sizes. From the graphs above it appears that there is a small period just after the connection opens before the 2 Mbyte window is reached. I will set server windows of {1m, 1.5m, 2m, 2.5m, 3m, 3.5m, 4m, 4.5m, 5m, 5.5m, 6m, 6.5m, 7m, 7.5m, 8m} and again monitor for 5 seconds, but this time with a higher trap rate of 10 ms.

Not very interesting. There doesn't seem to be much variation, nor an initial period during which the rwin settles. This makes sense really, as the values are negotiated at the three-way handshake. But at least it is now certain that the maximum rwin settable by iperf (and other TCP applications?) is 2 Mbytes.

Conclusion

Iperf does not correctly set the server window. It is actually 1.5 times the value requested (as opposed to twice, as reported by iperf). It also has a maximum bound of 2 Mbytes, which is reached with a requested window of 1.33 Mbytes (1.33 times 1.5 = 2 Mbytes). Any requested value above 1.33 Mbytes has no effect on setting the window size above 2 Mbytes.

Tue, 9 July, 2002 0:36
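SCRIPT SKETCH

The original do_socketbuffer.pl is not listed on this page, so the following is only a rough sketch, under stated assumptions, of what a driver for RUN 1 might look like when run from the client (pc55): it restarts the iperf server with each requested window, runs a 5 second transfer with the client window fixed at 64k, and logs the web100 variables at the 100 ms trap interval. The ssh invocation and the web100_trap.pl logging helper are hypothetical, not the actual scripts used; the only iperf options relied on are -s, -c, -w and -t.

#!/usr/bin/perl -w
#
# Minimal sketch of a do_socketbuffer.pl-style driver, run on the client
# (pc55).  The ssh launch of the remote iperf server and the
# web100_trap.pl logging helper are assumptions for illustration only.

use strict;

my $server     = 'pc56';    # sender (server) under test
my $client_win = '64k';     # fixed client window (RUN 1)
my $duration   = 5;         # seconds per transfer
my $interval   = 0.1;       # web100 trap interval (100 ms)

# server windows for RUN 1: 8k, 16k, ... 8192k
my @server_wins = map { (8 * 2**$_) . 'k' } 0 .. 10;

foreach my $win (@server_wins) {

    # (re)start the iperf server with the requested socket buffer size;
    # assumes passwordless ssh to pc56 - adapt to however iperf is launched
    system("ssh $server 'pkill iperf; iperf -s -w $win > /dev/null 2>&1 &'");
    sleep 2;    # give the server time to come up

    # start the web100 logger in the background; web100_trap.pl is a
    # hypothetical helper that samples CurrentRwinRcvd (and friends)
    # every $interval seconds and writes them to the named log file
    defined(my $pid = fork()) or die "fork failed: $!";
    if ($pid == 0) {
        exec("./web100_trap.pl --interval $interval --out rwin_$win.log")
            or die "exec failed: $!";
    }

    # run the transfer from the client side with a fixed 64k window
    system("iperf -c $server -w $client_win -t $duration");

    kill 'TERM', $pid;    # stop the logger
    waitpid($pid, 0);
}

For RUN 2 the same loop would apply with the 1m-8m window list and $interval set to 0.01 (10 ms).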