Why is the Transfer data in the summary different between sender and receiver when testing TCP?
#1541
-
When tuning a TCP connection, I get the following output:

Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 524288000 bytes to send, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 258 MBytes 2.16 Gbits/sec 0 405 KBytes
[ 5] 1.00-2.00 sec 12.0 MBytes 101 Mbits/sec 0 405 KBytes
[ 5] 2.00-3.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 3.00-4.00 sec 11.2 MBytes 94.1 Mbits/sec 0 405 KBytes
[ 5] 4.00-5.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 5.00-6.00 sec 12.0 MBytes 101 Mbits/sec 0 405 KBytes
[ 5] 6.00-7.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 7.00-8.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 8.00-9.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 9.00-10.00 sec 12.0 MBytes 101 Mbits/sec 0 405 KBytes
[ 5] 10.00-11.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 11.00-12.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 12.00-13.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 13.00-14.00 sec 12.0 MBytes 101 Mbits/sec 0 405 KBytes
[ 5] 14.00-15.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 15.00-16.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 16.00-17.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 17.00-18.00 sec 12.0 MBytes 101 Mbits/sec 0 405 KBytes
[ 5] 18.00-19.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 19.00-20.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 20.00-21.00 sec 11.2 MBytes 93.6 Mbits/sec 0 405 KBytes
[ 5] 21.00-22.00 sec 12.0 MBytes 101 Mbits/sec 0 405 KBytes
[ 5] 22.00-22.29 sec 3.43 MBytes 101 Mbits/sec 0 405 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-22.29 sec 501 MBytes 188 Mbits/sec 0 sender
[ 5] 0.00-22.34 sec 498 MBytes 187 Mbits/sec receiver
CPU Utilization: local/sender 1.2% (0.0%u/1.2%s), remote/receiver 0.1% (0.0%u/0.1%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
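As a quick sanity check on the summary above, the Bitrate column is just Transfer divided by the interval, with iperf3 using binary MBytes (2^20 bytes) and decimal Mbits. Since the displayed 501 MBytes is itself rounded, the result is only approximate:

```python
# Values taken from the sender summary line above.
transfer_mbytes = 501   # MBytes, binary (2**20 bytes each)
interval_sec = 22.29

bits = transfer_mbytes * 2**20 * 8
bitrate_mbps = bits / interval_sec / 1e6  # decimal Mbits/sec

print(int(bitrate_mbps))  # 188, matching the reported 188 Mbits/sec
```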
-
@cncal, what is the iperf3 version you are using ( In general, the difference between the sender and the receiver is because some of the sent messages may still be on the way to the receiver when the test ends. The sender measures the data that is buffered for sending, not the data actually transferred. I assume that in your case the output is from the client and that the client is the sender. You can see that in the first second 258 MBytes were "sent", although the throughput is about 11-12 MBytes/sec. That means that some of the sent data is buffered and not actually delivered yet. Data may also be buffered in different components on the way (switches, routers, etc.). What I don't understand is the "22.00-22.29 sec 3.43 MBytes" interval. If the test was for 22 seconds (
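The buffering effect described above can be demonstrated with a short sketch (hypothetical, not iperf3 code): a send() call returns as soon as the kernel has accepted the bytes into the socket buffer, so a sender-side byte counter can run ahead of what the receiver has actually read.

```python
import socket

# A connected socket pair stands in for the iperf3 client and server.
a, b = socket.socketpair()

sent = 0
received = 0

# The "sender" counts bytes handed to the kernel, exactly as the
# sender-side Transfer column does.
sent += a.send(b"x" * 4096)

# At this instant the receiver has read nothing: the 4096 bytes sit in
# kernel buffers, so the two counters disagree.
in_flight_before = sent - received

# Once the receiver drains the socket, the counters converge.
received += len(b.recv(4096))
in_flight_after = sent - received

print(in_flight_before, in_flight_after)  # prints: 4096 0
a.close()
b.close()
```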
-
@cncal, thanks for clarifying. I now understand the "22.00-22.29 sec 3.43 MBytes" interval: the client terminated per number of bytes sent (-n) and not per test time. With this clarification, the answer is as I explained above: the client ends the test after "buffering" (sending) 500 MBytes of data, and sends a control message to the server indicating that the test ended. At this point, about 3 MBytes still had not reached the server (e.g. because of network delays), and this is the reason for the difference.
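The summary numbers in the original output are consistent with this explanation; a quick check using the two totals reported above:

```python
# Totals from the summary lines (MBytes).
sender_total = 501    # bytes the client handed to the kernel
receiver_total = 498  # bytes the server actually read

# Bytes still in flight (sender's kernel buffers, NICs, switches,
# routers) when the client declared the test over.
in_flight = sender_total - receiver_total
print(f"{in_flight} MBytes unaccounted for at test end")  # 3 MBytes
```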
-
Currently there is no option that allows the receiver to wait until all packets are received. In the past I submitted PR #1071 that allows this, but it was probably not merged into mainline because it is too complex and/or requires too much testing. (By the way, note that version 3.7 that you are using is quite old.)
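iperf3 itself offers no such option, but the general technique a receiver can use to count every byte is an orderly TCP shutdown: the sender half-closes the connection and the receiver reads until EOF, which only happens after all in-flight data has been delivered. A minimal sketch (hypothetical, not how iperf3's control channel works):

```python
import socket
import threading

HOST = "127.0.0.1"
TOTAL = 1_000_000  # bytes the "client" will send

srv = socket.socket()
srv.bind((HOST, 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

received = 0

def receiver():
    global received
    conn, _ = srv.accept()
    # Read until EOF: recv() returns b"" only after the peer's FIN has
    # arrived, i.e. after every in-flight byte has been delivered.
    while chunk := conn.recv(65536):
        received += len(chunk)
    conn.close()

t = threading.Thread(target=receiver)
t.start()

cli = socket.create_connection((HOST, port))
cli.sendall(b"\0" * TOTAL)
cli.shutdown(socket.SHUT_WR)  # half-close: signals "no more data"
t.join()
cli.close()
srv.close()

print(received)  # 1000000: receiver's count matches the sender's exactly
```

With this handshake the receiver's total can never be short, because it does not stop counting until the connection itself says there is nothing left.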
-
In my case my command line was even more bare-bones, just:

I also wondered about the sender/receiver differences:

@davidBar-On wrote: