Re: [L4s-discuss] Irregularities in results from L4S testing
rjmcmahon <rjmcmahon@rjmcmahon.com> Wed, 06 September 2023 05:04 UTC
Date: Tue, 05 Sep 2023 22:04:47 -0700
From: rjmcmahon <rjmcmahon@rjmcmahon.com>
To: rjmcmahon <rjmcmahon=40rjmcmahon.com@dmarc.ietf.org>
Cc: Matteo Guarna S303434 <matteo.guarna@studenti.polito.it>, l4s-discuss@ietf.org
In-Reply-To: <6773d3a37a68c7142907e364ce9881a2@rjmcmahon.com>
References: <181d20c79294d87ca7e3c4a398457cb2@studenti.polito.it> <6460de788e810fc720d196304e9cd228@studenti.polito.it> <8eea45b1cd4862fa731babeab073fc74@studenti.polito.it> <6773d3a37a68c7142907e364ce9881a2@rjmcmahon.com>
Message-ID: <32b2d394ca925cf9ebc6f3d67bf44eb5@rjmcmahon.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/l4s-discuss/0xySPvt92rI-W-B429W5CQsG71A>
Also, if one only wants to monitor a single number, network power can be used. Below are cubic and prague runs with no L4S ECN forwarding plane. Prague seems to outperform cubic with respect to network power because of its slightly lower write-to-read message latency. This is just two RPi4s on the same switch.

root@raspberrypi:/usr/local/src/iperf2-code# iperf -s -i 1 -e
------------------------------------------------------------
Server listening on TCP port 5001 with pid 5418
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP congestion control default cubic
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.32%eth0 port 5001 connected with 192.168.1.33 port 55328 (trip-times) (sock=4/cubic) (peer 2.1.10-master) (icwnd/mss/irtt=14/1448/216) on 2023-09-05 21:57:50.882 (PDT)
[ ID] Interval  Transfer  Bandwidth  Burst Latency avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[ 1] 0.00-1.00 sec 11.9 MBytes 100 Mbits/sec 12.962/9.365/13.121/0.386 ms (95/131682) 161 KByte 965 3406=3405:1:0:0:0:0:0:0
[ 1] 1.00-2.00 sec 23.7 MBytes 199 Mbits/sec 7.093/6.628/12.952/0.533 ms (190/130812) 170 KByte 3504 7118=7114:4:0:0:0:0:0:0
[ 1] 2.00-3.00 sec 35.7 MBytes 299 Mbits/sec 5.385/4.284/7.023/0.191 ms (285/131168) 196 KByte 6942 11437=11431:6:0:0:0:0:0:0
[ 1] 3.00-4.00 sec 47.6 MBytes 400 Mbits/sec 4.752/3.421/5.505/0.357 ms (381/131100) 232 KByte 10510 2520=1512:13:995:0:0:0:0:0
[ 1] 4.00-5.00 sec 59.5 MBytes 499 Mbits/sec 3.048/2.853/5.050/0.181 ms (476/131100) 186 KByte 20475 1782=827:3:11:941:0:0:0:0
[ 1] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 3.965/2.918/4.632/0.417 ms (572/131071) 290 KByte 18910 23200=22675:54:52:419:0:0:0:0
[ 1] 6.00-7.00 sec 83.4 MBytes 699 Mbits/sec 3.453/2.618/4.159/0.262 ms (667/131089) 295 KByte 25319 44522=44521:1:0:0:0:0:0:0
[ 1] 7.00-8.00 sec 95.3 MBytes 800 Mbits/sec 3.000/2.289/3.525/0.218 ms (763/131003) 293 KByte 33318 50682=50680:2:0:0:0:0:0:0
[ 1] 8.00-9.00 sec 107 MBytes 900 Mbits/sec 2.647/2.034/3.088/0.185 ms (858/131061) 291 KByte 42475 56994=56993:1:0:0:0:0:0:0
[ 1] 9.00-10.00 sec 111 MBytes 929 Mbits/sec 2.326/2.019/2.735/0.093 ms (885/131160) 264 KByte 49903 58385=58382:2:1:0:0:0:0:0
[ 1] 10.00-11.00 sec 111 MBytes 929 Mbits/sec 2.338/1.843/2.642/0.084 ms (886/131044) 265 KByte 49658 58611=58609:1:1:0:0:0:0:0
[ 1] 11.00-12.00 sec 111 MBytes 928 Mbits/sec 2.387/2.139/2.495/0.071 ms (885/131142) 271 KByte 48625 58583=58583:0:0:0:0:0:0:0
[ 1] 12.00-13.00 sec 109 MBytes 917 Mbits/sec 2.591/2.000/3.075/0.299 ms (875/131000) 290 KByte 44245 57699=57699:0:0:0:0:0:0:0
[ 1] 13.00-14.00 sec 109 MBytes 912 Mbits/sec 2.728/2.024/3.277/0.280 ms (870/130997) 304 KByte 41772 55068=55013:6:4:9:11:1:0:24
[ 1] 14.00-15.00 sec 109 MBytes 915 Mbits/sec 2.900/2.087/3.540/0.264 ms (872/131143) 324 KByte 39440 35153=34696:1:0:113:107:0:0:236
[ 1] 0.00-15.00 sec 1.17 GBytes 668 Mbits/sec 3.129/1.843/13.121/1.370 ms (9563/131072) 298 KByte 26697 525326=522306:95:1064:1482:118:1:0:260
[ 2] local 192.168.1.32%eth0 port 5001 connected with 192.168.1.33 port 56498 (trip-times) (sock=5/prague) (peer 2.1.10-master) (icwnd/mss/irtt=14/1448/189) on 2023-09-05 21:58:08.514 (PDT)
[ ID] Interval  Transfer  Bandwidth  Burst Latency avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
[ 2] 0.00-1.00 sec 11.9 MBytes 99.9 Mbits/sec 11.870/10.531/12.112/0.154 ms (95/131510) 147 KByte 1052 5240=5240:0:0:0:0:0:0:0
[ 2] 1.00-2.00 sec 23.7 MBytes 198 Mbits/sec 6.602/6.376/11.912/0.514 ms (189/131224) 159 KByte 3757 8896=8896:0:0:0:0:0:0:0
[ 2] 2.00-3.00 sec 35.7 MBytes 300 Mbits/sec 4.707/4.613/6.539/0.123 ms (286/130912) 171 KByte 7955 14637=14637:0:0:0:0:0:0:0
[ 2] 3.00-4.00 sec 47.6 MBytes 400 Mbits/sec 3.772/3.705/4.678/0.062 ms (381/131085) 184 KByte 13240 21490=21486:3:1:0:0:0:0:0
[ 2] 4.00-5.00 sec 59.6 MBytes 500 Mbits/sec 3.274/3.129/3.777/0.054 ms (476/131190) 200 KByte 19072 30380=30376:3:1:0:0:0:0:0
[ 2] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 2.877/2.838/3.342/0.033 ms (572/131042) 210 KByte 26051 38172=38163:8:1:0:0:0:0:0
[ 2] 6.00-7.00 sec 83.4 MBytes 700 Mbits/sec 2.467/2.421/2.867/0.033 ms (667/131111) 211 KByte 35449 37429=36701:727:1:0:0:0:0:0
[ 2] 7.00-8.00 sec 95.3 MBytes 800 Mbits/sec 2.489/2.236/2.940/0.058 ms (763/131007) 243 KByte 40166 39746=38688:1005:49:3:1:0:0:0
[ 2] 8.00-9.00 sec 107 MBytes 900 Mbits/sec 2.203/2.083/2.495/0.027 ms (858/131077) 242 KByte 51060 56449=56424:23:1:0:0:1:0:0
[ 2] 9.00-10.00 sec 112 MBytes 937 Mbits/sec 2.047/1.899/2.242/0.102 ms (894/131054) 234 KByte 57230 58373=58368:3:1:0:0:1:0:0
[ 2] 10.00-11.00 sec 111 MBytes 931 Mbits/sec 2.269/1.841/2.544/0.150 ms (888/131079) 258 KByte 51298 57990=57987:3:0:0:0:0:0:0
[ 2] 11.00-12.00 sec 110 MBytes 925 Mbits/sec 2.355/2.086/2.556/0.062 ms (882/131083) 266 KByte 49090 57727=57726:1:0:0:0:0:0:0
[ 2] 12.00-13.00 sec 110 MBytes 919 Mbits/sec 2.265/2.060/2.517/0.029 ms (877/131046) 254 KByte 50740 57241=57240:1:0:0:0:0:0:0
[ 2] 13.00-14.00 sec 109 MBytes 915 Mbits/sec 2.469/2.032/2.830/0.203 ms (873/131062) 276 KByte 46346 56947=56946:1:0:0:0:0:0:0
[ 2] 14.00-15.00 sec 109 MBytes 917 Mbits/sec 2.490/1.292/3.128/0.283 ms (874/131086) 279 KByte 46013 59386=59382:3:0:1:0:0:0:0
[ 2] 0.00-15.00 sec 1.17 GBytes 669 Mbits/sec 2.719/1.292/12.112/1.210 ms (9578/131072) 302 KByte 30772 600299=598456:1781:55:4:1:2:0:0

root@raspberrypi:/usr/local/src/iperf2-code# iperf -c 192.168.1.32 -i 1 --trip-times --fq-rate-step 100m --tcp-cca cubic -t 15
------------------------------------------------------------
Client connecting to 192.168.1.32, TCP port 5001 with pid 6055 (1/0 flows/load)
Write buffer size: 131072 Byte
fair-queue socket pacing set to 100 Mbit/s (stepping rate by 100 Mbit/s)
TCP congestion control set to cubic using cubic
TOS set to 0x0 (Nagle on)
TCP window size: 119 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.33%eth0 port 55328 connected with 192.168.1.32 port 5001 (prefetch=16384) (trip-times) (sock=3/cubic) (icwnd/mss/irtt=14/1448/254) (ct=0.41 ms) on 2023-09-05 21:57:50.882 (PDT)
[ ID] Interval  Transfer  Bandwidth  Write/Err  Rtry  Cwnd/RTT(var)  fq-rate  NetPwr
[ 1] 0.00-1.00 sec 12.0 MBytes 101 Mbits/sec 96/0 0 56K/309(175) us 100 Mbit/sec 40722
[ 1] 1.00-2.00 sec 23.8 MBytes 199 Mbits/sec 190/0 0 108K/274(47) us 200 Mbit/sec 90889
[ 1] 2.00-3.00 sec 35.6 MBytes 299 Mbits/sec 285/0 0 152K/395(111) us 300 Mbit/sec 94571
[ 1] 3.00-4.00 sec 47.8 MBytes 401 Mbits/sec 382/0 0 210K/415(100) us 400 Mbit/sec 120649
[ 1] 4.00-5.00 sec 59.5 MBytes 499 Mbits/sec 476/0 0 210K/420(113) us 500 Mbit/sec 148548
[ 1] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 572/0 0 257K/354(99) us 600 Mbit/sec 211789
[ 1] 6.00-7.00 sec 83.4 MBytes 699 Mbits/sec 667/0 0 257K/372(116) us 700 Mbit/sec 235014
[ 1] 7.00-8.00 sec 95.4 MBytes 800 Mbits/sec 763/0 0 257K/358(106) us 800 Mbit/sec 279352
[ 1] 8.00-9.00 sec 107 MBytes 900 Mbits/sec 858/0 0 257K/383(134) us 900 Mbit/sec 293629
[ 1] 9.00-10.00 sec 111 MBytes 928 Mbits/sec 885/0 0 257K/1048(97) us 1.00 Gbit/sec 110686
[ 1] 10.00-11.00 sec 111 MBytes 929 Mbits/sec 886/0 0 257K/1027(80) us 1.10 Gbit/sec 113077
[ 1] 11.00-12.00 sec 111 MBytes 929 Mbits/sec 886/0 0 257K/1030(72) us 1.20 Gbit/sec 112747
[ 1] 12.00-13.00 sec 109 MBytes 916 Mbits/sec 874/0 0 257K/1374(67) us 1.30 Gbit/sec 83375
[ 1] 13.00-14.00 sec 109 MBytes 912 Mbits/sec 870/0 0 394K/1474(149) us 1.40 Gbit/sec 77363
[ 1] 14.00-15.00 sec 109 MBytes 914 Mbits/sec 872/0 0 394K/1592(88) us 1.50 Gbit/sec 71793
[ 1] 0.00-15.01 sec 1.17 GBytes 668 Mbits/sec 9563/0 0 394K/1660(205) us 50294

root@raspberrypi:/usr/local/src/iperf2-code# iperf -c 192.168.1.32 -i 1 --trip-times --fq-rate-step 100m --tcp-cca prague -t 15
------------------------------------------------------------
Client connecting to 192.168.1.32, TCP port 5001 with pid 6058 (1/0 flows/load)
Write buffer size: 131072 Byte
fair-queue socket pacing set to 100 Mbit/s (stepping rate by 100 Mbit/s)
TCP congestion control set to prague using prague
TOS set to 0x0 (Nagle on)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.33%eth0 port 56498 connected with 192.168.1.32 port 5001 (prefetch=16384) (trip-times) (sock=3/prague) (icwnd/mss/irtt=14/1448/159) (ct=0.31 ms) on 2023-09-05 21:58:08.514 (PDT)
[ ID] Interval  Transfer  Bandwidth  Write/Err  Rtry  Cwnd/RTT(var)  fq-rate  NetPwr
[ 1] 0.00-1.00 sec 12.0 MBytes 101 Mbits/sec 96/0 0 28K/122(2) us 100 Mbit/sec 103140
[ 1] 1.00-2.00 sec 23.6 MBytes 198 Mbits/sec 189/0 0 67K/179(72) us 200 Mbit/sec 138394
[ 1] 2.00-3.00 sec 35.8 MBytes 300 Mbits/sec 286/0 0 103K/165(37) us 300 Mbit/sec 227191
[ 1] 3.00-4.00 sec 47.6 MBytes 400 Mbits/sec 381/0 0 181K/149(23) us 400 Mbit/sec 335157
[ 1] 4.00-5.00 sec 59.6 MBytes 500 Mbits/sec 477/0 0 236K/161(29) us 500 Mbit/sec 388331
[ 1] 5.00-6.00 sec 71.5 MBytes 600 Mbits/sec 572/0 0 261K/178(29) us 600 Mbit/sec 421198
[ 1] 6.00-7.00 sec 83.4 MBytes 699 Mbits/sec 667/0 0 261K/196(40) us 700 Mbit/sec 446046
[ 1] 7.00-8.00 sec 95.4 MBytes 800 Mbits/sec 763/0 0 349K/213(43) us 800 Mbit/sec 469521
[ 1] 8.00-9.00 sec 107 MBytes 900 Mbits/sec 858/0 0 418K/210(33) us 900 Mbit/sec 535523
[ 1] 9.00-10.00 sec 112 MBytes 937 Mbits/sec 894/0 0 418K/893(66) us 1.00 Gbit/sec 131219
[ 1] 10.00-11.00 sec 111 MBytes 931 Mbits/sec 888/0 0 418K/1001(57) us 1.10 Gbit/sec 116276
[ 1] 11.00-12.00 sec 110 MBytes 925 Mbits/sec 882/0 0 418K/1060(45) us 1.20 Gbit/sec 109062
[ 1] 12.00-13.00 sec 110 MBytes 920 Mbits/sec 877/0 0 418K/1236(66) us 1.30 Gbit/sec 93002
[ 1] 13.00-14.00 sec 109 MBytes 915 Mbits/sec 873/0 0 434K/1442(73) us 1.40 Gbit/sec 79352
[ 1] 14.00-15.00 sec 109 MBytes 916 Mbits/sec 874/0 1 267K/1427(116) us 1.50 Gbit/sec 80278
[ 1] 0.00-15.01 sec 1.17 GBytes 669 Mbits/sec 9578/0 1 267K/1357(84) us 61631

Bob

> Off topic, but iperf 2 provides latency information and queue depth
> (inP) via Little's law. I find capacity information alone insufficient.
>
> One can see bloat occur on both client & server at the 9-10 sec
> interval below. (Use --histograms for full data vs the central limit
> theorem (CLT) averaging.)
>
> [rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.32 --fq-rate 100m -i 1 -e --trip-times --fq-rate-step 100m -t 15
> ------------------------------------------------------------
> Client connecting to 192.168.1.32, TCP port 5001 with pid 38101 (1/0 flows/load)
> Write buffer size: 131072 Byte
> fair-queue socket pacing set to 100 Mbit/s (stepping rate by 100 Mbit/s)
> TCP congestion control using cubic
> TOS set to 0x0 (Nagle on)
> TCP window size: 85.0 KByte (default)
> Event based writes (pending queue watermark at 16384 bytes)
> ------------------------------------------------------------
> [ 1] local 192.168.1.103%enp4s0 port 57908 connected with 192.168.1.32 port 5001 (prefetch=16384) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/183) (ct=0.25 ms) on 2023-09-05 19:43:15.600 (PDT)
> [ ID] Interval  Transfer  Bandwidth  Write/Err  Rtry  Cwnd/RTT(var)  fq-rate  NetPwr
> [ 1] 0.00-1.00 sec 12.1 MBytes 102 Mbits/sec 97/0 0 419K/498(150) us 100 Mbit/sec 25530
> [ 1] 1.00-2.00 sec 23.6 MBytes 198 Mbits/sec 189/0 0 419K/621(620) us 200 Mbit/sec 39891
> [ 1] 2.00-3.00 sec 35.5 MBytes 298 Mbits/sec 284/0 0 419K/648(436) us 300 Mbit/sec 57445
> [ 1] 3.00-4.00 sec 47.6 MBytes 400 Mbits/sec 381/0 0 419K/477(141) us 400 Mbit/sec 104693
> [ 1] 4.00-5.00 sec 59.5 MBytes 499 Mbits/sec 476/0 0 419K/590(241) us 500 Mbit/sec 105746
> [ 1] 5.00-6.00 sec 71.4 MBytes 599 Mbits/sec 571/0 0 419K/548(123) us 600 Mbit/sec 136573
> [ 1] 6.00-7.00 sec 83.4 MBytes 699 Mbits/sec 667/0 0 419K/560(199) us 700 Mbit/sec 156116
> [ 1] 7.00-8.00 sec 95.4 MBytes 800 Mbits/sec 763/0 0 419K/465(134) us 800 Mbit/sec 215071
> [ 1] 8.00-9.00 sec 107 MBytes 899 Mbits/sec 857/0 0 419K/376(153) us 900 Mbit/sec 298747
> [ 1] 9.00-10.00 sec 114 MBytes 952 Mbits/sec 908/0 65 1539K/13165(122) us 1.00 Gbit/sec 9040
> [ 1] 10.00-11.00 sec 112 MBytes 944 Mbits/sec 900/0 0 1681K/14367(120) us 1.10 Gbit/sec 8211
> [ 1] 11.00-12.00 sec 112 MBytes 942 Mbits/sec 898/0 0 1791K/15265(65) us 1.20 Gbit/sec 7711
> [ 1] 12.00-13.00 sec 112 MBytes 943 Mbits/sec 899/0 0 1879K/16038(133) us 1.30 Gbit/sec 7347
> [ 1] 13.00-14.00 sec 112 MBytes 937 Mbits/sec 894/0 1 1361K/11505(122) us 1.40 Gbit/sec 10185
> [ 1] 14.00-15.00 sec 112 MBytes 942 Mbits/sec 898/0 0 1456K/12278(94) us 1.50 Gbit/sec 9586
> [ 1] 0.00-15.02 sec 1.18 GBytes 676 Mbits/sec 9683/0 66 1457K/12545(108) us 0.000 bit/sec 6736
>
> root@raspberrypi:/usr/local/src/iperf2-code# iperf -s -i 1 -e
> ------------------------------------------------------------
> Server listening on TCP port 5001 with pid 5138
> Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
> TCP congestion control default cubic
> TCP window size: 128 KByte (default)
> ------------------------------------------------------------
> [ 1] local 192.168.1.32%eth0 port 5001 connected with 192.168.1.103 port 57908 (trip-times) (sock=4) (peer 2.1.10-master) (icwnd/mss/irtt=14/1448/157) on 2023-09-05 19:43:15.601 (PDT)
> [ ID] Interval  Transfer  Bandwidth  Burst Latency avg/min/max/stdev (cnt/size) inP NetPwr  Reads=Dist
> [ 1] 0.00-1.00 sec 11.9 MBytes 100 Mbits/sec 24.929/9.939/29.771/3.530 ms (95/131703) 314 KByte 502 2352=2300:45:3:4:0:0:0:0
> [ 1] 1.00-2.00 sec 23.6 MBytes 198 Mbits/sec 12.855/8.090/26.138/2.290 ms (188/131414) 306 KByte 1922 6269=6261:8:0:0:0:0:0:0
> [ 1] 2.00-3.00 sec 35.5 MBytes 298 Mbits/sec 8.420/5.288/13.054/1.076 ms (285/130718) 304 KByte 4424 11051=11045:6:0:0:0:0:0:0
> [ 1] 3.00-4.00 sec 47.6 MBytes 399 Mbits/sec 6.262/3.938/8.724/0.766 ms (380/131219) 305 KByte 7963 15944=15942:2:0:0:0:0:0:0
> [ 1] 4.00-5.00 sec 59.5 MBytes 499 Mbits/sec 4.995/3.115/6.492/0.600 ms (476/131061) 304 KByte 12490 30047=30046:1:0:0:0:0:0:0
> [ 1] 5.00-6.00 sec 71.4 MBytes 599 Mbits/sec 4.240/3.438/5.199/0.251 ms (571/131164) 310 KByte 17664 22126=21624:2:3:497:0:0:0:0
> [ 1] 6.00-7.00 sec 83.4 MBytes 699 Mbits/sec 3.714/3.018/4.399/0.208 ms (668/130881) 316 KByte 23540 2002=668:0:0:1334:0:0:0:0
> [ 1] 7.00-8.00 sec 95.3 MBytes 800 Mbits/sec 3.143/2.520/3.888/0.185 ms (762/131191) 307 KByte 31806 50364=50349:0:0:15:0:0:0:0
> [ 1] 8.00-9.00 sec 107 MBytes 900 Mbits/sec 2.783/2.223/3.222/0.164 ms (858/131050) 305 KByte 40398 57331=57330:1:0:0:0:0:0:0
> [ 1] 9.00-10.00 sec 112 MBytes 941 Mbits/sec 12.503/2.421/34.417/4.081 ms (898/131036) 1.42 MByte 9411 59761=59732:3:2:3:1:2:1:17
> [ 1] 10.00-11.00 sec 112 MBytes 941 Mbits/sec 14.793/13.931/15.672/0.381 ms (898/131051) 1.66 MByte 7955 62269=62268:1:0:0:0:0:0:0
> [ 1] 11.00-12.00 sec 112 MBytes 941 Mbits/sec 15.887/15.140/16.612/0.321 ms (898/131053) 1.78 MByte 7408 62362=62361:1:0:0:0:0:0:0
> [ 1] 12.00-13.00 sec 112 MBytes 941 Mbits/sec 16.753/16.106/17.360/0.267 ms (897/131198) 1.88 MByte 7025 62345=62343:2:0:0:0:0:0:0
> [ 1] 13.00-14.00 sec 112 MBytes 941 Mbits/sec 17.225/7.698/34.284/2.035 ms (898/131052) 1.92 MByte 6832 61245=61225:3:0:0:0:0:0:17
> [ 1] 14.00-15.00 sec 112 MBytes 941 Mbits/sec 13.048/12.377/13.694/0.279 ms (898/131054) 1.47 MByte 9020 62314=62314:0:0:0:0:0:0:0
> [ 1] 0.00-15.01 sec 1.18 GBytes 676 Mbits/sec 10.616/2.223/34.417/5.913 ms (9683/131072) 1.17 MByte 7963 568650=566676:75:8:1853:1:2:1:34
>
> Bob
>
>> Greetings,
>>
>> I am currently conducting research for my thesis on the L4S suite. As
>> part of my research, I have deployed a simple testbed to examine the
>> behavior of a TCP-Prague data flow when deployed in conjunction with a
>> TCP-Cubic flow on a congested, L4S-capable network link.
>>
>> For the network configuration and host setup, I have utilized the
>> source code available at the following GitHub repository:
>>
>> https://github.com/L4STeam/linux/tree/testing
>>
>> While my initial tests have been generally successful, I have
>> encountered two specific issues:
>>
>> 1 - The TCP-Prague and TCP-Cubic flows do not seem to share the
>> available bandwidth equitably. In some cases, TCP-Prague appears to
>> consume up to two-thirds of the channel's capacity.
>>
>> 2 - Upon inspecting the packets to assess the correctness of the
>> protocol's behavior, I observed that the proportion of TCP-Prague ACK
>> packets with the TCP ECE (Explicit Congestion Notification Echo) flag
>> is approximately 5/9 of the total ACK packets, while the percentage of
>> transmitted packets marked by the queue remains consistently at around
>> 8%. While I did not anticipate a strict 1:1 ratio, this result still
>> strikes me as peculiar.
>>
>> Additionally, I noted that all TCP-Prague packets carry the CWR
>> (Congestion Window Reduced) flag by default. This appears (if I'm not
>> mistaken) to be an intentional implementation choice by the
>> developers, as detailed in the source code found at
>> https://github.com/L4STeam/linux/blob/testing/net/ipv4/tcp_prague.c ,
>> specifically beginning at line 53. I would appreciate confirmation of
>> this implementation choice and assurance that it does not adversely
>> impact compliance with the standard.
>>
>> I extend my sincere gratitude in advance to anyone who can provide
>> insights or assistance with these matters.
>>
>> Best regards,
>>
>> Matteo Guarna
>>
>>
>> P.s.
>>
>> If someone wants more detailed insight into my test setup and my
>> results, I have tried to ask for help by opening an issue on the
>> repository's GitHub, where I provided a more in-depth description of
>> my experiment. The conversation is available here:
>>
>> https://github.com/L4STeam/linux/issues/21
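For a reader reproducing the derived server columns in the runs above, here is a minimal sketch of how NetPwr (Kleinrock's network power) and inP (Little's law bytes in progress) can be computed from throughput and the reported average write-to-read latency. The exact scaling constant is inferred by fitting the numbers in these runs, not read out of the iperf 2 source, so treat it as an assumption.

```python
# Sketch of the two derived metrics discussed above. Assumptions:
# throughput in bits/s, latency in seconds; the 1e-6 scale factor on
# network power is an inference from the printed columns, not taken
# from the iperf 2 source.

def network_power(throughput_bps: float, latency_s: float) -> float:
    """Network power: throughput divided by delay (here in bytes/s,
    scaled). One number that rewards capacity and penalizes queueing."""
    return (throughput_bps / 8.0) / latency_s * 1e-6

def inflight_bytes(throughput_bps: float, latency_s: float) -> float:
    """Little's law: bytes in the system = arrival rate (bytes/s)
    times average time in system. This is iperf 2's inP column."""
    return (throughput_bps / 8.0) * latency_s

# Cubic's 9.00-10.00 sec bin above: 929 Mbit/s at 2.326 ms avg latency.
print(round(network_power(929e6, 2.326e-3)))           # ~49.9K; column shows 49903
print(round(inflight_bytes(929e6, 2.326e-3) / 1024))   # ~264 KByte, matching inP
```

Plugging in prague's 9.00-10.00 sec bin (937 Mbit/s at 2.047 ms) the same way reproduces its higher network power: lower latency at comparable throughput is exactly what the single number rewards.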