Re: [L4s-discuss] Irregularities in results from L4S testing
rjmcmahon <rjmcmahon@rjmcmahon.com> Wed, 06 September 2023 02:48 UTC
Date: Tue, 05 Sep 2023 19:48:40 -0700
From: rjmcmahon <rjmcmahon@rjmcmahon.com>
To: Matteo Guarna S303434 <matteo.guarna@studenti.polito.it>
Cc: l4s-discuss@ietf.org
In-Reply-To: <8eea45b1cd4862fa731babeab073fc74@studenti.polito.it>
References: <181d20c79294d87ca7e3c4a398457cb2@studenti.polito.it> <6460de788e810fc720d196304e9cd228@studenti.polito.it> <8eea45b1cd4862fa731babeab073fc74@studenti.polito.it>
Message-ID: <6773d3a37a68c7142907e364ce9881a2@rjmcmahon.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/l4s-discuss/lomzfeL9WqePhsbJTmtDmjDCnio>
Off topic, but iperf 2 provides latency information and queue depth (inP) via Little's law; I find capacity information alone insufficient. One can see bloat occur on both client and server at the 9-10 sec interval below. (Use --histograms for the full distributions rather than the central limit theorem (CLT) averaging.)

[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.32 --fq-rate 100m -i 1 -e --trip-times --fq-rate-step 100m -t 15
------------------------------------------------------------
Client connecting to 192.168.1.32, TCP port 5001 with pid 38101 (1/0 flows/load)
Write buffer size: 131072 Byte
fair-queue socket pacing set to 100 Mbit/s (stepping rate by 100 Mbit/s)
TCP congestion control using cubic
TOS set to 0x0 (Nagle on)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[  1] local 192.168.1.103%enp4s0 port 57908 connected with 192.168.1.32 port 5001 (prefetch=16384) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/183) (ct=0.25 ms) on 2023-09-05 19:43:15.600 (PDT)
[ ID] Interval         Transfer     Bandwidth      Write/Err  Rtry  Cwnd/RTT(var)         fq-rate        NetPwr
[  1] 0.00-1.00 sec    12.1 MBytes  102 Mbits/sec  97/0       0     419K/498(150) us      100 Mbit/sec    25530
[  1] 1.00-2.00 sec    23.6 MBytes  198 Mbits/sec  189/0      0     419K/621(620) us      200 Mbit/sec    39891
[  1] 2.00-3.00 sec    35.5 MBytes  298 Mbits/sec  284/0      0     419K/648(436) us      300 Mbit/sec    57445
[  1] 3.00-4.00 sec    47.6 MBytes  400 Mbits/sec  381/0      0     419K/477(141) us      400 Mbit/sec   104693
[  1] 4.00-5.00 sec    59.5 MBytes  499 Mbits/sec  476/0      0     419K/590(241) us      500 Mbit/sec   105746
[  1] 5.00-6.00 sec    71.4 MBytes  599 Mbits/sec  571/0      0     419K/548(123) us      600 Mbit/sec   136573
[  1] 6.00-7.00 sec    83.4 MBytes  699 Mbits/sec  667/0      0     419K/560(199) us      700 Mbit/sec   156116
[  1] 7.00-8.00 sec    95.4 MBytes  800 Mbits/sec  763/0      0     419K/465(134) us      800 Mbit/sec   215071
[  1] 8.00-9.00 sec     107 MBytes  899 Mbits/sec  857/0      0     419K/376(153) us      900 Mbit/sec   298747
[  1] 9.00-10.00 sec    114 MBytes  952 Mbits/sec  908/0      65    1539K/13165(122) us   1.00 Gbit/sec    9040
[  1] 10.00-11.00 sec   112 MBytes  944 Mbits/sec  900/0      0     1681K/14367(120) us   1.10 Gbit/sec    8211
[  1] 11.00-12.00 sec   112 MBytes  942 Mbits/sec  898/0      0     1791K/15265(65) us    1.20 Gbit/sec    7711
[  1] 12.00-13.00 sec   112 MBytes  943 Mbits/sec  899/0      0     1879K/16038(133) us   1.30 Gbit/sec    7347
[  1] 13.00-14.00 sec   112 MBytes  937 Mbits/sec  894/0      1     1361K/11505(122) us   1.40 Gbit/sec   10185
[  1] 14.00-15.00 sec   112 MBytes  942 Mbits/sec  898/0      0     1456K/12278(94) us    1.50 Gbit/sec    9586
[  1] 0.00-15.02 sec   1.18 GBytes  676 Mbits/sec  9683/0     66    1457K/12545(108) us   0.000 bit/sec    6736

root@raspberrypi:/usr/local/src/iperf2-code# iperf -s -i 1 -e
------------------------------------------------------------
Server listening on TCP port 5001 with pid 5138
Read buffer size: 128 KByte (Dist bin width=16.0 KByte)
TCP congestion control default cubic
TCP window size: 128 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.32%eth0 port 5001 connected with 192.168.1.103 port 57908 (trip-times) (sock=4) (peer 2.1.10-master) (icwnd/mss/irtt=14/1448/157) on 2023-09-05 19:43:15.601 (PDT)
[ ID] Interval         Transfer     Bandwidth      Burst Latency avg/min/max/stdev (cnt/size)  inP        NetPwr  Reads=Dist
[  1] 0.00-1.00 sec    11.9 MBytes  100 Mbits/sec  24.929/9.939/29.771/3.530 ms (95/131703)    314 KByte     502  2352=2300:45:3:4:0:0:0:0
[  1] 1.00-2.00 sec    23.6 MBytes  198 Mbits/sec  12.855/8.090/26.138/2.290 ms (188/131414)   306 KByte    1922  6269=6261:8:0:0:0:0:0:0
[  1] 2.00-3.00 sec    35.5 MBytes  298 Mbits/sec  8.420/5.288/13.054/1.076 ms (285/130718)    304 KByte    4424  11051=11045:6:0:0:0:0:0:0
[  1] 3.00-4.00 sec    47.6 MBytes  399 Mbits/sec  6.262/3.938/8.724/0.766 ms (380/131219)     305 KByte    7963  15944=15942:2:0:0:0:0:0:0
[  1] 4.00-5.00 sec    59.5 MBytes  499 Mbits/sec  4.995/3.115/6.492/0.600 ms (476/131061)     304 KByte   12490  30047=30046:1:0:0:0:0:0:0
[  1] 5.00-6.00 sec    71.4 MBytes  599 Mbits/sec  4.240/3.438/5.199/0.251 ms (571/131164)     310 KByte   17664  22126=21624:2:3:497:0:0:0:0
[  1] 6.00-7.00 sec    83.4 MBytes  699 Mbits/sec  3.714/3.018/4.399/0.208 ms (668/130881)     316 KByte   23540  2002=668:0:0:1334:0:0:0:0
[  1] 7.00-8.00 sec    95.3 MBytes  800 Mbits/sec  3.143/2.520/3.888/0.185 ms (762/131191)     307 KByte   31806  50364=50349:0:0:15:0:0:0:0
[  1] 8.00-9.00 sec     107 MBytes  900 Mbits/sec  2.783/2.223/3.222/0.164 ms (858/131050)     305 KByte   40398  57331=57330:1:0:0:0:0:0:0
[  1] 9.00-10.00 sec    112 MBytes  941 Mbits/sec  12.503/2.421/34.417/4.081 ms (898/131036)   1.42 MByte   9411  59761=59732:3:2:3:1:2:1:17
[  1] 10.00-11.00 sec   112 MBytes  941 Mbits/sec  14.793/13.931/15.672/0.381 ms (898/131051)  1.66 MByte   7955  62269=62268:1:0:0:0:0:0:0
[  1] 11.00-12.00 sec   112 MBytes  941 Mbits/sec  15.887/15.140/16.612/0.321 ms (898/131053)  1.78 MByte   7408  62362=62361:1:0:0:0:0:0:0
[  1] 12.00-13.00 sec   112 MBytes  941 Mbits/sec  16.753/16.106/17.360/0.267 ms (897/131198)  1.88 MByte   7025  62345=62343:2:0:0:0:0:0:0
[  1] 13.00-14.00 sec   112 MBytes  941 Mbits/sec  17.225/7.698/34.284/2.035 ms (898/131052)   1.92 MByte   6832  61245=61225:3:0:0:0:0:0:17
[  1] 14.00-15.00 sec   112 MBytes  941 Mbits/sec  13.048/12.377/13.694/0.279 ms (898/131054)  1.47 MByte   9020  62314=62314:0:0:0:0:0:0:0
[  1] 0.00-15.01 sec   1.18 GBytes  676 Mbits/sec  10.616/2.223/34.417/5.913 ms (9683/131072)  1.17 MByte   7963  568650=566676:75:8:1853:1:2:1:34

Bob

> Greetings,
>
> I am currently conducting research for my thesis on the L4S suite. As
> part of my research, I have deployed a simple testbed to examine the
> behavior of a TCP-Prague data flow when deployed in conjunction with a
> TCP-Cubic flow on a congested, L4S-capable network link.
>
> For the network configuration and host setup, I have utilized the
> source code available at the following GitHub repository:
>
> https://github.com/L4STeam/linux/tree/testing
>
> While my initial tests have been generally successful, I have
> encountered two specific issues:
>
> 1 - The TCP-Prague and TCP-Cubic flows do not seem to share the
> available bandwidth equitably. In some cases, TCP-Prague appears to
> consume up to two-thirds of the channel's capacity.
>
> 2 - Upon inspecting the packets to assess the correctness of the
> protocol's behavior, I observed that the proportion of TCP-Prague ACK
> packets with the TCP ECE (Explicit Congestion Notification Echo) flag
> is approximately 5/9 of the total ACK packets, while the percentage of
> transmitted packets marked by the queue remains consistently at around
> 8%. While I did not anticipate a strict 1:1 ratio, this result still
> strikes me as peculiar.
>
> Additionally, I noted that all TCP-Prague packets inherently carry the
> CWR (Congestion Window Reduced) flag by default. This appears (if I'm
> not mistaken) to be an intentional implementation choice by the
> developers, as detailed in the source code found at
> https://github.com/L4STeam/linux/blob/testing/net/ipv4/tcp_prague.c ,
> specifically beginning at line 53. I would appreciate confirmation
> regarding this implementation choice and assurance that it does not
> adversely impact compliance with the standard.
>
> I extend my sincere gratitude in advance to anyone who can provide
> insights or assistance with these matters.
>
> Best regards,
>
> Matteo Guarna
>
> P.S. If someone wants more detailed insight into my test setup and my
> results, I tried to ask for help by opening an issue on the
> repository's GitHub, where I provided a more in-depth description of
> my experiment. The conversation is available here:
>
> https://github.com/L4STeam/linux/issues/21
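For reference, the inP column in the server output above is Little's law (L = lambda * W) applied with throughput as the arrival rate and the average burst latency as the time in system. A rough sketch of that arithmetic against two of the server rows (the helper function below is illustrative, not iperf's actual code, and the numbers only approximately match iperf's internal accounting):

```python
# Little's law: occupancy L = arrival rate (lambda) * time in system (W).
# iperf 2's inP column applies this idea with throughput as lambda and
# the average one-way burst latency as W.

def inflight_bytes(throughput_bps: float, latency_s: float) -> float:
    """Approximate queue occupancy in bytes via Little's law."""
    return (throughput_bps / 8.0) * latency_s

# Bloated interval (server row 9.00-10.00 sec): 941 Mbit/s at 12.503 ms.
bloated = inflight_bytes(941e6, 12.503e-3)
# Pre-bloat interval (server row 8.00-9.00 sec): 900 Mbit/s at 2.783 ms.
normal = inflight_bytes(900e6, 2.783e-3)

print(f"9-10s inP ~ {bloated / 2**20:.2f} MByte")  # iperf reported 1.42 MByte
print(f"8-9s  inP ~ {normal / 1024:.0f} KByte")    # iperf reported 305 KByte
```

The roughly 5x jump in occupancy between those two rows is the bloat onset visible at the 9-10 sec interval.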