Re: [L4s-discuss] Configuring a L4S test plant
Sebastian Moeller <moeller0@gmx.de> Fri, 06 October 2023 18:13 UTC
Date: Fri, 06 Oct 2023 20:13:45 +0200
From: Sebastian Moeller <moeller0@gmx.de>
To: Matteo Guarna S303434 <matteo.guarna@studenti.polito.it>, l4s-discuss@ietf.org
User-Agent: K-9 Mail for Android
In-Reply-To: <6b41901e9971a496ed2243a20cc6f022@studenti.polito.it>
References: <7952e11516cc7b25484b53ae1380d88c@studenti.polito.it> <230D9924-C32F-4DE8-8BBD-F3D35D94B05B@gmx.de> <b82b81e36e168f6e627798d8cd588db8@studenti.polito.it> <A3BEF415-8574-4854-93D5-7CD1DB7B60F5@gmx.de> <CADVnQynOTd3FsHRk-BG5BTTmEYaM3JdnPj5qJQ9BHOqY_SPwsQ@mail.gmail.com> <727ed5bc3df58dff2e23115a8165b9b2@studenti.polito.it> <AM9PR07MB73139697A34A0A74DBF29300B9CAA@AM9PR07MB7313.eurprd07.prod.outlook.com> <7366EED7-BE11-464A-B2FE-A12129F4E370@gmx.de> <6b41901e9971a496ed2243a20cc6f022@studenti.polito.it>
Message-ID: <3659F631-9CEC-4F9D-84A4-85084ED7D4BD@gmx.de>
Archived-At: <https://mailarchive.ietf.org/arch/msg/l4s-discuss/DjdF7_QAZ8TLrjVL-XT6LAlfMmE>
Subject: Re: [L4s-discuss] Configuring a L4S test plant
Hi Matteo,

On 6 October 2023 14:31:28 CEST, Matteo Guarna S303434 <matteo.guarna@studenti.polito.it> wrote:
>Hi Koen, Hi Sebastian, thank you for reaching out to me
>
>On 2023-10-06 09:09 Sebastian Moeller wrote:
>> Hi Koen,
>>
>>> On Oct 5, 2023, at 21:43, Koen De Schepper (Nokia) <koen.de_schepper=40nokia-bell-labs.com@dmarc.ietf.org> wrote:
>>>
>>> Hi Matteo,
>>>
>>> Does your topology have a base RTT configured on your bottleneck, or is it just a LAN type of idle ping (like a few µs)? The Prague version we have on the testing branch does not have the "below minimum window of 2" requirement implemented. This means that every Prague flow will have at least 2 packets in flight per RTT, with the result that Prague flows on such low RTTs get a very high rate and become unresponsive when they should reduce their window below 2 packets per X µs. As a result the DualPI2 AQM will go into overload protection mode and will drop both Prague and Classic traffic,
>>
>> [SM] Looking at the Prague trace, I do not seem to see dupACKs or other obvious signs of drops (I might be doing it wrong though). In the Cubic trace I do see dupACKs, so there drops do happen...
>>
> [MG] Apologies for the confusion I might have created: in fact I have a server configured in bridge mode in the role of an impairment, introducing 15 ms of delay in each direction. It is placed after the bottleneck, between the router and the clients. I am providing an attachment with an updated schema of my test plant, including the impairment.
>
>>> where Prague with only 2 packets in flight gets a high chance of losing all packets in flight and goes into RTO (I believe 200 ms timeouts),
>>
>> [SM] In the Prague trace there are no signs of 200 ms timeouts...
>>
>>> causing a very low average rate. Cubic has many packets in flight and in a buffer, and usually never gets RTO events, so it can keep its more normal rate.
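[Editor's note: Koen's "minimum window of 2" point can be made concrete with a back-of-the-envelope calculation; this is an illustration, not from the thread. With at least 2 packets in flight per RTT, the lowest rate a flow can reach is roughly 2 * MSS * 8 / RTT, which at LAN-scale RTTs sits far above a 100 Mbps bottleneck:]

```shell
# Minimum achievable rate under a 2-packet in-flight floor:
# rate ~= 2 * MSS * 8 / RTT. The MSS of 1448 bytes is an assumed
# typical value, not taken from the thread.
mss_bytes=1448
for rtt_us in 100 300 15000 30000; do
    # RTT is in microseconds, so 2*m*8/r yields bits/us == Mbit/s
    rate_mbps=$(awk -v m="$mss_bytes" -v r="$rtt_us" \
        'BEGIN { printf "%.1f", 2 * m * 8 / r }')
    echo "RTT ${rtt_us} us -> rate floor ~ ${rate_mbps} Mbit/s"
done
```

[At a few hundred µs of RTT the floor is tens to hundreds of Mbit/s, so a Prague flow cannot back off far enough; at 15-30 ms it drops well below 2 Mbit/s and the problem disappears.]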
>>>
>>> If you add a netem latency on the middle router of 5 to 20 ms (which is typical in the real internet today) you will see that this issue does not appear (at least not at 100 Mbps anymore).
>
> [MG] As I was saying previously, I did; sorry for not having pointed that out earlier. On a more technical side note though: I used an external machine as the impairment because I could not introduce delay directly on the routers' interfaces. In fact I use "tc qdisc" to configure the dualpi2 on both of the router's NICs and I cannot add a second rule (e.g., netem) at the same time, but only replace the dualpi2 (which I obviously should not). Do you have some tips to have both the dualpi2 and the netem delay running at the same time?

[SM] Assuming your router is powerful enough, use an IFB attached to the ingress of each interface and instantiate netem on those. Or use a veth pair to achieve the same (converting an ingress interface into a virtual egress so you can attach qdiscs). Whether that works well I cannot tell, as I have never used netem.

>
>>
>> [SM] Speaking of 20 ms, is the RTT-de-bias mode on by default in Prague, and if so what value is the reference RTT set to?
>>
>>>
>>> In our latest IETF interop we demonstrated the same problem on our PON (FttH) emulation setup, showing that this 'below minimum window of 2 packets' requirement is important to implement. But also in this case we assumed that the server is on an edge cloud, near to the access network, to have such low base latencies (of 300 µs RTT).
>>
>> [SM] QUESTION: since most deployed PONs seem to use some sort of request/grant mechanism (e.g. DBA in GPON), is 0.3 ms actually an achievable RTT, or would that not be closer to 1-2 ms?
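[Editor's note: the IFB suggestion above could be sketched as follows. This is a minimal, untested illustration; the interface name eth0, the 15 ms delay value, and the use of a matchall filter are assumptions, not from the thread:]

```shell
# Sketch: delay ingress traffic with netem on an IFB device, while
# dualpi2 keeps the egress root of the real interface.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Redirect everything arriving on eth0 through ifb0...
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0

# ...and delay it there; the qdisc on eth0's root is left untouched.
tc qdisc add dev ifb0 root netem delay 15ms
tc qdisc replace dev eth0 root dualpi2
```

[This way netem (on ifb0, applied to ingress) and dualpi2 (on the egress root) coexist on the same interface without one replacing the other.]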
>>
>>> As a result we have implemented a new Prague update that can go to rates of below 2 packets in flight per RTT, and will be controllable down to 100 kbps on any RTT below 25 ms (even 0 µs; at that low rate it theoretically gets 91% marking for L4S and 20% loss for Classic).
>>> So as a second test case, you could try keeping that really low base RTT and use the Prague kernel on the "ratebase" branch. Note this is an alpha version, not fully regression tested by us, but since you are facing the problem, you can use it. If you find any issues, contact me or Chia-Yu in CC.
>>>
> [MG] I might try that, thank you for the information. Still, I mean to operate at higher RTTs (30 ms at the very least), so that is the scenario that is of more interest to me.
>
>Thank you for your time,
>Matteo
>
>>> Success,
>>> Koen.
>>>
>>> -----Original Message-----
>>> From: Matteo Guarna S303434 <matteo.guarna@studenti.polito.it>
>>> Sent: Thursday, October 5, 2023 6:22 PM
>>> To: l4s-discuss@ietf.org
>>> Subject: Re: [L4s-discuss] Configuring a L4S test plant
>>>
>>> Hi Neal,
>>>
>>> thank you for reaching out to me. I executed the script on both the Prague and the Cubic server as you asked.
>>>
>>> The Prague server has IP address 192.168.202.21 and transmits data towards 192.168.201.17. The Cubic server has IP address 192.168.202.22 and transmits data towards 192.168.201.18.
>>>
>>> All connections lasted for 20 seconds and were established via iperf3 in reverse mode.
>>>
>>> Please forgive me for having the dates on the two machines out of sync (the flows had in fact started at the same time):
>>> - the transmission timestamp on the Prague server begins at Thu Oct 5 2023, 05:22:50 PM CEST
>>> - the transmission timestamp on the Cubic server begins at Fri Sep 29 2023, 01:37:53 CEST
>>>
>>> I am providing you with the captures as attachments to this mail: I named them with the "prague" and "cubic" suffixes after the servers where the capture took place.
>>>
>>> If you need more information please don't hesitate to contact me.
>>>
>>> Best regards and thank you in advance,
>>>
>>> Matteo Guarna
>>>
>>> On 2023-10-04 17:18 Neal Cardwell wrote:
>>>> Thanks for the report, Matteo.
>>>>
>>>> To help debug this, could you please gather and share the following
>>>> instrumentation during one of your tests? This would need to be
>>>> collected on both data senders (servers), as root:
>>>>
>>>> (while true; do date; ss -tenmoi; sleep 1; done) > /root/ss.txt &
>>>> tcpdump -w /root/dump.pcap -n -s 100 -c 1000000 host $REMOTE_HOST -i $INTERFACE &
>>>> nstat -n; (while true; do date; nstat; sleep 1; done) > /root/nstat.txt &
>>>>
>>>> The data should probably only be needed for the time interval starting
>>>> from before the test and ending when the flows reach steady state,
>>>> which may be 10-20 secs into the test.
>>>>
>>>> thanks,
>>>> neal
>>>>
>>>> On Wed, Oct 4, 2023 at 6:03 AM Sebastian Moeller <moeller0@gmx.de> wrote:
>>>>
>>>>> Hi Matteo,
>>>>>
>>>>>> On Oct 4, 2023, at 11:48, Matteo Guarna S303434 <matteo.guarna@studenti.polito.it> wrote:
>>>>>>
>>>>>> Hi Sebastian, and thank you for your answer
>>>>>>
>>>>>> On 2023-10-03 16:39 Sebastian Moeller wrote:
>>>>>>> Hi Matteo.
>>>>>>>> On Oct 3, 2023, at 15:42, Matteo Guarna S303434 <matteo.guarna@studenti.polito.it> wrote:
>>>>>>>> Greetings everyone,
>>>>>>>> I hope the question isn't too off-topic; please forgive me in advance if it is.
>>>>>>>> I am still trying to perform some fairness measurements with both L4S and classic flows, although now on a physical test plant instead of a virtualized one. I'm relying on the L4STeam GitHub project for the deployment of the L4S architecture and I am looking for someone who's familiar with the project and might be willing to help me: in fact I seem not to be able to achieve the correct configuration.
>>>>>>>> My setup is very simple: I have four servers (two senders and two receivers) exchanging two traffic flows through one server acting as a router. One client-server pair uses Prague as CC, while the other uses Cubic. All servers run the patched kernel provided in the https://github.com/L4STeam/linux/ repository branch.
>>>>>>>> If I trigger congestion on the router by generating both the Prague and the Cubic flows (let's say the flows measure 100 Mbit/s each, and they come through an L2 switch onto the same router input interface on a 1 Gb Ethernet link; only a 100M link, though, is in place on the output interface towards the receivers), I see the L4S flow having higher delay, higher jitter and a smaller (and more variable) bandwidth share. The Prague share is 1/4 of the Cubic share. I am sending an attachment with a graphical representation of the scenario described here.
>>>>>>>> I configured my L4S endpoints as follows:
>>>>>>>> - I set the CC to TCP Prague (sysctl -w net.ipv4.tcp_congestion_control=prague)
>>>>>>>> - I set AccEcn, even if it's apparently not necessary (sysctl -w net.ipv4.tcp_ecn=3)
>>>>>>>> - I disabled the required offloading capabilities on the endpoints (sudo ethtool -K $NETIF tso off gso off gro off lro off)
>>>>>>> [SM] I think you need to do the same on the router... or with your topology, with Prague and Cubic running over separate end-points, especially on the router itself. Side note: sch_cake grew a split-gso mode to automatically handle this issue, because it can be a bit of a whack-a-mole problem to make these configs stick (and in the case of cake the idea was to make deployment easy even for non-experts).
>>>>>>
>>>>>> [MG] I tried as you suggested and unfortunately the situation remains unchanged.
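[Editor's note: for reference, the endpoint configuration listed above can be collected into one script. This merely restates the commands from the thread; the NETIF value is a placeholder:]

```shell
# Endpoint (sender) setup as described in the thread.
# NETIF is an assumed interface name, not from the thread.
NETIF=eth0

# TCP Prague as congestion control, with Accurate ECN negotiation
sysctl -w net.ipv4.tcp_congestion_control=prague
sysctl -w net.ipv4.tcp_ecn=3

# Disable segmentation/aggregation offloads so pacing is not
# defeated by large bursts
ethtool -K "$NETIF" tso off gso off gro off lro off

# Per-flow pacing qdisc on the sender
tc qdisc replace dev "$NETIF" root fq
```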
>>>>>
>>>>> [SM2] Hmmm, that would indicate that it might not be "lumpiness" of inputs into the router. I guess I would take packet captures on both interfaces of the router to see whether there is any unexpected distribution of packets between input and output. Also worth looking at is the CPU usage on the router... we occasionally run into issues with aggressive power/voltage/frequency scaling, where a CPU might take much longer to wake up than expected; the L-queue with its rather low (IMHO too low) reference delay of 1 ms would be especially sensitive to such issues. Also, does your 100 Mbps interface support BQL?
>>>>>
>>>>>> Still, I think I missed the point regarding sch_cake; could you explain again what it is and if and how it could be useful?
>>>>>
>>>>> [SM2] I am talking about Linux's cake qdisc. Just as an example, cake does not support special treatment of ECT(1) but implements RFC 3168 ECN signaling for both ECT(0) and ECT(1). So for your experiments it might not be that useful (but for the fun of it, maybe try it as an alternative to DualQ). I just mentioned it as an example of a qdisc that opted for not simply disabling all offloads. After all, these offloads are quite useful, as they can considerably reduce the CPU cost of networking. (GSO/GRO work by amortizing the somewhat fixed per-packet cost of the Linux network stack over multiple Ethernet frames; as long as the increased delay inherent in such batching approaches is acceptable, this can help a lot.)
>>>>>
>>>>>> Apologies, I guess I perfectly fit the definition of "non-experts". I tried to look it up on the internet but I struggled to find any clarification.
>>>>>
>>>>> [SM2] Sorry, my bad, I should have been clearer that I was talking about a qdisc here; see "man tc-cake" on a sufficiently modern Linux system. The source code file is called sch_cake.c (see e.g.
>>>>> https://elixir.bootlin.com/linux/latest/source/net/sched/sch_cake.c)
>>>>>
>>>>>>
>>>>>>>> - I configured the fair queue on the endpoints (sudo tc qdisc replace dev $NETIF root fq)
>>>>>>>> I configured my router as follows:
>>>>>>>> - I enabled forwarding through these interfaces to obtain the routing capabilities (sudo sysctl -w net.ipv4.ip_forward=1)
>>>>>>>> - I set the dualpi2 on both interfaces (sudo tc qdisc replace dev $NETIF root dualpi2)
>>>>>>>> I then applied the fair queue and disabled the offloading capabilities on both of my classic endpoints to ensure that the classic and L4S flows act as fairly as possible, but to no avail (even without these precautions the results remain roughly the same).
>>>>>>> [SM] Again, I think with your topology offloads at the endpoints should not have much influence, but at the router they well might. If that turns out to help, this might be explained by Prague's (and/or DualQ's L-queue) considerably higher sensitivity to bursty traffic compared to classic traffic and queue.
>>>>>>>> I am sure I am missing some important details in the setup, and I would really appreciate some help.
>>>>>>> [SM] To me this looks rather straightforward, and I probably would try something similar, but I did not actually try it in practice.
>>>>>>> Regards & good luck
>>>>>>> Sebastian
>>>>>>
>>>>>> [MG] Thanks in advance for your help, and if you have other tips, or if you (or anyone else for that matter) are by any chance aware of a paper or project using the prague branch of the L4STeam repository, that might indeed be really helpful too.
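[Editor's note: likewise, the router-side steps described above, including the offload point raised in the thread, can be sketched in one place. Interface names are placeholders, and looping over both NICs is an assumption for illustration:]

```shell
# Router setup as described in the thread; interface names are assumed.
sysctl -w net.ipv4.ip_forward=1

for NETIF in enp1s0 enp2s0; do
    # Disable offloads on the router too, as suggested in the thread
    ethtool -K "$NETIF" tso off gso off gro off lro off
    # DualPI2 AQM on each egress interface
    tc qdisc replace dev "$NETIF" root dualpi2
done
```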
>>>>>
>>>>> [SM] I am not the best/most objective person to quiz here, as I consider L4S in general too little too late, and neither TCP Prague nor the DualQ AQM worth deploying in their current state (but that is why I consider your effort researching these admirable; both IMHO direly need more research).
>>>>>
>>>>> I would always try to run the same tests over a bottleneck using an fq scheduler, be it the all-in-one cake or fq_codel. fq_codel can actually be configured to treat ECT(1) more in line with what TCP Prague desires, so that might well be a decent starting point for alternative measurements...
>>>>>
>>>>> Regards
>>>>> Sebastian
>>>>>
>>>>>>
>>>>>> My best regards to you and the community,
>>>>>> Matteo
>>>>>>
>>>>>>>> Regards,
>>>>>>>> Matteo
>>>>>>>> P.S.
>>>>>>>> I just want to point out that by looking at the packet traces everything seems fine: Prague carries ECN=1, the dualpi2 marks packets with ECN=3, the AccEcn control signals on the ACE fields are coherent, and no losses occur in the Prague flow, while they do happen in the Cubic flow. It looks like Prague is underperforming for whatever reason. Furthermore, if I switch back to two Cubic flows I measure perfect share, equal delay and equal jitter, so it looks to me like there are no physical impairments on the testbed. <testplant_issue.pdf>
>>>>>>>> --
>>>>>>>> L4s-discuss mailing list
>>>>>>>> L4s-discuss@ietf.org
>>>>>>>> https://www.ietf.org/mailman/listinfo/l4s-discuss

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.