Re: [ippm] [Rpm] Preliminary measurement comparison of "Working Latency" metrics
Dave Taht <dave.taht@gmail.com> Mon, 12 December 2022 02:39 UTC
From: Dave Taht <dave.taht@gmail.com>
Date: Sun, 11 Dec 2022 18:38:59 -0800
Message-ID: <CAA93jw59FY9Y2SwSfC5XjxP1_YvSXWQtofUW325SqHqPpz9Etg@mail.gmail.com>
To: "MORTON JR., AL" <acmorton@att.com>
Cc: Sebastian Moeller <moeller0@gmx.de>, Randall Meyer <rrm@apple.com>, Rpm <rpm@lists.bufferbloat.net>, Will Hawkins <hawkinsw@obs.cr>, IETF IPPM WG <ippm@ietf.org>, Pete Heist <pete@heistp.net>
Adding in the author of irtt. You had a significant timer error rate in your 1 ms testing: modern hardware, particularly VMs, has great difficulty keeping send intervals below about 3 ms. Similarly, the cloud can be highly noisy. I am presently testing a heavily SDN- and virtualization-based networking fabric in a major DC... and the results are rather disturbing. I'd like to repeat your test suite inside that DC.

On Sun, Dec 11, 2022, 11:21 AM MORTON JR., AL <acmorton@att.com> wrote:

> Hi IPPM,
>
> Prior to IETF-115, I shared a series of measurements with the IPPM list.
> We're looking at responsiveness and working latency with various metrics
> and multiple testing utilities. This message continues the discussion with
> new input.
>
> When I first published some measurements, Dave Taht added his assessment
> and included other relevant email lists in the discussion. I'm continuing
> to cross-post to all the lists in this thread.
>
> Dave originally suggested that I try a tool called irtt; I've done that
> now and these are the results.
>
> Bob McMahon: I queued your request to try your iperf2 tool behind the irtt
> measurements. I hope to make some more measurements this week...
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-
>
> We're testing a DOCSIS 3.1 based service with 1 Gbps down, nominally
> 22 Mbps up. I used wired Ethernet connected to the DOCSIS modem's switch.
>
> Dave Taht made his server available for the irtt measurements. I installed
> irtt on a VM in my MacBook, the same VM that runs UDPST. I ran quite a few
> tests to become familiar with irtt, so I'll just summarize the relevant
> ones here.
>
> I ran irtt with its traffic at the maximum allowed on Dave's server. The
> test duration was 10 s, with packets spaced at 1 ms intervals from my VM
> client. This is the complete output:
>
> ./irtt client -i 1ms -l 1250 -d 10s fremont.starlink.taht.net
>
>                          Min     Mean   Median      Max  Stddev
>                          ---     ----   ------      ---  ------
>                 RTT  46.63ms  51.58ms   51.4ms     58ms  1.55ms
>          send delay  20.74ms  25.57ms   25.4ms  32.04ms  1.54ms
>       receive delay   25.8ms  26.01ms  25.96ms  30.48ms   219µs
>
>       IPDV (jitter)   1.03µs   1.15ms   1.02ms   6.87ms   793µs
>           send IPDV    176ns   1.13ms    994µs   6.41ms   776µs
>        receive IPDV      7ns   79.8µs   41.7µs   4.54ms   140µs
>
>      send call time   10.1µs   55.8µs            1.34ms  30.3µs
>         timer error     68ns    431µs            6.49ms   490µs
>   server proc. time    680ns   2.43µs            47.7µs  1.73µs
>
>                 duration: 10.2s (wait 174ms)
>    packets sent/received: 7137/7131 (0.08% loss)
>  server packets received: 7131/7137 (0.08%/0.00% loss up/down)
>      bytes sent/received: 8921250/8913750
>        send/receive rate: 7.14 Mbps / 7.13 Mbps
>            packet length: 1250 bytes
>              timer stats: 2863/10000 (28.63%) missed, 43.10% error
>
> acm@acm-ubuntu1804-1:~/goirtt/irtt$
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>
> irtt supplies lots of info/stats about the RTT distribution. In the
> lightly loaded (7.1 Mbps) scenario above, the RTT range is about 12 ms,
> and the Mean and the Median are approximately the same.
>
> irtt also supplies Inter-Packet Delay Variation (IPDV), showing that
> packets are occasionally delayed by more than the sending interval
> (Max of 6.8 ms).
>
> irtt measurements indicate very low packet loss: no congestion at 7 Mbps.
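
To make the timer-error point concrete (the 1 ms interval, and the "timer stats: 28.63% missed, 43.10% error" line above), here is a minimal Go sketch, not part of irtt and with a hypothetical file name, that measures how far 1 ms ticker wakeups drift on a host. The interval, tick count, and the half-interval "late" threshold are arbitrary choices for illustration. Running it on bare metal and again inside the test VM should make the difference obvious.

// timercheck.go: rough measure of 1 ms ticker wakeup drift on this host.
// A large fraction of late wakeups is the same effect irtt reports as
// missed ticks / timer error at -i 1ms on VMs.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		interval = 1 * time.Millisecond // assumed send interval, like irtt -i 1ms
		count    = 1000                 // number of ticks to sample
	)

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	expected := time.Now().Add(interval)
	var worst, total time.Duration
	late := 0

	for i := 0; i < count; i++ {
		now := <-ticker.C
		drift := now.Sub(expected)
		if drift < 0 {
			drift = -drift
		}
		total += drift
		if drift > worst {
			worst = drift
		}
		if drift > interval/2 { // off by more than half an interval: effectively a missed slot
			late++
		}
		expected = expected.Add(interval)
	}

	fmt.Printf("ticks=%d  mean error=%v  worst=%v  off by >%v: %d (%.1f%%)\n",
		count, total/time.Duration(count), worst, interval/2, late,
		100*float64(late)/float64(count))
}
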
> For the opposite end of the congestion spectrum, I ran irtt with UDPST
> (RFC 9097) running in parallel (and using the Type C search algorithm).
> We pick up a lot more RTT and a wider RTT range in this scenario:
>
> irtt with udpst using Type C search = max load:
>
>                          Min     Mean   Median      Max  Stddev
>                          ---     ----   ------      ---  ------
>                 RTT  47.58ms    118ms  56.53ms  301.6ms 90.28ms
>          send delay  24.05ms  94.85ms  33.38ms  278.5ms 90.26ms
>       receive delay  22.99ms  23.17ms  23.13ms  25.42ms   156µs
>
>       IPDV (jitter)    162ns   1.04ms    733µs   6.36ms  1.02ms
>           send IPDV   3.81µs   1.01ms    697µs   6.24ms  1.02ms
>        receive IPDV     88ns     93µs   49.8µs   1.48ms   145µs
>
>      send call time   4.28µs   39.3µs             903µs  32.4µs
>         timer error     86ns    287µs            6.13ms   214µs
>   server proc. time    670ns   3.59µs            19.3µs  2.26µs
>
>                 duration: 10.9s (wait 904.8ms)
>    packets sent/received: 8305/2408 (71.01% loss)
>  server packets received: 2408/8305 (71.01%/0.00% loss up/down)
>      bytes sent/received: 10381250/3010000
>        send/receive rate: 8.31 Mbps / 2.47 Mbps
>            packet length: 1250 bytes
>              timer stats: 1695/10000 (16.95%) missed, 28.75% error
>
> acm@acm-ubuntu1804-1:~/goirtt/irtt$
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>
> The irtt measurement of RTT range is now about 250 ms in this
> maximally-loaded scenario. One key difference is that the Mean delay is
> now twice the Median with the working load added by UDPST. Congestion is
> evident in the irtt loss measurements (71%); however, the competing
> traffic did not break irtt's process or measurements. UDPST's measurements
> were ~22 Mbps Capacity (steady-state), with high loss and a similar RTT
> range of 251 ms.
>
> Additional tests included load from UDPST at fixed rates: 14 Mbps and
> 20 Mbps. We compare the RTT range for all four conditions in the table
> below:
>
>   Load with irtt      irtt RTT range   UDPST RTT range
>   =====================================================
>   irtt alone               12ms              ---
>   UDPST at 14Mbps          11ms              22ms
>   UDPST at 20Mbps          14ms              46ms
>   UDPST at MaxCap         250ms             251ms
>
> The unexpected result is that the irtt RTT range did not increase with the
> fixed-rate loads, whereas the UDPST RTT range did. We assume that the
> majority of the delay increase occurs in the DOCSIS upstream queue, so
> both test streams should see a similar delay range, as they do at maximum
> load. Perhaps there are some differences owing to the periodic irtt stream
> versus the bursty UDPST stream (both are UDP), but this is speculation.
>
> To check that the test path was operating similarly to the earlier tests,
> we ran a couple of NetworkQuality and UDPST tests as before:
>
> Comparison of NQ -vs and UDPST -A c, same columns as defined earlier.
> Working Latency & Capacity Summary (Upstream only)
> (capacity in Mbps, delays in ms, h and m are RPM categories, High and
> Medium)
>
>   Net Qual                        UDPST (RFC 9097)
>   UpCap  RPM    DelLD  DelMin     UpCap(stable)  RTTmin  RTTrange
>    22    276 m  217ms   11ms           23          28     0-252
>    22    291 m  206ms   11ms          ~22*         28*    0-251*
>
> * UDPST test result with the ~7 Mbps irtt stream present.
>
> We found that the uplink test results are similar to previous tests, and
> that the new irtt results were collected under similar conditions.
>
> In conclusion, irtt provides a clear summary of the RTT distribution.
> Minimum, Mean, Median and Max RTT are useful individually and in
> combinations/comparisons. The irtt measurements compare favorably to those
> collected during IP-Layer Capacity tests with the UDPST utility.
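
The summary above leans on Min/Mean/Median/Max RTT, and irtt reports IPDV (differences between consecutive packets) rather than the PDV (variation above the minimum, RFC 5481) preferred later in this thread. A small Go sketch, with a hypothetical file name and made-up sample values, showing how both views and the summary statistics come out of the same delay series:

// rttstats.go: summary statistics over a set of RTT samples, plus the two
// delay-variation views discussed in the thread: IPDV (consecutive-packet
// differences, what irtt reports) and PDV (variation above the minimum,
// per RFC 5481). Sample values are illustrative, not real measurements.
package main

import (
	"fmt"
	"sort"
	"time"
)

func main() {
	rtt := []time.Duration{ // hypothetical samples
		47 * time.Millisecond, 52 * time.Millisecond, 51 * time.Millisecond,
		58 * time.Millisecond, 49 * time.Millisecond, 120 * time.Millisecond,
	}

	sorted := append([]time.Duration(nil), rtt...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })

	minRTT, maxRTT := sorted[0], sorted[len(sorted)-1]
	var sum time.Duration
	for _, d := range rtt {
		sum += d
	}
	mean := sum / time.Duration(len(rtt))
	median := sorted[len(sorted)/2] // upper-middle element; good enough for a sketch

	fmt.Printf("min=%v mean=%v median=%v max=%v range=%v\n",
		minRTT, mean, median, maxRTT, maxRTT-minRTT)

	for i := 1; i < len(rtt); i++ {
		// IPDV: |delay(i) - delay(i-1)|, in arrival order.
		ipdv := rtt[i] - rtt[i-1]
		if ipdv < 0 {
			ipdv = -ipdv
		}
		// PDV: delay(i) - min delay, i.e. variation above the floor.
		fmt.Printf("pkt %d: IPDV=%v PDV=%v\n", i, ipdv, rtt[i]-minRTT)
	}
}
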
> > comments welcome, > Al > > > -----Original Message----- > > From: ippm <ippm-bounces@ietf.org> On Behalf Of MORTON JR., AL > > Sent: Saturday, November 5, 2022 3:37 PM > > To: Sebastian Moeller <moeller0@gmx.de> > > Cc: Rpm <rpm@lists.bufferbloat.net>; Will Hawkins <hawkinsw@obs.cr>; > > ippm@ietf.org > > Subject: Re: [ippm] [Rpm] Preliminary measurement comparison of "Working > > Latency" metrics > > > > *** Security Advisory: This Message Originated Outside of AT&T ***. > > Reference http://cso.att.com/EmailSecurity/IDSP.html for more > information. > > > > Hi Sebastian, thanks for your comments and information. > > I'll reply to a few key points in this top-post. > > > > Sebastian wrote: > > > [SM] DOCSIS ISPs traditionally provision higher peak rates than > they > > > advertise, with a DOCSIS modem/router with > 1 Gbps LAN capacity (even > via > > > bonded LAG) people in Germany routinely measure TCP/IPv4/HTTP goodput > in the > > > 1050 Mbps range. But typically gigabit ethernet limits the practically > > > achievable throughput somewhat: > > [acm] > > Right, my ISP's CPE has Gig-E ports so I won't see any benefit from > over-rate > > provisioning. > > > > Sebastian wrote: > > ... > > > IPv4/TCP/RFC1323 timestamps payload: 1000 * ((1500 - 12 - 20 - > 20)/(1500 > > + 38 + 4)) = 939.04 Mbps > > ... > > > Speedtests tend to report IPv4/TCP timestamps might be on depending on > the > > OS, > > > but on-line speedtest almost never return the simple average rate over > the > > > measurement duration, but play "clever" tricks to exclude the start-up > phase > > > and to aggregate over multiple flows that invariably ending with > results > > that > > > tend to exceed the hard limits shown above... > > [acm] > > My measurements with the Ookla desktop app on MacOS (shared earlier this > week) > > are very consistent at ~940Mbps Dn-stream, so timestamps calculation > sounds > > right. > > My ISP specifies their service speed at 940Mbps, as though they assume > > subscribers will measure using Ookla or other TCP tool. In fact, our team > > hasn't seen such consistent results from Ookla or any TCP-based test in > the > > 1Gbps range, and this makes me wonder if there might be some > test-recognition > > here. > > > > FYI - UDPST uses a max packet size with is sub-MTU, to avoid > fragmentation > > when various encapsulations are encountered. Also, the definition of > IP-Layer > > Capacity in the various standards (e.g., RFC 9097) includes the bits in > the IP > > header in the total Capacity. > > > > So, instead of: > > > Ethernet payload rate +VLAN: 1000 * ((1500)/(1500 + 38 > + 4)) = > > 972.76 Mbps > > We have > > Ethernet payload rate +VLAN: 1000 * ((1222)/(1222 + 38 + 4)) = > 966.77 > > Mbps > > and why our Maximum IP-Layer Capacity measurements are approx ~967 Mbps > > > > UDPST has an option to use datagrams for the traditional 1500 octet MTU > (-T), > > but a user could cause a lot of fragmentation on links with > encapsulations wit > > this option. > > > > acm wrote: > > > > The comparison of networkQuality and goresponsiveness is somewhat > > confounded > > > > by the need to use the Apple server infrastructure for both these > methods > > ... > > Sebastian wrote: > > > [SM] Puzzled, I thought when comparing the two networkQuality > variants > > > having the same back-end sort of helps be reducing the differences to > the > > > clients? But comparison to UDPST suffers somewhat. > > [acm] > > Yes, I was hoping to match-up the client and server implementations. 
> Instead, > > this might be more of a Client-X and Server-Y interoperability test (and > could > > explain some results?), but that was not to be. > > > > Thanks for your suggested command line for gores. > > > > IIRC one of your early messages, Sebastian, your MacOS indicates that > 12.6 is > > the latest. It's the same for me: my sufficiently powerful MacBook cannot > > upgrade to Ventura for the latest in networkQuality versions. It would > help us > > who are doing some testing if the latest version of networkQuality could > be > > made installable for us, somehow... > > > > thanks again and regards, > > Al > > > > > > > -----Original Message----- > > > From: Sebastian Moeller <moeller0@gmx.de> > > > Sent: Friday, November 4, 2022 3:10 PM > > > To: MORTON JR., AL <acmorton@att.com> > > > Cc: Dave Täht <dave.taht@gmail.com>; rjmcmahon < > rjmcmahon@rjmcmahon.com>; > > Rpm > > > <rpm@lists.bufferbloat.net>; ippm@ietf.org; Will Hawkins < > hawkinsw@obs.cr> > > > Subject: Re: [Rpm] [ippm] Preliminary measurement comparison of > "Working > > > Latency" metrics > > > > > > Hi Al, > > > > > > > > > > On Nov 4, 2022, at 18:14, MORTON JR., AL via Rpm > > <rpm@lists.bufferbloat.net> > > > wrote: > > > > > > > > Hi all, > > > > > > > > I have been working through the threads on misery metrics and > lightweight > > > sensing of bw and buffering, and with those insights and gold-nuggets > in > > mind, > > > I'm iterating through testing the combinations of tools that Dave That > and > > Bob > > > McMahon suggested. > > > > > > > > earlier this week, Dave wrote: > > > >> How does networkQuality compare vs a vs your tool vs a vs > > goresponsiveness? > > > > > > > > goresponsiveness installed flawlessly - very nice instructions and > > getting- > > > started info. > > > > > > > > Comparison of NetQual (networkQuality -vs), UDPST -A c, > > > gores/goresponsiveness > > > > > > > > Working Latency & Capacity Summary on DOCSIS 3.1 access with 1 Gbps > down- > > > stream service > > > > (capacity in Mbps, delays in ms, h and m are RPM categories, High and > > > Medium) > > > > > > > > NetQual UDPST (RFC 9097) gores > > > > DnCap RPM DelLD DelMin DnCap RTTmin RTTrange DnCap > > RPM > > > DelLD > > > > 882 788 m 76ms 8ms 967 28 0-16 127 > > > (1382) 43ms > > > > 892 1036 h 58ms 8 966 27 0-18 128 > > > (2124) 28ms > > > > 887 1304 h 46ms 6 969 27 0-18 130 > > > (1478) 41ms > > > > 885 1008 h 60ms 8 967 28 0-22 127 > > > (1490) 40ms > > > > 894 1383 h 43ms 11 967 28 0-15 133 > > > (2731) 22ms > > > > > > > > NetQual UDPST (RFC 9097) gores > > > > UpCap RPM DelLD DelMin UpCap RTTmin RTTrange UpCap > > RPM > > > DelLD > > > > 21 327 m 183ms 8ms 22 (51) 28 0-253 12 > > > > 21 413 m 145ms 8 22 (43) 28 0-255 15 > > > > 22 273 m 220ms 6 23 (53) 28 0-259 10 > > > > 21 377 m 159ms 8 23 (51) 28 0-250 10 > > > > 22 281 m 214ms 11 23 (52) 28 0-250 6 > > > > > > > > These tests were conducted in a round-robin fashion to minimize the > > > possibility of network variations between measurements: > > > > NetQual - rest - UDPST-Dn - rest- UDPST-Up - rest - gores - rest - > repeat > > > > > > > > NetQual indicates the same reduced capacity in Downstream when > compared to > > > UDPST (940Mbps is the max for TCP payloads, while 967-970 is max for > IP- > > layer > > > capacity, dep. on VLAN tag). 
> > > > > > [SM] DOCSIS ISPs traditionally provision higher peak rates than > they > > > advertise, with a DOCSIS modem/router with > 1 Gbps LAN capacity (even > via > > > bonded LAG) people in Germany routinely measure TCP/IPv4/HTTP goodput > in the > > > 1050 Mbps range. But typically gigabit ethernet limits the practically > > > achievable throughput somewhat: > > > > > > Ethernet payload rate: 1000 * ((1500)/(1500 + > 38)) = > > > 975.29 Mbps > > > Ethernet payload rate +VLAN: 1000 * ((1500)/(1500 + 38 > + 4)) = > > > 972.76 Mbps > > > IPv4 payload (ethernet+VLAN): 1000 * ((1500 - 20)/(1500 > + 38 + 4)) > > = > > > 959.79 Mbps > > > IPv6 payload (ethernet+VLAN): 1000 * ((1500 - 40)/(1500 > + 38 + 4)) > > = > > > 946.82 Mbps > > > IPv4/TCP payload (ethernet+VLAN): 1000 * ((1500 - 20 - 20)/(1500 + 38 > > + > > > 4)) = 946.82 Mbps > > > IPv6/TCP payload (ethernet+VLAN): 1000 * ((1500 - 20 - 40)/(1500 + 38 > > + > > > 4)) = 933.85 Mbps > > > IPv4/TCP/RFC1323 timestamps payload: 1000 * ((1500 - 12 - 20 - > 20)/(1500 > > + > > > 38 + 4)) = 939.04 Mbps > > > IPv6/TCP/RFC1323 timestamps payload: 1000 * ((1500 - 12 - 20 - > 40)/(1500 > > + > > > 38 + 4)) = 926.07 Mbps > > > > > > > > > Speedtests tend to report IPv4/TCP timestamps might be on depending on > the > > OS, > > > but on-line speedtest almost never return the simple average rate over > the > > > measurement duration, but play "clever" tricks to exclude the start-up > phase > > > and to aggregate over multiple flows that invariably ending with > results > > that > > > tend to exceed the hard limits shown above... > > > > > > > > > > Upstream capacities are not very different (a factor that made TCP > methods > > > more viable many years ago when most consumer access speeds were > limited to > > > 10's of Megabits). > > > > > > [SM] My take on this is that this is partly due to the goal of > ramping > > > up very quickly and get away with a short measurement duration. That > causes > > > imprecision. As Dave said flent's RRUL test defaults to 60 seconds and > I > > often > > > ran/run it for 5 to 10 minutes to get somewhat more reliable numbers > (and > > for > > > timecourses to look at and reason about). > > > > > > > gores reports significantly lower capacity in both downstream and > upstream > > > measurements, a factor of 7 less than NetQual for downstream. > Interestingly, > > > the reduced capacity (taken as the working load) results in higher > > > responsiveness: RPM meas are higher and loaded delays are lower for > > > downstream. > > > > > > [SM] Yepp, if you only fill-up the queue partially you will > harvest less > > > queueing delay and hence retain more responsiveness, albeit this > really just > > > seems to be better interpreted as go-responsiveness failing to achieve > > > "working-conditions". > > > > > > I tend to run gores like this: > > > time ./networkQuality --config mensura.cdn-apple.com --port 443 --path > > > /api/v1/gm/config --sattimeout 60 --extended-stats --logger-filename > > > go_networkQuality_$(date +%Y%m%d_%H%M%S) > > > > > > --sattimeout 60 extends the time out for the saturation measurement > somewhat > > > (before I saw it simply failing on a 1 Gbps access link, it did give > some > > > diagnostic message though). > > > > > > > > > > > > > > The comparison of networkQuality and goresponsiveness is somewhat > > confounded > > > by the need to use the Apple server infrastructure for both these > methods > > (the > > > documentation provides this option - thanks!). 
I don't have admin > access to > > > our server at the moment. But the measured differences are large > despite the > > > confounding factor. > > > > > > [SM] Puzzled, I thought when comparing the two ntworkQuality > variants > > > having the same back-end sort of helps be reducing the differences to > the > > > clients? But comparison to UDPST suffers somewhat. > > > > > > > > > > > goresponsiveness has its own, very different output format than > > > networkQuality. There isn't a comparable "-v" option other than -debug > > (which > > > is extremely detailed). gores only reports RPM for the downstream. > > > > > > [SM] I agree it would be nice if gores would grow a sequential > mode as > > > well. > > > > > > > > > > I hope that these results will prompt more of the principle > evangelists > > and > > > coders to weigh-in. > > > > > > [SM] No luck ;) but at least "amateur hour" (aka me) is ready to > > > discuss. > > > > > > > > > > > It's also worth noting that the RTTrange reported by UDPST is the > range > > > above the minimum RTT, and represents the entire 10 second test. The > > > consistency of the maximum of the range (~255ms) seems to indicate that > > UDPST > > > has characterized the length of the upstream buffer during > measurements. > > > > > > [SM] thanks for explaining that. > > > > > > Regards > > > Sebastian > > > > > > > > > > > I'm sure there is more to observe that is prompted by these > measurements; > > > comments welcome! > > > > > > > > Al > > > > > > > > > > > >> -----Original Message----- > > > >> From: ippm <ippm-bounces@ietf.org> On Behalf Of MORTON JR., AL > > > >> Sent: Tuesday, November 1, 2022 10:51 AM > > > >> To: Dave Taht <dave.taht@gmail.com> > > > >> Cc: ippm@ietf.org; Rpm <rpm@lists.bufferbloat.net> > > > >> Subject: Re: [ippm] Preliminary measurement comparison of "Working > > Latency" > > > >> metrics > > > >> > > > >> *** Security Advisory: This Message Originated Outside of AT&T ***. > > > >> Reference http://cso.att.com/EmailSecurity/IDSP.html for more > > information. > > > >> > > > >> Hi Dave, > > > >> Thanks for trying UDPST (RFC 9097)! > > > >> > > > >> Something you might try with starlink: > > > >> use the -X option and UDPST will generate random payloads. > > > >> > > > >> The measurements with -X will reflect the uncompressed that are > possible. > > > >> I tried this on a ship-board Internet access: uncompressed rate was > > > 100kbps. > > > >> > > > >> A few more quick replies below, > > > >> Al > > > >> > > > >>> -----Original Message----- > > > >>> From: Dave Taht <dave.taht@gmail.com> > > > >>> Sent: Tuesday, November 1, 2022 12:22 AM > > > >>> To: MORTON JR., AL <acmorton@att.com> > > > >>> Cc: ippm@ietf.org; Rpm <rpm@lists.bufferbloat.net> > > > >>> Subject: Re: [ippm] Preliminary measurement comparison of "Working > > > Latency" > > > >>> metrics > > > >>> > > > >>> Dear Al: > > > >>> > > > >>> OK, I took your udpst tool for a spin. > > > >>> > > > >>> NICE! 120k binary (I STILL work on machines with only 4MB of > flash), > > > >>> good feature set, VERY fast, > > > >> [acm] > > > >> Len Ciavattone (my partner in crime on several measurement > projects) is > > the > > > >> lead coder: he has implemented many measurement tools extremely > > > efficiently, > > > >> this one in C-lang. > > > >> > > > >>> and in very brief testing, seemed > > > >>> to be accurate in the starlink case, though it's hard to tell with > > > >>> them as the rate changes every 15s. > > > >> [acm] > > > >> Great! 
Our guiding principle developing UDPST has been to test the > > accuracy > > > of > > > >> measurements against a ground-truth. It pays-off. > > > >> > > > >>> > > > >>> I filed a couple bug reports on trivial stuff: > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://github.com/BroadbandForum/obudpst/issues/8 > > > >>> > > > >> > > > > > > __;!!BhdT!iufMVqCyoH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nOIEUc > > > >>> SswH_opYmEw$ > > > >> [acm] > > > >> much appreciated... We have an OB-UDPST project meeting this > Friday, can > > > >> discuss then. > > > >> > > > >>> > > > >>> (Adding diffserv and ecn washing or marking detection would be a > nice > > > >>> feature to have) > > > >>> > > > >>> Aside from the sheer joy coming from the "it compiles! and runs!" > > > >>> phase I haven't looked much further. > > > >>> > > > >>> I left a copy running on one of my starlink testbeds - > > > >>> fremont.starlink.taht.net - if anyone wants to try it. It's > > > >>> instrumented with netperf, flent, irtt, iperf2 (not quite the > latest > > > >>> version from bob, but close), and now udpst, and good to about a > gbit. > > > >>> > > > >>> nice tool! > > > >> [acm] > > > >> Thanks again! > > > >> > > > >>> > > > >>> Has anyone here played with crusader? ( > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://github.com/Zoxc/crusader__;!!BhdT!iufMVqCy > > > >>> > oH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nOIEUcSswHm4Wtzjc$ > > > ) > > > >>> > > > >>> On Mon, Oct 31, 2022 at 4:30 PM Dave Taht <dave.taht@gmail.com> > wrote: > > > >>>> > > > >>>> On Mon, Oct 31, 2022 at 1:41 PM MORTON JR., AL <acmorton@att.com> > > wrote: > > > >>>> > > > >>>>>> have you tried irtt? > > > >>> > > > >> > > > > > ( > https://urldefense.com/v3/__https://github.com/heistp/irtt__;!!BhdT!iufMVqCyo > > > >>> > H_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nOIEUcSswHBSirIcE$ > > > ) > > > >>>>> I have not. Seems like a reasonable tool for UDP testing. The > feature > > I > > > >>> didn't like in my scan of the documentation is the use of > Inter-packet > > > delay > > > >>> variation (IPDV) instead of packet delay variation (PDV): > variation from > > > the > > > >>> minimum (or reference) delay. The morbidly curious can find my > analysis > > in > > > >> RFC > > > >>> 5481: > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://datatracker.ietf.org/doc/html/rfc5481__; > !! > > > >>> > > > >> > > > > > > BhdT!iufMVqCyoH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nOIEUcSswHt > > > >>> T7QSlc$ > > > >>>> > > > >>>> irtt was meant to simulate high speed voip and one day > > > >>>> videoconferencing. Please inspect the json output > > > >>>> for other metrics. Due to OS limits it is typically only accurate > to a > > > >>>> 3ms interval. One thing it does admirably is begin to expose the > > > >>>> sordid sump of L2 behaviors in 4g, 5g, wifi, and other wireless > > > >>>> technologies, as well as request/grant systems like cable and > gpon, > > > >>>> especially when otherwise idle. 
> > > >>>> > > > >>>> Here is a highres plot of starlink's behaviors from last year: > > > >>>> https://urldefense.com/v3/__https://forum.openwrt.org/t/cake-w- > > adaptive- > > > >>> bandwidth- > > > >>> > > > >> > > > > > > historic/108848/3238__;!!BhdT!iufMVqCyoH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vle > > > >>> KCIKpL3PBLwci5nOIEUcSswH_5Sms3w$ > > > >>>> > > > >>>> clearly showing them "optimizing for bandwidth" and changing next > sat > > > >>>> hop, and about a 40ms interval of buffering between these > switches. > > > >>>> I'd published elsewhere, if anyone cares, a preliminary study of > what > > > >>>> starlink's default behaviors did to cubic and BBR... > > > >>>> > > > >>>>> > > > >>>>> irtt's use of IPDV means that the results won’t compare with > UDPST, > > and > > > >>> possibly networkQuality. But I may give it a try anyway... > > > >>>> > > > >>>> The more the merrier! Someday the "right" metrics will arrive. > > > >>>> > > > >>>> As a side note, this paper focuses on RAN uplink latency > > > >>>> > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://dl.ifip.org/db/conf/itc/itc2021/1570740615 > > > >>> > > > >> > > > > > > .pdf__;!!BhdT!iufMVqCyoH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nO > > > >>> IEUcSswHgvqerjg$ which I think > > > >>>> is a major barrier to most forms of 5G actually achieving good > > > >>>> performance in a FPS game, if it is true for more RANs. I'd like > more > > > >>>> to be testing uplink latencies idle and with load, on all > > > >>>> technologies. > > > >>>> > > > >>>>> > > > >>>>> thanks again, Dave. > > > >>>>> Al > > > >>>>> > > > >>>>>> -----Original Message----- > > > >>>>>> From: Dave Taht <dave.taht@gmail.com> > > > >>>>>> Sent: Monday, October 31, 2022 12:52 PM > > > >>>>>> To: MORTON JR., AL <acmorton@att.com> > > > >>>>>> Cc: ippm@ietf.org; Rpm <rpm@lists.bufferbloat.net> > > > >>>>>> Subject: Re: [ippm] Preliminary measurement comparison of > "Working > > > >>> Latency" > > > >>>>>> metrics > > > >>>>>> > > > >>>>>> Thank you very much for the steer to RFC9097. I'd completely > missed > > > >>> that. > > > >>>>>> > > > >>>>>> On Mon, Oct 31, 2022 at 9:04 AM MORTON JR., AL < > acmorton@att.com> > > > >> wrote: > > > >>>>>>> > > > >>>>>>> (astute readers may have guessed that I pressed "send" too > soon on > > > >>> previous > > > >>>>>> message...) > > > >>>>>>> > > > >>>>>>> I also conducted upstream tests this time, here are the > results: > > > >>>>>>> (capacity in Mbps, delays in ms, h and m are RPM categories, > High > > > >> and > > > >>>>>> Medium) > > > >>>>>>> > > > >>>>>>> Net Qual UDPST (RFC9097) > > > >> Ookla > > > >>>>>>> UpCap RPM DelLD DelMin UpCap RTTmin RTTrange > > > >> UpCap > > > >>>>>> Ping(no load) > > > >>>>>>> 34 1821 h 33ms 11ms 23 (42) 28 0-252 > 22 > > > >>> 8 > > > >>>>>>> 22 281 m 214ms 8ms 27 (52) 25 5-248 > 22 > > > >>> 8 > > > >>>>>>> 22 290 m 207ms 8ms 27 (55) 28 0-253 > 22 > > > >>> 9 > > > >>>>>>> 21 330 m 182ms 11ms 23 (44) 28 0-255 > 22 > > > >>> 7 > > > >>>>>>> 22 334 m 180ms 9ms 33 (56) 25 0-255 > 22 > > > >>> 9 > > > >>>>>>> > > > >>>>>>> The Upstream capacity measurements reflect an interesting > feature > > > >> that > > > >>> we > > > >>>>>> can reliably and repeatably measure with UDPST. The first ~3 > seconds > > > >> of > > > >>>>>> upstream data experience a "turbo mode" of ~50Mbps. 
UDPST > displays > > > >> this > > > >>>>>> behavior in its 1 second sub-interval measurements and has a > bimodal > > > >>> reporting > > > >>>>>> option that divides the complete measurement interval in two > time > > > >>> intervals to > > > >>>>>> report an initial (turbo) max capacity and a steady-state max > > capacity > > > >>> for the > > > >>>>>> later intervals. The UDPST capacity results present both > > measurements: > > > >>> steady- > > > >>>>>> state first. > > > >>>>>> > > > >>>>>> Certainly we can expect bi-model distributions from many ISPs, > as, > > for > > > >>>>>> one thing, the "speedboost" concept remains popular, except > that it's > > > >>>>>> misnamed, as it should be called speed-subtract or speed-lose. > Worse, > > > >>>>>> it is often configured "sneakily", in that it doesn't kick in > for the > > > >>>>>> typical observed duration of the test, for some, they cut the > > > >>>>>> available bandwidth about 20s in, others, 1 or 5 minutes. > > > >>>>>> > > > >>>>>> One of my biggest issues with the rpm spec so far is that it > should, > > > >>>>>> at least, sometimes, run randomly longer than the overly short > > > >>>>>> interval it runs for and the tools also allow for manual > override of > > > >>> length. > > > >>>>>> > > > >>>>>> we caught a lot of tomfoolery with flent's rrul test running by > > > >> default > > > >>> for > > > >>>>>> 1m. > > > >>>>>> > > > >>>>>> Also, AQMs on the path can take a while to find the optimal > drop or > > > >> mark > > > >>> rate. > > > >>>>>> > > > >>>>>>> > > > >>>>>>> The capacity processing in networkQuality and Ookla appear to > report > > > >>> the > > > >>>>>> steady-state result. > > > >>>>>> > > > >>>>>> Ookla used to basically report the last result. Also it's not a > good > > > >>>>>> indicator of web traffic behavior at all, watching the curve > > > >>>>>> go up much more slowly in their test on say, fiber 2ms, vs > starlink, > > > >>>>>> (40ms).... > > > >>>>>> > > > >>>>>> So adding another mode - how quickly is peak bandwidth actually > > > >>>>>> reached, would be nice. > > > >>>>>> > > > >>>>>> I haven't poked into the current iteration of the > goresponsiveness > > > >>>>>> test at all: > https://urldefense.com/v3/__https://github.com/network- > > > >>>>>> > > > >>> > > > >> > > > > > > quality/goresponsiveness__;!!BhdT!giGhURYxqguQCyB3NT8rE0vADdzxcQ2eCzfS4NRMsdvb > > > >>>>>> K2bOqw0uMPbFeJ7PxzxTc48iQFubYTxxmyA$ it > > > >>>>>> would be good to try collecting more statistics and histograms > and > > > >>>>>> methods of analyzing the data in that libre-source version. > > > >>>>>> > > > >>>>>> How does networkQuality compare vs a vs your tool vs a vs > > > >>> goresponsiveness? > > > >>>>>> > > > >>>>>>> I watched the upstream capacity measurements on the Ookla app, > and > > > >>> could > > > >>>>>> easily see the initial rise to 40-50Mbps, then the drop to > ~22Mbps > > for > > > >>> most of > > > >>>>>> the test which determined the final result. > > > >>>>>> > > > >>>>>> I tend to get upset when I see ookla's new test flash a peak > result > > in > > > >>>>>> the seconds and then settle on some lower number somehow. > > > >>>>>> So far as I know they are only sampling the latency every 250ms. > > > >>>>>> > > > >>>>>>> > > > >>>>>>> The working latency is about 200ms in networkQuality and about > 280ms > > > >>> as > > > >>>>>> measured by UDPST (RFC9097). 
Note that the networkQuality > minimum > > > >> delay > > > >>> is > > > >>>>>> ~20ms lower than the UDPST RTTmin, so this accounts for some of > the > > > >>> difference > > > >>>>>> in working latency. Also, we used the very dynamic Type C load > > > >>>>>> adjustment/search algorithm in UDPST during all of this testing, > > which > > > >>> could > > > >>>>>> explain the higher working latency to some degree. > > > >>>>>>> > > > >>>>>>> So, it's worth noting that the measurements needed for > assessing > > > >>> working > > > >>>>>> latency/responsiveness are available in the UDPST utility, and > that > > > >> the > > > >>> UDPST > > > >>>>>> measurements are conducted on UDP transport (used by a growing > > > >> fraction > > > >>> of > > > >>>>>> Internet traffic). > > > >>>>>> > > > >>>>>> Thx, didn't know of this work til now! > > > >>>>>> > > > >>>>>> have you tried irtt? > > > >>>>>> > > > >>>>>>> > > > >>>>>>> comments welcome of course, > > > >>>>>>> Al > > > >>>>>>> > > > >>>>>>>> -----Original Message----- > > > >>>>>>>> From: ippm <ippm-bounces@ietf.org> On Behalf Of MORTON JR., > AL > > > >>>>>>>> Sent: Sunday, October 30, 2022 8:09 PM > > > >>>>>>>> To: ippm@ietf.org > > > >>>>>>>> Subject: Re: [ippm] Preliminary measurement comparison of > "Working > > > >>>>>> Latency" > > > >>>>>>>> metrics > > > >>>>>>>> > > > >>>>>>>> > > > >>>>>>>> Hi again RPM friends and IPPM'ers, > > > >>>>>>>> > > > >>>>>>>> As promised, I repeated the tests shared last week, this time > > > >> using > > > >>> both > > > >>>>>> the > > > >>>>>>>> verbose (-v) and sequential (-s) dwn/up test options of > > > >>> networkQuality. I > > > >>>>>>>> followed Sebastian's calculations as well. > > > >>>>>>>> > > > >>>>>>>> Working Latency & Capacity Summary > > > >>>>>>>> > > > >>>>>>>> Net Qual UDPST > > > >>> Ookla > > > >>>>>>>> DnCap RPM DelLD DelMin DnCap RTTmin RTTrange > > > >>> DnCap > > > >>>>>>>> Ping(no load) > > > >>>>>>>> 885 916 m 66ms 8ms 970 28 0-20 > > > >> 940 > > > >>> 8 > > > >>>>>>>> 888 1355 h 44ms 8ms 966 28 0-23 > > > >> 940 > > > >>> 8 > > > >>>>>>>> 891 1109 h 54ms 8ms 968 27 0-19 > > > >> 940 > > > >>> 9 > > > >>>>>>>> 887 1141 h 53ms 11ms 966 27 0-18 > > > >> 937 > > > >>> 7 > > > >>>>>>>> 884 1151 h 52ms 9ms 968 28 0-20 > > > >> 937 > > > >>> 9 > > > >>>>>>>> > > > >>>>>>>> With the sequential test option, I noticed that networkQuality > > > >>> achieved > > > >>>>>> nearly > > > >>>>>>>> the maximum capacity reported almost immediately at the start > of a > > > >>> test. > > > >>>>>>>> However, the reported capacities are low by about 60Mbps, > > > >> especially > > > >>> when > > > >>>>>>>> compared to the Ookla TCP measurements. > > > >>>>>>>> > > > >>>>>>>> The loaded delay (DelLD) is similar to the UDPST RTTmin + (the > > > >> high > > > >>> end of > > > >>>>>> the > > > >>>>>>>> RTTrange), for example 54ms compared to (27+19=46). Most of > the > > > >>>>>> networkQuality > > > >>>>>>>> RPM measurements were categorized as "High". There doesn't > seem to > > > >>> be much > > > >>>>>>>> buffering in the downstream direction. 
> > > >>>>>>>> > > > >>>>>>>> > > > >>>>>>>> > > > >>>>>>>>> -----Original Message----- > > > >>>>>>>>> From: ippm <ippm-bounces@ietf.org> On Behalf Of MORTON JR., > AL > > > >>>>>>>>> Sent: Monday, October 24, 2022 6:36 PM > > > >>>>>>>>> To: ippm@ietf.org > > > >>>>>>>>> Subject: [ippm] Preliminary measurement comparison of > "Working > > > >>> Latency" > > > >>>>>>>>> metrics > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> Hi RPM friends and IPPM'ers, > > > >>>>>>>>> > > > >>>>>>>>> I was wondering what a comparison of some of the "working > > > >> latency" > > > >>>>>> metrics > > > >>>>>>>>> would look like, so I ran some tests using a service on > DOCSIS > > > >>> 3.1, with > > > >>>>>> the > > > >>>>>>>>> downlink provisioned for 1Gbps. > > > >>>>>>>>> > > > >>>>>>>>> I intended to run apple's networkQuality, UDPST (RFC9097), > and > > > >>> Ookla > > > >>>>>>>> Speedtest > > > >>>>>>>>> with as similar connectivity as possible (but we know that > the > > > >>> traffic > > > >>>>>> will > > > >>>>>>>>> diverge to different servers and we can't change that > aspect). > > > >>>>>>>>> > > > >>>>>>>>> Here's a quick summary of yesterday's results: > > > >>>>>>>>> > > > >>>>>>>>> Working Latency & Capacity Summary > > > >>>>>>>>> > > > >>>>>>>>> Net Qual UDPST Ookla > > > >>>>>>>>> DnCap RPM DnCap RTTmin RTTVarRnge DnCap > > > >>> Ping(no > > > >>>>>> load) > > > >>>>>>>>> 878 62 970 28 0-19 941 > 6 > > > >>>>>>>>> 891 92 970 27 0-20 940 > 7 > > > >>>>>>>>> 891 120 966 28 0-22 937 > 9 > > > >>>>>>>>> 890 112 970 28 0-21 940 > 8 > > > >>>>>>>>> 903 70 970 28 0-16 935 > 9 > > > >>>>>>>>> > > > >>>>>>>>> Note: all RPM values were categorized as Low. > > > >>>>>>>>> > > > >>>>>>>>> networkQuality downstream capacities are always on the low > side > > > >>> compared > > > >>>>>> to > > > >>>>>>>>> others. We would expect about 940Mbps for TCP, and that's > mostly > > > >>> what > > > >>>>>> Ookla > > > >>>>>>>>> achieved. I think that a longer test duration might be > needed to > > > >>> achieve > > > >>>>>> the > > > >>>>>>>>> actual 1Gbps capacity with networkQuality; intermediate > values > > > >>> observed > > > >>>>>> were > > > >>>>>>>>> certainly headed in the right direction. (I recently > upgraded to > > > >>>>>> Monterey > > > >>>>>>>> 12.6 > > > >>>>>>>>> on my MacBook, so should have the latest version.) > > > >>>>>>>>> > > > >>>>>>>>> Also, as Sebastian Moeller's message to the list reminded > me, I > > > >>> should > > > >>>>>> have > > > >>>>>>>>> run the tests with the -v option to help with comparisons. > I'll > > > >>> repeat > > > >>>>>> this > > > >>>>>>>>> test when I can make time. > > > >>>>>>>>> > > > >>>>>>>>> The UDPST measurements of RTTmin (minimum RTT observed during > > > >> the > > > >>> test) > > > >>>>>> and > > > >>>>>>>>> the range of variation above the minimum (RTTVarRnge) add-up > to > > > >>> very > > > >>>>>>>>> reasonable responsiveness IMO, so I'm not clear why RPM > graded > > > >>> this > > > >>>>>> access > > > >>>>>>>> and > > > >>>>>>>>> path as "Low". The UDPST server I'm using is in NJ, and I'm > in > > > >>> Chicago > > > >>>>>>>>> conducting tests, so the minimum 28ms is typical. UDPST > > > >>> measurements > > > >>>>>> were > > > >>>>>>>> run > > > >>>>>>>>> on an Ubuntu VM in my MacBook. > > > >>>>>>>>> > > > >>>>>>>>> The big disappointment was that the Ookla desktop app I > updated > > > >>> over the > > > >>>>>>>>> weekend did not include the new responsiveness metric! 
I > > > >> included > > > >>> the > > > >>>>>> ping > > > >>>>>>>>> results anyway, and it was clearly using a server in the > nearby > > > >>> area. > > > >>>>>>>>> > > > >>>>>>>>> So, I have some more work to do, but I hope this is > interesting- > > > >>> enough > > > >>>>>> to > > > >>>>>>>>> start some comparison discussions, and bring-out some > > > >> suggestions. > > > >>>>>>>>> > > > >>>>>>>>> happy testing all, > > > >>>>>>>>> Al > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> > > > >>>>>>>>> _______________________________________________ > > > >>>>>>>>> ippm mailing list > > > >>>>>>>>> ippm@ietf.org > > > >>>>>>>>> > > > >>>>>>>> > > > >>>>>> > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://www.ietf.org/mailman/listinfo/ippm__;!!Bhd > > > >>>>>>>>> > > > >>>>>>>> > > > >>>>>> > > > >>> > > > >> > > > > > > T!hd5MvMQw5eiICQbsfoNaZBUS38yP4YIodBvz1kV5VsX_cGIugVnz5iIkNqi6fRfIQzWef_xKqg4$ > > > >>>>>>>> > > > >>>>>>>> _______________________________________________ > > > >>>>>>>> ippm mailing list > > > >>>>>>>> ippm@ietf.org > > > >>>>>>>> > > > >>>>>> > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://www.ietf.org/mailman/listinfo/ippm__;!!Bhd > > > >>>>>>>> T!g- > > > >>>>>> > > > >>> > > > FsktB_l9MMSGNUge6FXDkL1npaKtKcyDtWLcTZGpCunxNNCcTImH8YjC9eUT262Wd8q1EBpiw$ > > > >>>>>>> > > > >>>>>>> _______________________________________________ > > > >>>>>>> ippm mailing list > > > >>>>>>> ippm@ietf.org > > > >>>>>>> > > > >>>>>> > > > >>> > > > >> > > > > > > https://urldefense.com/v3/__https://www.ietf.org/mailman/listinfo/ippm__;!!Bhd > > > >>>>>> > > > >>> > > > >> > > > > > > T!giGhURYxqguQCyB3NT8rE0vADdzxcQ2eCzfS4NRMsdvbK2bOqw0uMPbFeJ7PxzxTc48iQFub_gMs > > > >>>>>> KXU$ > > > >>>>>> > > > >>>>>> > > > >>>>>> > > > >>>>>> -- > > > >>>>>> This song goes out to all the folk that thought Stadia would > work: > > > >>>>>> > https://urldefense.com/v3/__https://www.linkedin.com/posts/dtaht_the- > > > >>> mushroom- > > > >>>>>> song-activity-6981366665607352320- > > > >>>>>> > > > >>> > > > >> > > > > > > FXtz__;!!BhdT!giGhURYxqguQCyB3NT8rE0vADdzxcQ2eCzfS4NRMsdvbK2bOqw0uMPbFeJ7PxzxT > > > >>>>>> c48iQFub34zz4iE$ > > > >>>>>> Dave Täht CEO, TekLibre, LLC > > > >>>> > > > >>>> > > > >>>> > > > >>>> -- > > > >>>> This song goes out to all the folk that thought Stadia would work: > > > >>>> > https://urldefense.com/v3/__https://www.linkedin.com/posts/dtaht_the- > > > >>> mushroom-song-activity-6981366665607352320- > > > >>> > > > >> > > > > > > FXtz__;!!BhdT!iufMVqCyoH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nO > > > >>> IEUcSswHLHDpSWs$ > > > >>>> Dave Täht CEO, TekLibre, LLC > > > >>> > > > >>> > > > >>> > > > >>> -- > > > >>> This song goes out to all the folk that thought Stadia would work: > > > >>> > https://urldefense.com/v3/__https://www.linkedin.com/posts/dtaht_the- > > > >> mushroom- > > > >>> song-activity-6981366665607352320- > > > >>> > > > >> > > > > > > FXtz__;!!BhdT!iufMVqCyoH_CQ2AXdNJV1QSjl_7srzb_IznWE87U6E583vleKCIKpL3PBLwci5nO > > > >>> IEUcSswHLHDpSWs$ > > > >>> Dave Täht CEO, TekLibre, LLC > > > >> _______________________________________________ > > > >> ippm mailing list > > > >> ippm@ietf.org > > > >> > > > > > > https://urldefense.com/v3/__https://www.ietf.org/mailman/listinfo/ippm__;!!Bhd > > > >> > T!jOVBx7DlKXbDMiZqaYSUhBtkSdUvfGpYUyGvLerdLsLBJZPMzEGcbhC9ZSzsZOd1dYC- > > > rDt9HLI$ > > > > _______________________________________________ > > > > Rpm mailing list > > > > Rpm@lists.bufferbloat.net > > > > > 
> > > > > https://urldefense.com/v3/__https://lists.bufferbloat.net/listinfo/rpm__;!!Bhd > > > T!h8K1vAtpaGSUHpuVMl5sZgi7k-f64BEaV91ypoUokPjn57v_79iCnp7W- > > > mERYCyuCd9e9PY3aNLkSw$ > > > > _______________________________________________ > > ippm mailing list > > ippm@ietf.org > > > https://urldefense.com/v3/__https://www.ietf.org/mailman/listinfo/ippm__;!!Bhd > > > T!h-glaezufaCxidYk1xzTF48dbEz67JPIJjPA_nweL8YvDu6Z3TmG0A37k_DQ15FIzzwoeCLOEaw$ >
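
The quoted thread above walks through the framing-overhead arithmetic behind the ~940 Mbps TCP-goodput ceiling versus the ~967 Mbps IP-Layer Capacity that UDPST reports with 1222-byte datagrams. A short Go sketch (hypothetical file name) reproducing the quoted figures for a 1000 Mbps line rate, using the overhead constants given in the thread, 38 bytes of Ethernet framing plus a 4-byte 802.1Q tag:

// framing.go: reproduce the goodput ceilings quoted in the thread for a
// 1000 Mbps Ethernet line rate. 38 = preamble+SFD+MAC header+FCS+IFG,
// 4 = 802.1Q VLAN tag.
package main

import "fmt"

// goodput scales the line rate by the ratio of counted payload bytes to
// total bytes occupied on the wire per packet.
func goodput(lineMbps float64, payloadBytes, wireBytes int) float64 {
	return lineMbps * float64(payloadBytes) / float64(wireBytes)
}

func main() {
	const line = 1000.0 // Mbps
	const ethVLAN = 38 + 4

	fmt.Printf("IPv4/TCP payload (eth+VLAN):          %.2f Mbps\n",
		goodput(line, 1500-20-20, 1500+ethVLAN)) // ≈ 946.82, as quoted
	fmt.Printf("IPv4/TCP + RFC 1323 timestamps:       %.2f Mbps\n",
		goodput(line, 1500-12-20-20, 1500+ethVLAN)) // ≈ 939.04, as quoted
	fmt.Printf("IP-layer capacity, 1222-B datagrams:  %.2f Mbps\n",
		goodput(line, 1222, 1222+ethVLAN)) // ≈ 966.77, as quoted for UDPST
}
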