[tsvwg] What to put into multibit congestion information RE: New Version Notification for draft-huang-tsvwg-transport-challenges-00.txt

"Shihang(Vincent)" <shihang9@huawei.com> Fri, 20 October 2023 08:03 UTC

From: "Shihang(Vincent)" <shihang9@huawei.com>
To: "Ruediger.Geib@telekom.de" <Ruediger.Geib@telekom.de>, "moeller0@gmx.de" <moeller0@gmx.de>
CC: "tsvwg@ietf.org" <tsvwg@ietf.org>, "ccwg@ietf.org" <ccwg@ietf.org>
Date: Fri, 20 Oct 2023 08:02:49 +0000
Archived-At: <https://mailarchive.ietf.org/arch/msg/tsvwg/5u3sccgAoEMmDA28kU2x4cRTcOs>
Subject: [tsvwg] What to put into multibit congestion information RE: New Version Notification for draft-huang-tsvwg-transport-challenges-00.txt

Hi Ruediger and Sebastian,
Adding CCWG into the loop.
We propose Advanced ECN (see https://datatracker.ietf.org/doc/draft-shi-ccwg-advanced-ecn/), which carries multi-bit congestion information in the packet. Each router can modify it to indicate the cumulative congestion state along the path.
It includes the following congestion information:
1. Inflight Ratio, taken from HPCC (see https://datatracker.ietf.org/doc/draft-miao-ccwg-hpcc/)
2. DRE (Discounting Rate Estimator), taken from CONGA (see https://doi.org/10.1145/2619239)
3. Queue Utilization Ratio, indicating the queue occupancy
4. Queue Delay, in ms
5. Congested Hops
Comments and suggestions are welcome.
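As a rough illustration of how such cumulative per-path state could be maintained hop by hop, here is a sketch in Python. The field names follow the list above; the update rules (max for the ratios, sum for the delay, a 50% occupancy threshold for counting a hop as congested) are illustrative assumptions, not what the draft specifies.

```python
# Sketch (not the draft's wire format): one way a router could fold its
# local measurements into multi-bit congestion state carried in each packet.
from dataclasses import dataclass

@dataclass
class CongestionInfo:
    inflight_ratio: float   # HPCC-style: bytes in flight / (bandwidth * base RTT)
    dre: float              # CONGA-style discounted rate estimate, 0..1
    queue_util: float       # queue occupancy / queue capacity, 0..1
    queue_delay_ms: float   # queueing delay in milliseconds
    congested_hops: int     # hops whose local state exceeded a threshold

def update_at_hop(info: CongestionInfo, local: CongestionInfo,
                  congested_threshold: float = 0.5) -> CongestionInfo:
    """Fold this hop's local measurements into the cumulative path state."""
    return CongestionInfo(
        inflight_ratio=max(info.inflight_ratio, local.inflight_ratio),
        dre=max(info.dre, local.dre),
        queue_util=max(info.queue_util, local.queue_util),
        queue_delay_ms=info.queue_delay_ms + local.queue_delay_ms,  # delays add up
        congested_hops=info.congested_hops
                       + (1 if local.queue_util > congested_threshold else 0),
    )
```

The "keep the worst hop, accumulate the delay" rule mirrors how HPCC and CONGA reduce path state to the bottleneck hop; other folds are possible.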

More comments inline, marked as [HS].

Thanks,
Hang

-----Original Message-----
From: tsvwg <tsvwg-bounces@ietf.org> On Behalf Of Ruediger.Geib@telekom.de
Sent: Thursday, October 19, 2023 10:49 PM
To: moeller0@gmx.de
Cc: tsvwg@ietf.org
Subject: Re: [tsvwg] New Version Notification for draft-huang-tsvwg-transport-challenges-00.txt

Hi Sebastian,

no issue, I didn't research anything either. As I've mentioned, I engineer and validate queue configs by profession.

My point on the dequeue bandwidth below is that 90% queue occupancy means different things in terms of delay if the dequeue bandwidth starts to vary. The same holds if an AQM is configured: say, start dropping at 35% queue occupancy and increase the drop rate by some function up to 50% when reaching the queue limit. I'd expect the customer app experience at 90% queue occupancy, with a corresponding drop rate in the range of around 40%, to be rather poor (if that 90% queue occupancy is ever reached). 
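The arithmetic behind this point can be made concrete with a short sketch (a buffer sized as 50 ms at 10 Mbit/s with plain tail drop, matching the example quoted later in the thread):

```python
# Sketch: queueing delay implied by 90% occupancy of a buffer sized as
# 50 ms at 10 Mbit/s, evaluated at several dequeue bandwidths.
def occupancy_delay_ms(buffer_ms, sized_at_mbps, occupancy, dequeue_mbps):
    buffer_bytes = buffer_ms / 1000 * sized_at_mbps * 1e6 / 8  # 62500 B here
    queued_bytes = occupancy * buffer_bytes                    # 56250 B at 90%
    return queued_bytes / (dequeue_mbps * 1e6 / 8) * 1000      # drain time in ms

for rate in (10, 5, 2):
    print(f"90% occupancy @ {rate} Mbit/s -> "
          f"{occupancy_delay_ms(50, 10, 0.9, rate):.0f} ms")
```

The same occupancy figure maps to 45, 90, or 225 ms of delay depending on the dequeue rate, which is exactly why occupancy alone underspecifies the delay signal.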

Where I want to get to is that preconditions and assumptions need to be well defined before standardising the signaling of more congestion information at Internet scale. Some examples: if the mechanisms to be defined assume constant dequeue bandwidth, at least at some timescale > RTT, that should be documented. If these mechanisms work best when AQM is disabled and there is a fixed buffer size with tail drop, that should be documented. And so on. 
No doubt, in suitable environments extended congestion feedback is useful, and some publications indicate that. I don't expect you or others to document these preconditions and assumptions right now in all detail - I'm after consensus on the general approach.

[HS] I agree that the preconditions and assumptions are useful to document, but we should avoid overly fine-grained assumptions such as the AQM behavior. Because there will be many devices along the path, it is hard to unify the AQM behavior across all of them.
 
Regards,

Ruediger


-----Original Message-----
From: Sebastian Moeller <moeller0@gmx.de>
Sent: Thursday, 19 October 2023 15:31
To: Geib, Rüdiger <Ruediger.Geib@telekom.de>
Cc: tsvwg@ietf.org
Subject: Re: [tsvwg] New Version Notification for draft-huang-tsvwg-transport-challenges-00.txt

Hi Ruediger,


I again want to stress that I have not done any research in this area, nor am I about to, so please take my positions as subjective opinions only.


> On Oct 18, 2023, at 13:30, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
> 
> Hi Sebastian,
> 
> From your response I'd assume that by now not everything is clear. An example. Related to buffer fill level, let's assume a 50ms@10Mbit/s max buffer. Assume any reason why the dequeue bandwidth is variable (several exist in practice, e.g., shared media). Plain tail drop, no AQM.
> 
> 90percBuf@Dequeue Bandwidth = Value[ms]
> 90percBuf@10 Mbit/s = 45ms
> 90percBuf@5 Mbit/s = 90ms
> 90percBuf@2 Mbit/s = 225ms

	[SM2] Assuming this is from consecutive packets, a receiver of that information would likely need to low-pass filter the signal, since it will not be able to respond at that timing... but once you do that I see no conceptual problem
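A minimal sketch of such a low-pass filter is an EWMA over the per-packet occupancy samples; the smoothing factor 0.1 below is an arbitrary assumption, and a real receiver would presumably tie it to its reaction timescale (~RTT).

```python
# Sketch: EWMA low-pass filter over noisy per-packet occupancy samples.
def ewma(samples, alpha=0.1):
    """Return the smoothed series; smaller alpha -> slower, heavier smoothing."""
    smoothed, value = [], None
    for s in samples:
        value = s if value is None else alpha * s + (1 - alpha) * value
        smoothed.append(value)
    return smoothed

noisy = [0.9, 0.1, 0.9, 0.1, 0.9, 0.1]  # jittery per-packet occupancy signal
print(ewma(noisy)[-1])                  # settles between the extremes
```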


> 
>> - is dequeue bandwidth constant (at which value)? 
> 
> 	[SM] This is a link property outside of the scope of the congestion information.

	[SM2] Forgot to caveat that this is just an assumption, not a "fact"...

> 
> [RG] If your statement generally holds, that simplifies the issue. Do you assume a particular AQM technology being deployed, like Codel/PIE, when you've replied?

	[SM2] I had not even thought about that. If we look at e.g. RED (with ECN), it will essentially adapt its marking probability/rate in relation to the buffer occupancy, but if we already write out the buffer occupancy, we really only need an additional AQM for the old-style ECN users and potentially for overload handling... but I see no strict requirement for any form of AQM on top of the buffer occupancy information (which might actually be helpful if this is to be implemented in e.g. switches).
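The RED-style coupling between occupancy and marking probability alluded to here can be sketched as follows. The thresholds reuse Ruediger's illustrative numbers from earlier in the thread (start at 35% occupancy, ramp up to a 50% drop/mark rate at the queue limit) and are not a recommendation.

```python
# Sketch: RED-style marking probability as a linear ramp over occupancy.
def mark_probability(occupancy: float, min_th: float = 0.35,
                     max_th: float = 1.0, max_p: float = 0.5) -> float:
    if occupancy <= min_th:
        return 0.0                 # below the threshold: never mark
    if occupancy >= max_th:
        return max_p               # at the queue limit: maximum rate
    return max_p * (occupancy - min_th) / (max_th - min_th)

print(mark_probability(0.9))       # 90% occupancy -> rate in the ~40% range
```

Writing out the occupancy itself hands the endpoint this whole curve rather than one probabilistic sample of it per packet.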


> If so, I'd appreciate a statement in the response.

	[SM2] As I said, I would on principle try to avoid tail drop, but if that is all the hardware supports, then so be it ;)


> Presence of the latter also plays into the issue of signaling some kind of AQM.

	[SM] When either the max sojourn time or buffer occupancy along a path becomes available to the flows, maybe the desire for more involved AQMs goes away?


> Parallel activity of several independently operating transport optimization mechanisms may create new issues.

	[SM] This certainly should be properly tested ;)

> 
> Regards,
> 
> Ruediger
> 
> -----Original Message-----
> From: Sebastian Moeller <moeller0@gmx.de>
> Sent: Wednesday, 18 October 2023 12:33
> To: Geib, Rüdiger <Ruediger.Geib@telekom.de>
> Cc: tsvwg@ietf.org
> Subject: Re: [tsvwg] New Version Notification for 
> draft-huang-tsvwg-transport-challenges-00.txt
> 
> Hi Ruediger,
> 
> 
>> On Oct 18, 2023, at 10:37, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
>> 
>> Hi Sebastian,
>> 
>> the buffer occupancy information is helpful to adjust utilization of a bottleneck by the transport control loop, no doubt. 
>> 
>> Which information is required, and when can it be signaled (I'm no HW programming expert)?
> 
> 	[SM] I have not done any research in this area so can at best speculate. The papers I did look at showed a substantial improvement for some form of higher-resolution information. Which information is the best (or generally good enough) I cannot tell; I guess there is some research still to be done in that direction. My naive take is that having some generic IP header or UDP option for such information might actually help such research. But for putting things in real hardware it might be best to wait for the results of that research, or converge on say three possibly useful pieces of information (each preferably at a fixed address) and have hardware fill in the one it is configured to deliver?
> 
> 
>> - should buffer be indicated in Byte or ms@dequeue-bandwidth?
> 
> 	[SM] Or maybe just in relative depth like percentage filled?
> 
>> - is that total buffer, available buffer or consumed buffer?
> 
> 	[SM] Total buffer would likely be static and hence not informative; available and consumed buffer (at least if reported as relative percentages) encode the same thing, so here we would just need to define something for consistency, no?
> 
>> - is a standard fill-level of the buffer to be signaled?
> 
> 	[SM] Don't know.
> 
>> - is the fill-level of the buffer measured across a time interval? If yes, on which timescale?
> 
> 	[SM] I would guess instantaneous filling state would be desirable, 
> but this can be updated with a bit of slack (maybe no need to pollute 
> the fast path with too frequent updates?)
> 
> 
>> - is presence and configuration of some AQM or tail-drop to be signaled?
> 
> 	[SM] No idea.
> 
>> Or would a buffer with signaling preferentially be operated with tail drop?
> 
> 	[SM] My limited experience tells me that tail-drop generally is to be avoided if possible, even head drop tends to do better. That said if push comes to shove and the buffer is essentially DOSed I guess the cheapest/best would still be tail drop (or drop even earlier if possible?).
> 
>> - is an indication of packet drop required (a loss rate over an interval)? 
> 
> 	[SM] I would guess not; each flow should be aware of its own packet loss. I am also not sure whether network operators generally would like to "broadcast" drop rates, as that could be (undeservedly) bad PR?
> 
>> - is dequeue bandwidth constant (at which value)? 
> 
> 	[SM] This is a link property outside of the scope of the congestion information. I would say from the perspective of the control loop it does not matter all that much whether congestion was caused by a reduction in egress rate of by more flows traversing the bottleneck.
> 
> 
>> - if dequeue bandwidth is varying - would an average dequeue bandwidth make sense, and if yes, on which timescale?
> 
> 	[SM] Good question. I guess this information would be interesting to some, but I am not sure whether a congestion control loop would care all that much. After all, a bottleneck's total egress rate is far less relevant than a flow's achievable capacity share, which will be <= the total egress rate. Hmm, maybe knowing the total rate would allow putting a hard ceiling on the rate/cwnd probing code? Not sure whether that would be worth the hassle.
> 
> 
>> - how are time-intervals for averaging functions determined and made known to a sender? one standard interval, a configurable interval, several intervals, how is that interval signaled, if variable and/or multiple?
> 
> 	[SM] I would keep this as simple as possible; the bottleneck probably knows best and hence should make this call... since I do not think we want real bi-directional communication with the network nodes (well, beyond the "please fill in the buffer occupancy field, thanks in advance" request), we would need to write that interval information into each individual packet, no?
> 
> 
>> - how is buffer occupancy measured
> 
> 	[SM] Left to the hardware?
> 
>> and when can it be signaled?
> 
> 	[SM] In each individual packet; that is IMHO one of the big advantages over ECN and especially L4S. Each packet should contain an unambiguous indication of the path "congestion" level, reducing the need to average at the end-points and making the full information available to all traversing flows (L4S will encode a proxy of the buffer filling state into a rate code via probabilistic marking, but that marking-frequency code will be spread across all traversing packets, hence each flow will need to average over multiple packets to get an idea of the true marking probability).
> 
> 
>> By packet sojourn-time measurement (the first signaling indication occurs after that packet has been scheduled)? By a packet mark at the next packet to be scheduled, once a threshold has been passed?
> 
> 	[SM] If we want to send up-to-date per-packet information, this should be written at dequeue time, and IMHO it should be written unconditionally, independent of thresholds. That way an endpoint might be able to predict the buffer filling dynamics and tailor its own (slow-start?) response accordingly; signaling only after a threshold might be too late. However, there will be some "natural" thresholding simply by quantisation of whatever information into a limited bitfield, no?
> 
> 
>> 
>> To me, many options and no clear view, which choices are the best ones.
> 
> 	[SM] +1; this seems open to experimentation though. 
> 
> 
>> If this is to be solved by a HW function at routers frequently operating bottlenecks, it should scale well and work reliably. The resulting extended transport control loop should work for some years to come, as I think incremental improvements of such a mechanism at Internet scale may be difficult.
> 
> 	[SM] Agreed. 
> 
> Regards
> 	Sebastian
> 
>> 
>> Regards,
>> 
>> Ruediger
>> 
>> -----Original Message-----
>> From: Sebastian Moeller <moeller0@gmx.de>
>> Sent: Tuesday, 17 October 2023 12:17
>> To: Geib, Rüdiger <Ruediger.Geib@telekom.de>
>> Cc: tsvwg@ietf.org
>> Subject: Re: [tsvwg] New Version Notification for 
>> draft-huang-tsvwg-transport-challenges-00.txt
>> 
>> Hi Ruediger,
>> 
>> 
>>> On Oct 17, 2023, at 11:22, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
>>> 
>>> Hi Sebastian,
>>> 
>>> [SM] Or start with a signaling system generic enough to allow for variable information?
>>> 
>>> Certainly one approach. We still need to determine which node 
>>> signals what to which endpoint, and under which operational 
>>> conditions. Again a list
>>> - dequeue bandwidth (now/prediction)
>>> - queue depth ([ms]@dequeue bandwidth/Byte - of the last dequeued 
>>> packet or in total)
>>> - packets lost (within which interval)
>>> - a timestamp
>>> - ?
>>> 
>>> When designing a protocol (extension) at IP layer +/- 1 to extract details on node performance, there should be a proven benefit for all stakeholders involved. And there's a security/trust aspect.
>> 
>> 	[SM] I fully agree with your points.
>> 
>> Personally, ever since I read Arslan, Serhat, and Nick McKeown, 'Switches Know the Exact Amount of Congestion', in Proceedings of the 2019 Workshop on Buffer Sizing, 1-6, 2019, I have come to the prediction that something like max(buffer occupancy) over a network path, in either predicted sojourn time or percentage buffer filling, could really improve congestion control if included in all packets. That paper and the follow-ups on that idea are to me quite convincing. I see no real security concern, as the network node adding this information might as well have dropped the packet if it wanted to harm that flow, so there is little increase in attack surface in that direction. What I hope something like this might allow is a gentler exit from slow start (assuming one can measure and compare the dynamics of the congestion indicator with those of the slow-starting flow, to predict when to exit slow start without first having to dump ~2x too much data into the network in one RTT). The network could indirectly profit if end-points use this information to better rein in congestion pro-actively, but that clearly is not guaranteed, especially over the open internet. 
>> I have not done any research in this area, so I would not be surprised if there were other measures one could signal back that would work similarly or better; from an experimentalist's perspective, having a standardized way to convey information of this type to the end points would be nice. I do think you are probably right that it might be way too early for full deployment and that the experiments should be run and analyzed first. But isn't that what informational and/or experimental track RFCs are intended for, allowing coordinated experiments without having to reinvent the wheel?
>> 
>> 
>>> No need to provide a complete answer or a deep discussion. We're at the beginning. To me all that reads like "let's try some". HW support in an operational environment should be based on reliable, standardised solutions with proven benefit.
>> 
>> 	[SM] I do not claim to have enough insight into the current 
>> literature, but I assume there are still experiments to run before 
>> putting things in hardware ;)
>> 
>> Regards
>> 	Sebastian
>> 
>> P.S.: I still think that ideally the end-points would know about signs of imminent congestion/queueing, and that (some) network nodes would know about each packet's RTT (that would e.g. help in flow queuing AQM's to tailor the AQM signaling to the expected best-case dynamics of the response). Yet the former seems benign in that it offers no real novel way of attack, while the second feels like it could easily be abused somehow.
>> 
>> 
>>> 
>>> Regards,
>>> 
>>> Rüdiger
>>> 
>>> -----Original Message-----
>>> From: Sebastian Moeller <moeller0@gmx.de>
>>> Sent: Tuesday, 17 October 2023 10:40
>>> To: Geib, Rüdiger <Ruediger.Geib@telekom.de>
>>> Cc: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>;
>>> tsvwg@ietf.org
>>> Subject: Re: [tsvwg] New Version Notification for 
>>> draft-huang-tsvwg-transport-challenges-00.txt
>>> 
>>> Hi Ruediger, all,
>>> 
>>>> On Oct 17, 2023, at 10:30, <Ruediger.Geib@telekom.de> <Ruediger.Geib@telekom.de> wrote:
>>>> 
>>>> Hi Ingemar,
>>>> 
>>>> thanks - also Rachel mentioned conditions suddenly reducing available wireless bandwidth, which aren't caused by congestion.
>>> 
>>> 	[SM] Is this distinction really all that relevant? Congestion happens when the ingress traffic volume exceeds the egress traffic for "too long" (for varying definitions of too long). Whether this is caused by the ingress traffic exceeding a fixed egress rate (e.g., new flows traversing the wireless link), or by the egress rate dropping below a "fixed" ingress rate, the result is the same: more load than the link can handle. If the rate reductions (or load spikes) are very short, then it might make sense to add some emergency buffers to "iron out" these wrinkles (spikes/drops), but if they last long enough the endpoints should be told to slow down... Sure, the causality is slightly different, but the immediate remedy is independent of the root cause.
>>> 
>>> 
>>>> It's certainly worth investigating options to signal these. I personally would expect limits, however, unless the terminal movement and wireless coverage are known to the sender in detail for the "coming RTT" time intervals. A moving train passes the same locations more than once - there's "some" predictability. I can't judge whether that allows for signaled IP transport improvements. 
>>>> 
>>>> We may try to signal "changes" to optimize IP transport. Congestion was the starting point. Bit errors. Competing flows. Weather. Power. Speed. Direction. Physical Environment. Antenna density. No judgement about good/bad/reasonable - just a list, certainly incomplete.
>>> 
>>> 	[SM] I guess this is an argument for Rachel's proposal, if these pieces of information can help the endpoints to adapt their sending behaviour better there might be value in informing them, no?
>>> 
>>> 
>>>> 
>>>> ITU-T works on QoE by trying to figure out which "impairments" show the strongest correlation with application quality, and does so by varying single "impairments" during tests (in total, there are different impairments, e.g., bit errors, delay, missing application frames and so on). Would it make sense to figure out which single changes impair IP transport either strongly or very often, and then investigate options to signal these? If a mix of causes, all simultaneously impacting IP transport properties, has an impact, countermeasures will get more complex. Speed, antenna density, and the number of competing flows will impact delay, bandwidth, and bit errors simultaneously (please correct me if I'm wrong). Maybe it's better to start with an environment offering fewer variables.
>>> 
>>> 	[SM] Or start with a signaling system generic enough to allow for variable information?
>>> 
>>> Regards
>>> 	Sebastian
>>> 
>>> 
>>>> 
>>>> Regards,
>>>> 
>>>> Ruediger
>>>> 
>>>> 
>>>> -----Original Message-----
>>>> From: Ingemar Johansson S <ingemar.s.johansson@ericsson.com>
>>>> Sent: Tuesday, 17 October 2023 09:18
>>>> To: Huangyihong (Rachel)
>>>> <rachel.huang=40huawei.com@dmarc.ietf.org>;
>>>> Geib, Rüdiger <Ruediger.Geib@telekom.de>
>>>> Cc: tsvwg@ietf.org; Ingemar Johansson S 
>>>> <ingemar.s.johansson@ericsson.com>
>>>> Subject: RE: [tsvwg] fwd: New Version Notification for 
>>>> draft-huang-tsvwg-transport-challenges-00.txt
>>>> 
>>>> Hi Rachel and Ruediger + all
>>>> 
>>>> Pages 17-18 in the material below show the throughput and latency as one walks away from an indoor radio (a.k.a. Radio Dot). What can be seen is that with L4S it is possible to keep latency low until the point where one gets into Non-LoS (Line of Sight), as I moved into another corridor.
>>>> https://github.com/EricssonResearch/scream/blob/master/L4S-Results.pdf?raw=true
>>>> 
>>>> Under normal circumstances the modem would handover to another radio dot but we did not have any such in the test setup.
>>>> 
>>>> In general, when one begins to discuss delay jitter in 5G, it is important to try to figure out what the root cause is. There can be delay jitter for many different reasons. DRX and handover are the most common examples, and then there are retransmissions on the RLC or PDCP layer for various reasons, and buffer status handling... 
>>>> None of the above need to be related to congestion. 
>>>> 
>>>> /Ingemar
>>>> 
>>>>> -----Original Message-----
>>>>> From: tsvwg <tsvwg-bounces@ietf.org> On Behalf Of Huangyihong
>>>>> (Rachel)
>>>>> Sent: Monday, 16 October 2023 13:58
>>>>> To: Ruediger.Geib@telekom.de
>>>>> Cc: tsvwg@ietf.org
>>>>> Subject: Re: [tsvwg] fwd: New Version Notification for
>>>>> draft-huang- tsvwg-transport-challenges-00.txt
>>>>> 
>>>>> Hi Ruediger,
>>>>> 
>>>>> Thanks for the comments, please see inline.
>>>>> 
>>>>> BR,
>>>>> Rachel
>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: Ruediger.Geib@telekom.de <Ruediger.Geib@telekom.de>
>>>>>> Sent: 16 October 2023 15:33
>>>>>> To: Huangyihong (Rachel) <rachel.huang@huawei.com>
>>>>>> Cc: tsvwg@ietf.org
>>>>>> Subject: AW: [tsvwg] fwd: New Version Notification for 
>>>>>> draft-huang-tsvwg-transport-challenges-00.txt
>>>>>> 
>>>>>> Hi Rachel,
>>>>>> 
>>>>>> to me as a non-wireless expert, the draft assumes knowledge in 
>>>>>> some places that I don't have and doesn't seem to offer generally 
>>>>>> applicable
>>>>> references. In detail:
>>>>> 
>>>>> [Rachel]: Wireless is just one of the scenarios in our draft.
>>>>> 
>>>>>> 
>>>>>> - what are long and short RTTs? Ranges in [ms] may help.
>>>>>> - within how many [ms] do network conditions change in a wireless 
>>>>>> network posing the problems you describe?
>>>>> 
>>>>> [Rachel]: What problem are you referring to? This draft describes 
>>>>> several problems. But changing network conditions in wireless 
>>>>> networks are very common. Based on our experience (HMS services), 
>>>>> if the pdv/jitter is 50% of the RTT, the performance degrades heavily.
>>>>> 
>>>>>> - how prevalent are conditions characterized by fast-changing 
>>>>>> capacity in operational wireless networks? Some wireless 
>>>>>> networks, most wireless networks, or, taking a more user-centric 
>>>>>> view, affecting some users or the majority of users (sometimes or steadily)?
>>>>> 
>>>>> [Rachel]: Again, in the case of our HMS services (e.g., Huawei 
>>>>> Music), about 5% of all requests may experience this, i.e., 
>>>>> the flow completion time is far larger than normal, usually in 
>>>>> places where the wireless signal is not good enough, e.g., 
>>>>> underground parking garages, high-speed railways...
>>>>> 
>>>>>> 
>>>>>> Regards,
>>>>>> 
>>>>>> Ruediger
>>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: tsvwg <tsvwg-bounces@ietf.org> On Behalf Of Huangyihong
>>>>>> (Rachel)
>>>>>> Sent: Saturday, 14 October 2023 09:15
>>>>>> To: tsvwg <tsvwg@ietf.org>
>>>>>> Subject: [tsvwg] fwd: New Version Notification for 
>>>>>> draft-huang-tsvwg-transport-challenges-00.txt
>>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> I posted this draft a while ago. It's related to 
>>>>>> network2host and host2network, and why we need them. Basically, 
>>>>>> it lists the transport problems and challenges that we face now; 
>>>>>> we would like to discuss with people how to solve these problems 
>>>>>> and where the technical trend will go. The purpose of this 
>>>>>> draft is to motivate the discussion in this area.
>>>>>> 
>>>>>> Your review and comments are welcome.
>>>>>> 
>>>>>> BR,
>>>>>> Rachel
>>>>>> 
>>>>>> -----Original Message-----
>>>>>> From: internet-drafts@ietf.org <internet-drafts@ietf.org>
>>>>>> Sent: 12 September 2023 9:59
>>>>>> To: luohanlin (C) <luohanlin2@huawei.com>; Chenqichang (Qichang) 
>>>>>> <chenqichang1@huawei.com>; Huangyihong (Rachel) 
>>>>>> <rachel.huang@huawei.com>; renshoushou <renshoushou@huawei.com>
>>>>>> Subject: New Version Notification for
>>>>>> draft-huang-tsvwg-transport-challenges-00.txt
>>>>>> 
>>>>>> A new version of Internet-Draft
>>>>>> draft-huang-tsvwg-transport-challenges-00.txt
>>>>>> has been successfully submitted by Rachel Huang and posted to the 
>>>>>> IETF repository.
>>>>>> 
>>>>>> Name:     draft-huang-tsvwg-transport-challenges
>>>>>> Revision: 00
>>>>>> Title:    The Challenges that Current Service Transports are Facing
>>>>>> Date:     2023-09-12
>>>>>> Group:    Individual Submission
>>>>>> Pages:    9
>>>>>> URL:      https://www.ietf.org/archive/id/draft-huang-tsvwg-transport-challenges-00.txt
>>>>>> Status:   https://datatracker.ietf.org/doc/draft-huang-tsvwg-transport-challenges/
>>>>>> HTMLized: https://datatracker.ietf.org/doc/html/draft-huang-tsvwg-transport-challenges
>>>>>> 
>>>>>> 
>>>>>> Abstract:
>>>>>> 
>>>>>> This document discusses the challenges of improving transmission
>>>>>> quality when there is a lack of information exchange between
>>>>>> network and application, and then provides some basic requirements
>>>>>> that new synergy mechanisms should possess.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> The IETF Secretariat
>>>>>> 
>>>>>> 
>>>> 
>>> 
>> 
>