Re: [tcpm] Fwd: comment about draft-nishida-tcpm-maxwin-02

Yoshifumi Nishida <nishida@sfc.wide.ad.jp> Fri, 20 January 2017 19:32 UTC

From: Yoshifumi Nishida <nishida@sfc.wide.ad.jp>
Date: Fri, 20 Jan 2017 11:32:33 -0800
Message-ID: <CAO249yfw_Bs4-d2muYOhhqv1ZS+5RmK5tUXNa6csUmXQ8mW35Q@mail.gmail.com>
To: marcelo bagnulo braun <marcelo@it.uc3m.es>
Cc: "tcpm@ietf.org" <tcpm@ietf.org>
Subject: Re: [tcpm] Fwd: comment about draft-nishida-tcpm-maxwin-02
List-Id: TCP Maintenance and Minor Extensions Working Group <tcpm.ietf.org>

Hi Marcelo,

Thanks for the comments.

On Thu, Jan 19, 2017 at 11:27 PM, marcelo bagnulo braun <marcelo@it.uc3m.es>
wrote:

> below...
>
> On 19/01/17 at 22:01, Yoshifumi Nishida wrote:
>
>> Hi Marcelo,
>>
>> Thanks for reading the draft! Sorry, I forgot to CC tcpm, so I resent it.
>>
>> On Wed, Jan 18, 2017 at 11:30 PM, marcelo bagnulo braun <
>> marcelo@it.uc3m.es <mailto:marcelo@it.uc3m.es>> wrote:
>>
>>     Hi,
>>
>>     Thanks for writing this draft, seems useful to me.
>>
>>     A couple of minor comments:
>>
>>     - I understand that this extension MUST be used with PAWS,
>>     correct? I mean, using a larger RCVWND without PAWS is likely to
>>     make it impossible to distinguish old duplicate packets from new
>>     ones, correct? Should the requirement that this be used in
>>     conjunction with PAWS be explicitly stated?
>>
>>
>> That's right. We presume the use of PAWS here. We can add some text.
>>
>
> I understand it is not safe to use the increased window without PAWS, so
> I would think the draft should state that PAWS MUST be used with the
> increased window.


Yes, we'll state this point.

>
>>     - If an endpoint that supports this extension communicates with
>>     endpoints that do not, it will end up allocating double the
>>     buffer size it should for every connection with a legacy
>>     endpoint. I agree that is not fatal, but wouldn't this be a
>>     problem in terms of resources? Would this excess resource usage
>>     make adoption of this extension unattractive? Maybe it would be
>>     better to use the 15 shift count only when the other endpoint
>>     also replies with a 15 shift count, or something like this?
>>
>>
>> Yes, we have thought about that before. One potential issue with the
>> approach is that it would require the other endpoint to use a shift
>> count of 15, even though it might want to use a smaller one. The WS
>> option exchange is a one-way notification, so I just didn't want to
>> use it as a negotiation mechanism to activate the feature.
>>
>
> Right, imposing the use of a 15 shift count on a connection endpoint that
> doesn't need it is also a waste of resources, so depending on which
> endpoint is more constrained in resources this may or may not be a good
> approach (and we don't know which one is the bottleneck in advance).
>
> Moreover, I would think that supporting very different buffer sizes at
> the endpoints of a connection is very important. I mean, in a connection
> where a server provides content to a client, the client needs the big
> receive buffer and the server doesn't, and it is the server that has the
> greater demand if it is serving a large number of clients.


Yes, I agree.

>
>
>> On the other hand, we presumed that not many TCP connections would want
>> to use this feature, say at most 10 connections at a time.
>>
>
> I am uncertain about this argument. I would assume that bandwidth will
> increase (for links and for connections) and hopefully latency will
> decrease in the future (eliminating bufferbloat, and with mechanisms like
> L4S), so I think we should design this to be widely used, don't you think?


I actually didn't think about this point very much.
But if we can have a nice design that supports wider use, then yes, of
course that's better.

>
>> Also, adding some negotiation mechanism to activate the feature could
>> be another way. (The 01 version described such a mechanism, which was
>> deleted in the 02 version.) We compared these points and chose the
>> current mechanism.
>>
>
> Right, I just checked this. I am not sure it is worth consuming a new
> option codepoint for this, and especially not using the new option in the
> SYN packet, which is already crowded.
>

Yep. That's why we didn't include it in the current version.


> What about this:
> The shift count is expressed in a byte, but only the values between 0 and
> 14 are acceptable as per RFC 7323.
> With this draft, the 15 shift count value would become acceptable, but it
> seems we will never need more than 16 values for the shift count (unless
> we increase the TCP sequence number space).
> So, this means we have 4 bits to play with in the shift count field.
> One way of using them would be the following:
> We divide the shift count field into two fields of 4 bits. The first 4
> bits are used to express the shift count for legacy (RFC 7323) endpoints.
> The last 4 bits are used to express the shift count for endpoints that
> support this specification.
>
> So, this would result in the following combinations:
> - updated client talks to updated server: it expresses its own shift
> count using the last 4 bits; the server replies expressing its own shift
> count in the last 4 bits. Both endpoints understand that they are talking
> to updated endpoints, so they use the new values.
> - updated client talks to legacy server: it expresses its own shift count
> using the last 4 bits. The legacy server receives the option, sees that
> the value is larger than 14, and then uses 14 as the shift count for this
> client. The server replies with its own shift count encoded in the first
> 4 bits. The client understands that the server is legacy and uses a shift
> count of 14 itself.
> - legacy client talking to a legacy or updated server: behaves as
> described in RFC 7323.
>
> If we do this, then when both endpoints are updated, one of the endpoints
> can use a smaller shift count even if the other endpoint uses a 15 shift
> count. This doesn't help in the situation where the server is legacy, but
> at least in the long run, if this extension is widely deployed, the
> behaviour is the desired one.
>
> What do you think?


I've thought about it, and I think it's a pretty good idea. It's more
explicit than the current scheme, but doesn't consume extra option space.
In this scheme, when an updated endpoint talks to a non-updated endpoint,
the shift count will automatically be 14. But I guess this cannot be
solved without adding an extra negotiation scheme, so I think it's
acceptable.
One question I have is whether we need to use all 4 bits for this. It
might be OK to use just one bit of the last 4 bits to indicate that the
sender supports the new version. But this might be a very minor point.
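To make the proposed encoding concrete, here is a minimal sketch in Python. This is not draft text: the nibble layout (updated shift count in the high-order four bits, so that a legacy receiver reading the whole byte sees a value above 14 and clamps it to 14 per RFC 7323) is one possible reading of the proposal above, and the function names are invented for illustration.

```python
LEGACY_MAX_SHIFT = 14   # RFC 7323 limit on the shift count
NEW_MAX_SHIFT = 15      # extended limit discussed in the draft

def encode_shift(updated_shift: int) -> int:
    """Build the WS option byte sent by an updated endpoint.

    Placing the shift count in the high nibble guarantees the whole
    byte is > 14, so a legacy peer clamps it to 14 per RFC 7323.
    (A shift count of 0 would be sent in plain legacy form instead.)
    """
    assert 1 <= updated_shift <= NEW_MAX_SHIFT
    return updated_shift << 4

def decode_shift(byte: int) -> tuple[int, bool]:
    """Return (effective shift count, peer_is_updated)."""
    if byte <= LEGACY_MAX_SHIFT:
        # Plain RFC 7323 value: the peer is a legacy endpoint.
        return byte, False
    # Updated peer: its real shift count is in the high nibble.
    return byte >> 4, True
```

Under this reading, the "updated talks to legacy" case falls out automatically: the legacy side clamps the byte to 14, and the updated side recognizes a reply of 14 or less as coming from a legacy peer.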


>>     - The second paragraph in section 4 talks about "performance
>>     degradation caused by misinterpretation of the shift count". I
>>     don't understand what you are referring to; can you clarify?
>>
>>
>> When an endpoint A offers shift count 15 but a conventional endpoint B
>> regards it as 14, we could have a performance degradation issue,
>> because when A says "I have an X<<15 buffer", B will think "A has an
>> X<<14 buffer".
>>
>> But when X=65535, we shouldn't see a performance degradation issue,
>> because 65535<<14 is the maximum window size that B can support.
>> So, if A allocates a 65535<<15 + 65535<<14 buffer, A can always
>> advertise X=65535, because B's window is at most 65535<<14 and it
>> cannot consume more of A's buffer than that.
>>
>>
> So, there is no performance degradation issue :-)
> I guess we agree; I just find it confusing that the draft talks about a
> performance degradation issue where there is not one.
>

Got it. I might think about finding less confusing text.
Or, with your proposed scheme, we wouldn't need this part.
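The buffer arithmetic quoted above can be checked directly. This is just a sanity check of the numbers in the thread, not code from the draft; the variable names are invented for illustration.

```python
# A advertises a window of X << 15; a legacy peer B misreads it as X << 14.
X = 65535                  # maximum 16-bit window field value

legacy_max = X << 14       # largest window a legacy endpoint (B) can use
new_max = X << 15          # largest window with a shift count of 15

# B can never have more than X << 14 bytes in flight toward A, so if A
# provisions new_max + legacy_max bytes of buffer, it can always
# advertise X = 65535 without risk of overflow.
safe_buffer = new_max + legacy_max

print(legacy_max, new_max, safe_buffer)
# prints 1073725440 2147450880 3221176320
```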

Thanks,
--
Yoshi