Re: [tcpm] Fwd: comment about draft-nishida-tcpm-maxwin-02

marcelo bagnulo braun <marcelo@it.uc3m.es> Fri, 20 January 2017 07:27 UTC

To: Yoshifumi Nishida <nishida@sfc.wide.ad.jp>, "tcpm@ietf.org" <tcpm@ietf.org>
References: <7df22790-f177-e61f-47a7-10c246f52535@it.uc3m.es> <CAO249yeWXDbpzifzr=7XinO7BYctkVG_F9MEK7SwsUKq_rNw3Q@mail.gmail.com> <CAO249ydbdho0_SfDApjWp7g+AFgSW7gnXAxZS9=tCC8e=m6G-Q@mail.gmail.com>
From: marcelo bagnulo braun <marcelo@it.uc3m.es>
Message-ID: <44c23acf-a271-71e4-3e8d-f1169f2a2f6f@it.uc3m.es>
Date: Fri, 20 Jan 2017 08:27:19 +0100
In-Reply-To: <CAO249ydbdho0_SfDApjWp7g+AFgSW7gnXAxZS9=tCC8e=m6G-Q@mail.gmail.com>
Archived-At: <https://mailarchive.ietf.org/arch/msg/tcpm/6AYXejwBhKbFWbHdflZPfDEHQxI>
Subject: Re: [tcpm] Fwd: comment about draft-nishida-tcpm-maxwin-02

below...

On 19/01/17 at 22:01, Yoshifumi Nishida wrote:
> Hi Marcelo,
>
> Thanks for reading the draft! Sorry, I forgot to CC tcpm, so I resent it.
>
> On Wed, Jan 18, 2017 at 11:30 PM, marcelo bagnulo braun 
> <marcelo@it.uc3m.es <mailto:marcelo@it.uc3m.es>> wrote:
>
>     Hi,
>
>     Thanks for writing this draft, seems useful to me.
>
>     A couple of minor comments:
>
>     - I understand that this extension MUST be used with PAWS,
>     correct? I mean, using a larger RCVWND without PAWS is likely to
>     result in an inability to distinguish old duplicate packets from
>     new ones, correct? Should the requirement that this be used in
>     conjunction with PAWS be explicitly stated?
>
>
> That's right. We presume the use of PAWS here. We can add some text.

I understand it is not safe to use the increased window without PAWS, 
so I would think the draft should state that PAWS MUST be used with the 
increased window.
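
Just to illustrate why I think so, here is my own back-of-the-envelope 
arithmetic (the numbers are mine, not from the draft): with a 15 shift 
count the maximum window covers roughly half of the 32-bit sequence 
space, so an old duplicate segment is much more likely to fall inside 
the current window unless PAWS timestamps are there to reject it.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t seq_space = 1ULL << 32;     /* 32-bit TCP sequence space    */
    uint64_t win14 = 65535ULL << 14;     /* maximum window per RFC 7323  */
    uint64_t win15 = 65535ULL << 15;     /* maximum window with shift 15 */

    printf("shift 14: %llu bytes (%.1f%% of the sequence space)\n",
           (unsigned long long)win14, 100.0 * win14 / seq_space);
    printf("shift 15: %llu bytes (%.1f%% of the sequence space)\n",
           (unsigned long long)win15, 100.0 * win15 / seq_space);
    /* With the receive window spanning about half of the sequence space,
       an old duplicate is far more likely to land inside the current
       window, which is why PAWS is needed to reject it. */
    return 0;
}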

>
>     - If an endpoint that supports this extension communicates with
>     endpoints that do not, the supporting endpoint will end up
>     allocating double the buffer size it should for every connection
>     with a legacy endpoint. I agree that is not fatal, but wouldn't
>     this be a problem in terms of resources? Would this excess demand
>     on resources make adoption of this extension unattractive? Maybe
>     it would be better to only use the 15 shift count when the other
>     endpoint also replies with a 15 shift count, or something like
>     this?
>
>
> Yes, we have thought about it before. One potential issue with that 
> approach is that it would require the other endpoint to use a 15 shift 
> count, although it might want to use a smaller one. The WS option 
> exchange is a one-way notification, so I didn't want to use it as a 
> negotiation mechanism to activate the feature.

Right, imposing the use of a 15 shift count on a connection endpoint 
that doesn't need it is also a waste of resources, so depending on which 
endpoint is more constrained in resources this may or may not be a good 
approach (and we don't know in advance which one is the bottleneck).

Moreover, I would think that supporting very different buffer sizes at 
the two endpoints of a connection is very important. In a connection 
where a server provides content to a client, the client needs the big 
receive buffer and the server doesn't, and it is the server that has the 
greater resource demand if it is serving a large number of clients.

> On the other hand, we presumed that not many TCP connections will want 
> to use this feature, perhaps at most 10 connections at a time.

I am uncertain about this argument. I would assume that bandwidth will 
increase (for links and for connections) and hopefully latency will 
decrease in the future (by eliminating bufferbloat and with mechanisms 
like L4S), so I think we should design this to be widely used, don't you 
think?

> Also, adding a negotiation mechanism to activate the feature could be 
> another way. (The 01 version described such a mechanism, which was 
> deleted in the 02 version.) We've compared these points and have chosen 
> the current mechanism.

Right, I just checked this. I am not sure it is worth consuming a new 
option codepoint for this, and especially not using the new option in 
the SYN packet, which is already crowded.

What about this:
The shift count is expressed in a byte, but only the values between 0 
and 14 are acceptable as per RFC 7323.
With this draft, the 15 shift count value would become acceptable, but 
it seems we will never need more than 16 values for the shift count 
(unless we increase the TCP sequence number space).
So, this means that we have 4 bits to play with in the shift count field.
One way of using them would be the following:
We divide the shift count field into two fields of 4 bits. The first 4 
bits are used to express the shift count for legacy (RFC 7323) endpoints.
The last 4 bits are used to express the shift count for endpoints that 
support this specification.

So, this would result in the following combinations:
- An updated client talks to an updated server: it expresses its own 
shift count using the last 4 bits, and the server replies expressing its 
own shift count in the last 4 bits. Both endpoints understand that they 
are talking to updated endpoints, so they use the new values.
- An updated client talks to a legacy server: it expresses its own shift 
count using the last 4 bits. The legacy server receives the option, sees 
that the value is larger than 14, and therefore uses 14 as the shift 
count for this client. The server replies with its own shift count 
encoded in the first 4 bits. The client understands that the server is 
legacy and uses a 14 shift count for its own shift count.
- A legacy client talking to a legacy or updated server behaves as 
described in RFC 7323.

If we do this, then when both endpoints are updated, one of the 
endpoints can use a smaller shift count even if the other endpoint uses 
a 15 shift count. This doesn't help in the situation where the server is 
legacy, but at least in the long run, if this extension is widely 
deployed, the behaviour is the desired one.
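
To make the idea a bit more concrete, here is a rough C sketch of one 
possible reading of the encoding (the nibble layout and the helper names 
are purely my illustration, nothing from the draft): a plain RFC 7323 
endpoint keeps sending a single value between 0 and 14, an updated 
endpoint packs a value larger than 14 into the byte, and a receiver 
treats anything above 14 as coming from an updated peer whose real shift 
count is in the low-order 4 bits.

#include <stdio.h>
#include <stdint.h>

/* Rough sketch only: byte = (legacy_nibble << 4) | new_shift.  A legacy
 * receiver sees a value > 14 and simply clamps it to 14 per RFC 7323;
 * an updated receiver extracts the low-order nibble instead. */

struct ws_decoded {
    int peer_is_updated;   /* 1 if the peer used the new encoding      */
    uint8_t shift;         /* effective shift count for this direction */
};

static uint8_t encode_ws(uint8_t legacy_shift, uint8_t new_shift)
{
    /* legacy_shift: what we are content to let an RFC 7323 peer assume
     * (it will clamp anything above 14 down to 14 anyway); keeping it
     * at 1 or more keeps the whole byte above 14, which marks us as
     * updated.  new_shift: what an updated peer should actually use. */
    return (uint8_t)((legacy_shift << 4) | (new_shift & 0x0f));
}

static struct ws_decoded decode_ws(uint8_t byte)
{
    struct ws_decoded d;

    if (byte <= 14) {
        /* Legacy peer: the byte is the plain RFC 7323 shift count. */
        d.peer_is_updated = 0;
        d.shift = byte;
    } else {
        /* Updated peer: its real shift count is in the low nibble
         * and may now be as large as 15. */
        d.peer_is_updated = 1;
        d.shift = byte & 0x0f;
    }
    return d;
}

int main(void)
{
    /* Updated endpoint asking for shift 15; legacy peers will use 14. */
    uint8_t wire = encode_ws(14, 15);
    struct ws_decoded d = decode_ws(wire);
    printf("wire=0x%02x updated=%d shift=%u\n",
           (unsigned)wire, d.peer_is_updated, (unsigned)d.shift);

    /* A plain RFC 7323 peer that sent shift count 7. */
    d = decode_ws(7);
    printf("legacy peer: updated=%d shift=%u\n",
           d.peer_is_updated, (unsigned)d.shift);
    return 0;
}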

What do you think?


>     - The second paragraph in Section 4 talks about "performance
>     degradation caused by misinterpretation of the shift count". I
>     don't understand what you are referring to, can you clarify?
>
>
> When an endpoint A offers shift count 15 but a conventional endpoint B 
> regards it as 14, we could have a performance degradation issue, 
> because when A says "I have an X<<15 buffer", B will think "A has an 
> X<<14 buffer".
>
> But when X=65535, we shouldn't see a performance degradation issue, 
> because 65535<<14 is the maximum window size that B can support.
> So, if A allocates a 65535<<15 + 65535<<14 buffer, A can always send 
> X=65535, because B's window is at most 65535<<14 and B cannot consume 
> more of A's buffer than that.
>

So, there is no performance degradation issue :-)
I guess we agree; I just find it confusing when the draft talks about a 
performance degradation issue where there is not one.
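
Just to write down the arithmetic we both have in mind (my own 
illustration, not text from the draft):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t x = 65535;             /* maximum 16-bit window field value   */
    uint64_t meant = x << 15;       /* what A means when offering shift 15 */
    uint64_t understood = x << 14;  /* what a legacy B understands,
                                       since it clamps the shift to 14     */

    printf("A means       %llu bytes\n", (unsigned long long)meant);
    printf("B understands %llu bytes\n", (unsigned long long)understood);
    /* 65535 << 14 is also the largest window a legacy peer can use at all,
       so when X = 65535 the misinterpretation costs nothing compared to a
       connection without this extension. */
    return 0;
}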

Regards, marcelo


> Thanks,
> --
> Yoshi
>