[core] draft-ietf-core-new-block: Congestion Control
supjps-ietf@jpshallow.com Fri, 23 October 2020 16:20 UTC
From: supjps-ietf@jpshallow.com
To: core@ietf.org
Date: Fri, 23 Oct 2020 17:19:59 +0100
Archived-At: <https://mailarchive.ietf.org/arch/msg/core/uoXMI8vcsGoto6fa1GpgftXS-cU>
Subject: [core] draft-ietf-core-new-block: Congestion Control
Hi all,

At the virtual meeting on 22nd Oct 2020, I raised the issue of how better to handle Congestion Control once MAX_PAYLOAD (default 10 blocks) has been hit [1] Slide 7.

The bottom-line issue is that there is a delay of ACK_TIMEOUT (a CC requirement) after every transmission of MAX_PAYLOAD blocks, which is fine. This delay can be reduced by using CON for the final packet of each MAX_PAYLOAD set and triggering the send of the next set of data when the ACK is received, but this is likely to create other timeout issues in a lossy environment (especially if the loss is uni-directional, as in DDoS pipe flooding). Use of NON will always have this ACK_TIMEOUT delay unless there is a way of stimulating a response from the peer (which may itself get lost) that can trigger the sending of the next MAX_PAYLOAD packets (or indicate that some of the previous packets are missing).

It was suggested that the No-Response option (RFC 7967) could be used. I mentioned that I had looked at it, but did not use it for reasons that I could not immediately recall. RFC 7967 is an Independent Submission rather than an IETF Stream document, and is Informational. No-Response is also defined for Requests, but not for Responses (needed for Quick-Block2), which is why I did not continue down that route - updating the RFC to include Responses could be an interesting challenge... I am not convinced that supporting Responses with this option would work with the general CoAP estate out there - especially as the RFC is titled 'No Server Response'.

Which brings me back to creating another option (not keen on this), or having a block layout definition for Quick-Block1/2 that differs from Block1/2. My suggestion is then that the block definition layout becomes NUM R M SZX instead of NUM M SZX, where the R bit, if set, requests the peer to send an appropriate response which, if received, can trigger the sending of the next MAX_PAYLOAD packets.
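To make the suggestion concrete, here is a purely illustrative sketch of the proposed NUM R M SZX layout. This layout is only a proposal from this mail, not part of any published draft; the function names are hypothetical. It follows the RFC 7959 convention that SZX occupies the low 3 bits and M the next bit, with the new R bit inserted above M and NUM filling the remaining high-order bits:

```python
# Hypothetical encoding of the proposed Quick-Block option layout
# NUM | R | M | SZX (R is the new "request a response" bit).
# SZX: 3 bits, M: 1 bit, R: 1 bit, NUM: remaining high-order bits.

def encode_qblock(num: int, r: int, m: int, szx: int) -> int:
    assert 0 <= szx <= 7 and r in (0, 1) and m in (0, 1) and num >= 0
    return (num << 5) | (r << 4) | (m << 3) | szx

def decode_qblock(value: int):
    # Returns (num, r, m, szx).
    return value >> 5, (value >> 4) & 1, (value >> 3) & 1, value & 0x7
```

The cost of this layout relative to Block1/2 is one bit taken from NUM, which halves the maximum block number for a given option length.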
[One cannot assume that MAX_PAYLOAD will be the same at both the client and the server.]

The Quick-Block1 solicited response would be a 2.31, or a 4.08 if some packets were missing. The Quick-Block2 solicited response for a NON sent from the server gets more interesting. It should be a NON and a request (unless we want to consider role reversal of client/server at this point). The best I can think of is that the client sends off a request for the next MAX_PAYLOADS (i.e. a GET request with MAX_PAYLOAD worth of Quick-Block2 options), which is messy.

However... It was raised in the meeting what "repeat request" means in 3.4 [2]. The text does need clarification, but the challenge here is relevant to the previous paragraph. The scenario is that a GET with Quick-Block2 where block.num is 0 can mean one of two things. It can be a request saying "send me all of the data for this resource" (i.e. all of the blocks), or it can be a request for a single block that did not arrive (but block.num = 1 etc. did arrive). The M bit is used to differentiate whether the server responds with a single block or a set of blocks.

In terms of the solicited response above, the GET request could just contain the next Quick-Block2 needed, and the use of the M bit indicates whether it is asking for an individual block or requesting the next set of blocks. The current definition in 3.4 [2] of the use of the M bit in Quick-Block2 requests may need to be inverted - i.e. unset if a single block is requested, and set if this block and the following blocks are requested. [This then does allow Quick-Block2 to request individual blocks at offset X, as allowed by Block2 - I expressed in the meeting that this might be difficult.]

I do not know why the Block1/2 option is up to 3 bytes long - this means that block.num can have a maximum value of 2^20-1, which supports a maximum transfer size of ~1GB if the individual block size is 1024. Networks with smaller PDUs will support less because of the reduced block size.
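The size limit above follows from simple arithmetic: an option value of N bytes has 8N bits, 4 of which go to M and SZX (per RFC 7959), leaving the rest for NUM. A quick sketch (the helper name is mine, for illustration only):

```python
# Maximum transfer size for a Block-style option whose value occupies
# `option_len_bytes`, with 4 low-order bits reserved for M + SZX as in
# RFC 7959 and the remainder used for NUM.
def max_transfer(option_len_bytes: int, szx: int = 6) -> int:
    num_bits = option_len_bytes * 8 - 4   # bits available for NUM
    block_size = 2 ** (szx + 4)           # SZX 6 -> 1024-byte blocks
    # NUM ranges 0 .. 2^num_bits - 1, i.e. 2^num_bits blocks in total.
    return (2 ** num_bits) * block_size

print(max_transfer(3))  # 3-byte option: 2^20 blocks * 1024 = 1 GiB
print(max_transfer(4))  # 4-byte option: 2^28 blocks * 1024 = 256 GiB
```

Note that stealing one bit from NUM for the proposed R bit would halve each of these figures.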
Is there any reason why this could not be extended to 4 bytes (or completely relaxed to 8 bytes) to support larger transfers?

I am open to other suggestions as to how to move forward with potentially reducing the ACK_TIMEOUT CC turnaround requirement if the network can cope.

Regards

Jon

[1] https://www.ietf.org/proceedings/interim-2020-core-11/slides/slides-interim-2020-core-11-sessa-coap-block-wise-transfer-options-for-faster-transmission-draft-ietf-core-new-block-01-00
[2] https://datatracker.ietf.org/doc/html/draft-ietf-core-new-block-01